CN114911590A - Task scheduling method and device, computer equipment and readable storage medium

Info

Publication number
CN114911590A
Authority
CN
China
Prior art keywords
component
service
task
node
pool
Prior art date
Legal status
Pending
Application number
CN202210369286.5A
Other languages
Chinese (zh)
Inventor
杨真
李航
陈杨
吕素珍
Current Assignee
Ping An Asset Management Co Ltd
Original Assignee
Ping An Asset Management Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Asset Management Co Ltd filed Critical Ping An Asset Management Co Ltd
Priority claimed from CN202210369286.5A
Publication of CN114911590A

Classifications

    • G06F9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F9/5016: Allocation of resources to service a request, the resource being the memory
    • G06F9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F2209/5011: Pool (indexing scheme relating to G06F9/50)

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to the field of computer operation and maintenance, and discloses a task scheduling method and device, a computer device and a readable storage medium. The method comprises the following steps: receiving a task request sent by a user side, acquiring the service components corresponding to the task request from a preset component pool, and setting the service components as target components; receiving arrangement information sent by the user side, wherein the arrangement information is a directed graph with at least one unexecuted node used for representing the target components, and the directed graph is used for reflecting the dependency relationship among the target components; and sequentially executing the target components corresponding to the unexecuted nodes according to the directed graph to obtain a task result, and sending the task result to the user side. The invention realizes selection of the target components and personalized customization of the dependency relationship among the target components, meets the calling and dependency requirements of different users on the service components, satisfies the diversified requirements of the user side, and broadens the application range.

Description

Task scheduling method and device, computer equipment and readable storage medium
Technical Field
The invention relates to the technical field of computer operation and maintenance, in particular to a task scheduling method and device, computer equipment and a readable storage medium.
Background
In the field of data analysis and processing, the processing of data typically requires a breakdown into multiple tasks, which are then managed by specialized systems for task orchestration and scheduling.
However, the inventor finds that current task orchestration and scheduling can only call the service components of a data analysis system according to call logic provided in advance by that system. The user side therefore cannot acquire and orchestrate service components according to its own requirements to obtain the task results it needs, so current data analysis systems cannot meet the diversified requirements of the user side.
Disclosure of Invention
The invention aims to provide a task scheduling method and device, a computer device and a readable storage medium, which solve the prior-art problem that a user side cannot acquire and orchestrate service components according to its own requirements to obtain the task results it needs, so that the current data analysis system cannot meet the diversified requirements of the user side.
In order to achieve the above object, the present invention provides a task scheduling method, including:
receiving a task request sent by a user side, acquiring a service component corresponding to the task request from a preset component pool, and setting the service component as a target component, wherein at least one service component is stored in the component pool;
receiving the arrangement information sent by the user side, wherein the arrangement information is a directed graph with at least one unexecuted node used for representing the target components, and the directed graph is used for reflecting the dependency relationship between the target components;
and sequentially executing a plurality of target components according to the directed graph to obtain a task result, and sending the task result to the user side.
In the above solution, before receiving the task request sent by the user side, the method further includes:
the method comprises the steps of constructing a component pool, constructing service components in the component pool, and constructing a scheduling system for triggering the service components to run, wherein the service components are used for running specified service tasks, and the component pool is used for storing the service components.
In the above solution, the building of the component pool, the building of the service component in the component pool, and the building of the scheduling system for triggering the service component to run include:
building a component container, configuring computing resources in the component container to convert the component container into the component pool;
receiving service codes and service parameters sent by a development end, and storing the service codes into the component pool to serve as service components in the component pool, wherein the service codes are computer codes used for running the service tasks;
and constructing a scheduling system, taking the storage position of the service component in the component pool as an environment variable of the scheduling system, enabling the scheduling system to trigger the service component to run through the environment variable, and inputting the service parameter into the scheduling system to serve as a trigger strategy for triggering the service component by the scheduling system.
In the above solution, after the building of the component pool, the building of the service component in the component pool, and the building of the scheduling system for triggering the service component to run, the method further includes:
receiving a newly added request sent by an initiating terminal, constructing a service component in the component pool according to service task information in the newly added request, and inputting configuration parameters in the newly added request task into the scheduling system, so that the scheduling system controls the service component corresponding to the newly added request according to the configuration parameters.
In the above scheme, the constructing a service component in the component pool according to the service task information in the new request, and entering the configuration parameter in the new request task into the scheduling system includes:
receiving a new request sent by an initiating terminal, extracting a service code in the new request, storing the service code into the component pool, and converting the service code into a service component of the component pool, wherein the service code is a computer code for operating the service task;
taking the storage position of the service component in the component pool as an environment variable of the scheduling system, so that the scheduling system can trigger the service component to run through the environment variable;
and extracting the configuration parameters in the newly added request, and inputting the configuration parameters into the dispatching system, so that the dispatching system can control the operation of the service assembly according to the configuration parameters.
In the above solution, the receiving a task request sent by a user side, and acquiring a service component corresponding to the task request from a preset component pool includes:
sending component visualization information with at least one service name to a user side;
identifying an operation event of the user side on the component visualization information, and generating information to be selected with at least the service name according to the operation event;
receiving the task request generated by the user side according to the information to be selected, wherein the task request is recorded with a service name in the information to be selected;
and acquiring the service components corresponding to the service names in the information to be selected from the component pool.
In the foregoing solution, the sequentially executing a plurality of target components according to the directed graph to obtain a task result includes:
obtaining at least one unexecuted node at the head of the directed graph, operating a target component corresponding to the unexecuted node to obtain an operation result, converting the target component corresponding to the unexecuted node into a legacy component according to the operation result, and converting the unexecuted node corresponding to the legacy component into an executed node;
setting a directed graph converted from an unexecuted node of a legacy component into an executed node as an updated graph, and sending the updated graph to the user side;
setting an operation result generated by the legacy component as a legacy result, identifying at least one unexecuted node which depends on the executed node in the directed graph, running a target component corresponding to the unexecuted node which depends on the executed node to obtain an operation result which depends on the legacy result, converting the unexecuted node which depends on the executed node into an executed node, and converting the target component corresponding to the unexecuted node which depends on the executed node into a legacy component until at least one unexecuted node which is positioned at the tail position in the directed graph is converted into an executed node, and setting an operation result generated by the legacy component corresponding to the executed node at the tail position as a task result;
In the above solution, after sequentially executing the plurality of target components according to the directed graph to obtain the task result, the method further comprises:
and uploading the task result to a block chain.
In order to achieve the above object, the present invention further provides a task scheduling apparatus, including:
the device comprises a component identification module, an arrangement recording module and a task execution module, wherein the component identification module is used for receiving a task request sent by a user side, acquiring a service component corresponding to the task request from a preset component pool, and setting the service component as a target component, wherein at least one service component is stored in the component pool;
the arrangement recording module is used for receiving arrangement information sent by the user side, wherein the arrangement information is a directed graph with at least one unexecuted node used for representing the target components, and the directed graph is used for reflecting the dependency relationship between the target components;
and the task execution module is used for sequentially executing the target components according to the directed graph to obtain task results and sending the task results to the user side.
In order to achieve the above object, the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor of the computer device implements the steps of the task scheduling method when executing the computer program.
In order to achieve the above object, the present invention further provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of the task scheduling method.
According to the task scheduling method and device, the computer equipment and the readable storage medium, receiving the arrangement information sent by the user side realizes arrangement of the target components according to the intention of the user side, so that the arranged target components can subsequently be run according to the intention of the user and finally yield the operation result the user requires. By sequentially executing the target components corresponding to the unexecuted nodes according to the dependency relationship among the target components in the directed graph, an operation result conforming to the intention of the user is obtained; the selection of the target components and the personalized customization of the dependency relationship among them are realized, the calling and dependency requirements of different users on service components are met, the whole task required by the user side is completed and its corresponding operation result is obtained, the diversified requirements of the user side are satisfied, and the application range is broadened.
Drawings
FIG. 1 is a flowchart of a task scheduling method according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of an application environment of a task scheduling method according to a second embodiment of the present invention;
FIG. 3 is a flowchart of a specific method of a task scheduling method according to a second embodiment of the present invention;
FIG. 4 is a schematic diagram of program modules of a task scheduling apparatus according to a third embodiment of the present invention;
FIG. 5 is a schematic diagram of a hardware structure of a computer device according to a fourth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides a task scheduling method and device, a computer device and a readable storage medium, which are applicable to the technical field of computer operation and maintenance and provide task scheduling based on a component identification module, an arrangement recording module and a task execution module. The method comprises the steps of receiving a task request sent by a user side, acquiring the service components corresponding to the task request from a preset component pool, and setting the service components as target components; receiving the arrangement information sent by the user side; and sequentially executing a plurality of target components according to the directed graph to obtain a task result, and sending the task result to the user side.
Example one:
referring to fig. 1, a task scheduling method of the present embodiment includes:
s103: receiving a task request sent by a user side, acquiring a service component corresponding to the task request from a preset component pool, and setting the service component as a target component, wherein at least one service component is stored in the component pool;
s104: receiving arrangement information sent by the user side, wherein the arrangement information is a directed graph with at least one unexecuted node used for representing the target components, and the directed graph is used for reflecting the dependency relationship between the target components;
s105: and sequentially executing a plurality of target assemblies according to the directed graph to obtain a task result, and sending the task result to the user side.
In an exemplary embodiment, the task request carries at least one service name, where a service name is a general description of the service task of a service component; the service component corresponding to the service name in the task request is obtained from the component pool and set as the target component.
The arrangement information sent by the user side is received, that is, a directed graph having at least one unexecuted node. In this embodiment, the directed graph is a directed acyclic graph and is used to represent the dependency relationship between the target components that need to be called for the user side to complete its whole task, where the directed acyclic graph has a start node and an end node representing the start point and the end point of execution of the whole task.
The plurality of target components are executed sequentially according to the dependency relationship among the target components in the directed graph to obtain an operation result conforming to the intention of the user, which realizes the selection of the target components and the personalized customization of the dependency relationship among them, meets the calling and dependency requirements of different users on service components, completes the whole task required by the user side, and obtains the operation result corresponding to the whole task.
In summary, the user side can obtain the required service components by sending the task request, arrange the service components by sending the arrangement information to construct the dependency relationship among the service components, and execute the corresponding service components according to the directed graph in the arrangement information to finally obtain the task result required by the user side, so that the diversified requirements of the user side are met, and the application range is expanded.
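As a concrete illustration of this flow, the following minimal Python sketch (not taken from the patent; the component pool contents, the run_tasks helper and the sample service names are assumptions made for illustration) keeps service components in a pool keyed by service name and runs the user-selected target components in the dependency order defined by a directed acyclic graph.

```python
from graphlib import TopologicalSorter

# Hypothetical component pool: service name -> callable service component.
component_pool = {
    "clean":  lambda upstream: {"rows": 100},
    "enrich": lambda upstream: {"rows": upstream["clean"]["rows"], "cols": 12},
    "report": lambda upstream: f"report over {upstream['enrich']['rows']} rows",
}

def run_tasks(requested, dependencies):
    """requested: service names chosen by the user side.
    dependencies: {node: set of nodes it depends on}, i.e. the directed graph."""
    targets = {name: component_pool[name] for name in requested}    # target components
    order = TopologicalSorter(dependencies).static_order()          # dependency-respecting order
    results = {}
    for node in order:
        results[node] = targets[node](results)                      # run with upstream results
    return results

# The user side requests three components and arranges them as clean -> enrich -> report.
print(run_tasks(["clean", "enrich", "report"],
                {"clean": set(), "enrich": {"clean"}, "report": {"enrich"}}))
```

A production scheduler would of course run the components in separate processes and track executed and unexecuted node state, as the second embodiment describes.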
Example two:
the embodiment is a specific application scenario of the first embodiment, and the method provided by the present invention can be more clearly and specifically explained through the embodiment.
Next, the method provided in this embodiment is specifically described by taking as an example a server running the task scheduling method, which acquires the target components corresponding to a task request from the component pool and sequentially executes the target components corresponding to the unexecuted nodes according to the directed graph to obtain a task result. It should be noted that this embodiment is only exemplary and does not limit the protection scope of the embodiments of the present invention.
Fig. 2 schematically illustrates the application environment of the task scheduling method according to the second embodiment of the present application.
In an exemplary embodiment, the server 2 where the task scheduling method is located is connected to the development end 3 and the user end 4 through a network; the server 2 may provide services through one or more networks, which may include various network devices, such as routers, switches, multiplexers, hubs, modems, bridges, repeaters, firewalls, proxy devices, and/or the like. The network may include physical links, such as coaxial cable links, twisted pair cable links, fiber optic links, combinations thereof, and/or the like. The network may include wireless links, such as cellular links, satellite links, Wi-Fi links, and/or the like; the development end 3 and the user end 4 can be computer equipment such as a smart phone, a tablet computer, a notebook computer and a desktop computer respectively.
Fig. 3 is a flowchart of a specific method of a task scheduling method according to an embodiment of the present invention, where the method specifically includes steps S201 to S205.
S201: the method comprises the steps of constructing a component pool, constructing service components in the component pool, and constructing a scheduling system for triggering the service components to run, wherein the service components are used for running specified service tasks, and the component pool is used for storing the service components.
In order to ensure that a user side can directly call a required service component and provide convenience for a user to realize a service task, the method provides the service component which can be directly called for the user by constructing a component pool and constructing the service component in the component pool, is used for realizing the service task for the user, and simultaneously provides a function of arranging the logic sequencing of a plurality of service components for the user by constructing a scheduling system for triggering the service component to run, thereby realizing the complex workflow of a plurality of service tasks.
In a preferred embodiment, the building a component pool, building service components in the component pool, and building a scheduling system for triggering the service components to run includes:
s11: building a component container, and configuring computing resources in the component container to convert the component container into the component pool.
In this step, the computing resources generally refer to CPU resources, memory resources, hard disk resources, and network resources required for the running of the computer program.
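A minimal sketch of step S11, under the assumption that the component container can be modelled as an in-process registry; the class and field names below are illustrative, not taken from the patent:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class ComputeResources:
    """Computing resources configured in the component container (illustrative defaults)."""
    cpu_cores: int = 2
    memory_mb: int = 4096
    disk_gb: int = 50
    network_mbps: int = 100

@dataclass
class ComponentPool:
    """A component container becomes a component pool once computing resources are configured."""
    resources: ComputeResources
    components: Dict[str, Callable] = field(default_factory=dict)

    def register(self, service_name: str, service_code: Callable) -> None:
        # Store the received service code in the pool as a service component (see step S12).
        self.components[service_name] = service_code

pool = ComponentPool(resources=ComputeResources(cpu_cores=8, memory_mb=16384))
```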
S12: receiving service codes and service parameters sent by a development end, and storing the service codes into the component pool to serve as service components in the component pool, wherein the service codes are computer codes used for running the service tasks;
s13: and constructing a scheduling system, taking the storage position of the service component in the component pool as an environment variable of the scheduling system, enabling the scheduling system to trigger the service component to run through the environment variable, and inputting the service parameter into the scheduling system to serve as a trigger strategy for triggering the service component by the scheduling system.
In this step, ZooKeeper is used as the scheduling system. ZooKeeper is an open-source distributed configuration service, synchronization service, and naming registry for large-scale distributed computing. The architecture of ZooKeeper achieves high availability through redundant services: if the first host does not respond, the user side can query another ZooKeeper host. ZooKeeper nodes store their data in a hierarchical namespace, much like a file system or a prefix tree. Clients can read from and write to the nodes and in this way share a configuration service. Updates are totally ordered.
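As an illustration of how a component's storage location could be recorded for the scheduling system, the sketch below uses the kazoo ZooKeeper client; the znode paths, the component location and the local ensemble address are assumptions, not values given by the patent.

```python
from kazoo.client import KazooClient

# Connect to an assumed local ZooKeeper ensemble acting as the scheduling system's registry.
zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

# Record the storage location of a service component so the scheduler can resolve and
# trigger it; this plays the role the text assigns to an "environment variable".
component_path = "/scheduler/components/clean"
zk.ensure_path("/scheduler/components")
if zk.exists(component_path):
    zk.set(component_path, b"/pool/clean/main.py")
else:
    zk.create(component_path, b"/pool/clean/main.py")

location, _stat = zk.get(component_path)   # the scheduler reads the component location back
print(location.decode())

zk.stop()
```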
Further, the service parameters comprise a load balancing policy and a retry policy. The load balancing policy is entered into the scheduling system, so that the scheduling system controls the amount of computing resources used by the service component according to the load balancing policy; the retry policy is entered into the scheduling system, so that when the service component runs into an error or fails, the scheduling system can trigger the service component to run again according to the retry policy. The load balancing policy records an upper limit on the computing resources provided for running the service component, and the retry policy records an upper limit on the number of times the service component may be re-run.
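A sketch of how the two policies might be entered into the scheduling system as a trigger policy; the dataclass and method names are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class LoadBalancingPolicy:
    max_cpu_cores: int     # upper limit of computing resources granted to the component
    max_memory_mb: int

@dataclass
class RetryPolicy:
    max_retries: int       # upper limit on how many times the component may be re-run

class SchedulerConfig:
    """Holds the trigger policy the scheduling system applies to each service component."""
    def __init__(self) -> None:
        self.policies: Dict[str, Tuple[LoadBalancingPolicy, RetryPolicy]] = {}

    def enter(self, service_name: str,
              balancing: LoadBalancingPolicy, retry: RetryPolicy) -> None:
        self.policies[service_name] = (balancing, retry)

config = SchedulerConfig()
config.enter("clean",
             LoadBalancingPolicy(max_cpu_cores=2, max_memory_mb=2048),
             RetryPolicy(max_retries=3))
```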
S202: receiving a newly added request sent by an initiating terminal, constructing a service component in the component pool according to service task information in the newly added request, and inputting configuration parameters in the newly added request task into the scheduling system, so that the scheduling system controls the service component corresponding to the newly added request according to the configuration parameters.
In order to further optimize the component pool and enable it to continuously provide more service components, and thereby support more diversified service tasks, this step constructs a service component in the component pool according to the service task information in the newly added request and enters the configuration parameters of the newly added request into the scheduling system, so that the scheduling system controls the service component corresponding to the newly added request according to those configuration parameters. In this way, new service components are continuously added to the component pool and its functions are continuously improved.
In a preferred embodiment, the constructing a service component in the component pool according to the service task information in the new request, and entering the configuration parameter in the new request task into the scheduling system include:
s21: receiving a new request sent by an initiating terminal, extracting a service code in the new request, storing the service code into the component pool, and converting the service code into a service component of the component pool, wherein the service code is a computer code for operating the service task;
s22: taking the storage position of the service component in the component pool as an environment variable of the scheduling system, so that the scheduling system can trigger the service component to run through the environment variable;
s23: and extracting configuration parameters in the newly added request, and inputting the configuration parameters into the scheduling system, so that the scheduling system can control the operation of the service component according to the configuration parameters.
In this step, the configuration parameters include a load balancing policy and a retry policy. The load balancing policy is entered into the scheduling system, so that the scheduling system controls the amount of computing resources used by the service component according to the load balancing policy; the retry policy is entered into the scheduling system, so that when the service component runs into an error or fails, the scheduling system can trigger the service component to run again according to the retry policy. The load balancing policy records an upper limit on the computing resources provided for running the service component, and the retry policy records an upper limit on the number of times the service component may be re-run.
S203: receiving a task request sent by a user side, acquiring a service component corresponding to the task request from a preset component pool, and setting the service component as a target component, wherein at least one service component is stored in the component pool.
In this step, the task request has at least one service name, where the service name is a general description of a service task of the service component, and the service component corresponding to the service name in the task request is obtained from the component pool and is set as the target component.
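The component-acquisition flow that the preferred embodiment below spells out in steps S31 to S34 can be pictured with the following server-side sketch; the helper names, the event format and the sample pool are assumptions made for illustration only.

```python
# Hypothetical helpers mirroring steps S31-S34.
def send_component_visualization(pool):
    """S31: expose the service names stored in the component pool to the user side."""
    return sorted(pool.keys())

def build_candidate_info(operation_events):
    """S32: collect the service names the user clicked or dragged on the component page."""
    return [event["service_name"] for event in operation_events
            if event["type"] in ("click", "drag")]

def fetch_target_components(pool, task_request):
    """S34: resolve each service name recorded in the task request to a service component."""
    return {name: pool[name] for name in task_request["service_names"]}

pool = {"clean": object(), "enrich": object()}
events = [{"type": "click", "service_name": "clean"},
          {"type": "drag",  "service_name": "enrich"}]
candidates = build_candidate_info(events)
task_request = {"service_names": candidates}          # S33: user side confirms and submits
print(list(fetch_target_components(pool, task_request)))
```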
In a preferred embodiment, the obtaining, from a preset component pool, a service component corresponding to a task request by a task request sent by the receiving user side includes:
s31: and sending the visual information of the component with at least one service name to the user terminal.
In this step, the component visualization information is a component page having at least one service name characterizing a service component, the service name being a general description of a service task corresponding to the service component.
S32: identifying an operation event of the user side on the visual component information, and generating information to be selected at least with the service name according to the operation event;
in this step, the operation event refers to a js event performed by the user side on the component visualization information, for example: click operation, drag operation, and the like. The js event refers to a JavaScript event, and is an event which is used for selecting a service name set in a component page through operations such as clicking, dragging and the like to trigger and select a JavaScript function of a service component corresponding to the service name. And identifying and summarizing the service name selected by the user side according to the operation event to form the information to be selected.
S33: and receiving the task request generated by the user side according to the information to be selected, wherein the task request is recorded with a service name in the information to be selected.
In this step, the user side converts the information to be selected, which is previously formed by selecting the service name, into the task request by clicking the buttons of "confirm", "submit", and the like, so as to ensure that the task request sent by the user side is confirmed by the user side, thereby ensuring the reliability of the content of the task request.
S34: and acquiring the service components corresponding to the service names in the information to be selected from the component pool.
S204: and receiving the arrangement information sent by the user side, wherein the arrangement information is a directed graph with at least one unexecuted node used for representing the target components, and the directed graph is used for reflecting the dependency relationship between the target components.
In order to arrange the target components according to the intention of the user side, so that the target components can subsequently be run as the user intends and finally produce the operation result the user requires, this step receives the arrangement information sent by the user side, namely a directed graph having at least one unexecuted node, where the directed graph is used to reflect the dependency relationship between the target components.
In this embodiment, the directed graph is a directed acyclic graph and is used to represent a dependency relationship between target components that need to be called when a user needs to complete an overall task of the user, where the directed acyclic graph has a start node and an end node and is used to represent a start point and an end point when the overall task is executed.
S205: and sequentially executing a plurality of target assemblies according to the directed graph to obtain a task result, and sending the task result to the user side.
In order to complete the whole task required by the user side and obtain the corresponding operation result, this step sequentially executes the plurality of target components according to the dependency relationship among the target components in the directed graph to obtain an operation result conforming to the intention of the user, thereby realizing the selection of the target components and the personalized customization of the dependency relationship among them, and meeting the calling and dependency requirements of different users on service components.
In a preferred embodiment, said sequentially executing a plurality of said target components according to said directed graph to obtain a task result includes:
s51: the method comprises the steps of obtaining at least one unexecuted node located at the head of the directed graph, operating a target component corresponding to the unexecuted node to obtain an operation result, converting the target component corresponding to the unexecuted node into a legacy component according to the operation result, and converting the unexecuted node corresponding to the legacy component into an executed node.
In this step, the directed graph is a directed acyclic graph and is used to characterize a logic sequence of a user calling a target component and a dependency relationship between the target components.
Specifically, the obtaining at least one unexecuted node located at the head of the directed graph, running a target component corresponding to the unexecuted node to obtain a running result, converting the target component corresponding to the unexecuted node into a legacy component according to the running result, and converting the unexecuted node corresponding to the legacy component into an executed node includes:
s511: setting at least one unexecuted node at the head in the directed graph as a timing node, setting at least one target component corresponding to the timing node as a timing component, and triggering the timing component to run;
in the step, the initial position of the directed graph is obtained by identifying the initial node in the directed graph; and obtaining at least one unexecuted node which is positioned at the head in the directed graph and is set as a timing node by obtaining the unexecuted node which depends on the starting node.
S512: judging whether the timing component generates an operation result within a preset timing time threshold value or not; if the operation result of successful operation is generated, converting the timing component into a legacy component according to the operation result, and converting a timing node corresponding to the legacy component into an executed node; if the operation result of the operation failure is generated, a task failure notice is sent to the scheduling system; and if the operation result is not generated, the service component is re-triggered to operate through the scheduling system according to the retry strategy.
In this step, a timing time threshold is set for the timing component to determine whether the timing component can complete the service task within a specified time, so as to ensure timely identification of the overtime timing component.
Further, the judging of whether the timing component generates an operation result within the preset timing time threshold (converting the timing component into a legacy component and converting the corresponding timing node into an executed node if an operation result of successful operation is generated; sending a task failure notice to the scheduling system if an operation result of operation failure is generated; and re-triggering the service component through the scheduling system according to the retry policy if no operation result is generated) comprises the following steps:
s5121: judging whether the timing component generates an operation result within a preset timing time threshold value or not;
s5122: if the running result is generated, extracting a task label in the running result and identifying the content of the task label; if the content is successful, converting the timing component into a legacy component according to the operation result, and converting a timing node corresponding to the legacy component into an executed node; if the content is a task failure, a task failure notice is sent to the scheduling system; wherein the task tag is a general description of the content of the operation result.
In the step, the property of the operation result is determined by extracting the task label in the operation result and identifying the content of the task label; if the operation result is that the task is successful, executing a next target component according to the directed graph; and if the operation result is that the task fails, the task failure notification needs to be sent to the scheduling system, so that the user side can know the failed target component through the scheduling system.
S5123: if the operation result is not generated, calling a retry strategy of the scheduling system to re-trigger the timing component to operate, and judging whether the timing component generates the operation result within the preset time again; if the running result is generated again, extracting the task label in the running result and identifying the content of the task label; if the content is successful, converting the timing component into a legacy component according to the operation result, and converting a timing node corresponding to the legacy component into an executed node; if the content is a task failure, a task failure notice is sent to the scheduling system; if the operation result is not generated again, sending a component exception notification with the component name corresponding to the timing component to the scheduling system.
In this step, re-triggering the timing component gives it a further opportunity to generate a running result. This avoids the situation in which the timing component cannot generate a running result in time because of temporary information blocking, is directly judged to be abnormal, and the service task corresponding to the task request is therefore left largely incomplete.
At the same time, the anomaly of the timing component is re-examined by means of the timing time threshold so as to pin down its cause: if re-triggering the timing component produces an operation result, it is judged that the earlier absence of a result was only due to temporary blocking of the timing component; if the timing component is re-triggered and still produces no running result, the timing component can be judged to be abnormal, and a component exception notice is then sent to the scheduling system.
Whether the service task inside the timing component is abnormal is determined by identifying the task tag in the running result: if the task tag content in the running result is a task failure, the service task of the timing component is judged to be abnormal, and a task failure notice is sent to the scheduling system.
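The timing-threshold, task-tag and retry logic of steps S5121 to S5123 can be sketched as follows in Python; the result format (a dict with a "task_tag" key), the return labels and the use of a thread pool are assumptions made for illustration, not the patent's implementation.

```python
import concurrent.futures

def run_timing_component(component, timing_threshold_s, max_retries=1):
    """Run a timing component, wait up to the timing time threshold for its result,
    inspect the task tag, and re-trigger it per the retry policy if no result appears."""
    for _attempt in range(max_retries + 1):
        executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
        future = executor.submit(component)
        try:
            result = future.result(timeout=timing_threshold_s)
        except concurrent.futures.TimeoutError:
            executor.shutdown(wait=False)        # no result in time: leave it and retry
            continue
        executor.shutdown(wait=False)
        if result.get("task_tag") == "task success":
            return ("executed_node", result)         # becomes a legacy component / executed node
        return ("task_failure_notice", result)       # notify the scheduling system
    return ("component_exception_notice", None)      # still no result after the retry

# Example: a component that finishes quickly and reports success.
print(run_timing_component(lambda: {"task_tag": "task success"}, timing_threshold_s=5))
```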
S52: and setting the directed graph converted from the unexecuted node of the legacy component into the executed node as an updated graph, and sending the updated graph to the user side.
In order to ensure that the user side can monitor in real time which node of the directed graph its whole task has been executed up to, this step sets the directed graph in which the unexecuted node of the legacy component has been converted into an executed node as an updated graph, and sends the updated graph to the user side, so that the user side can track the progress of the whole task in real time.
S53: setting an operation result generated by the legacy component as a legacy result, identifying at least one unexecuted node which depends on the executed node in the directed graph, running a target component corresponding to the unexecuted node which depends on the executed node to obtain an operation result which depends on the legacy result, converting the unexecuted node which depends on the executed node into an executed node, and converting the target component corresponding to the unexecuted node which depends on the executed node into a legacy component until at least one unexecuted node which is positioned at the tail position in the directed graph is converted into an executed node, and setting an operation result generated by the legacy component corresponding to the executed node at the tail position as a task result.
In this step, whether an unexecuted node in the directed graph has been converted into an executed node is monitored in a log tailing manner; if so, the operation result generated by the legacy component is set as a legacy result, at least one unexecuted node in the directed graph that depends on the executed node is identified, and the target component corresponding to that unexecuted node is run to obtain an operation result that depends on the legacy result. Here, log tailing is a computer process, based on message middleware, that monitors whether an unexecuted node in the directed graph has been converted into an executed node. In this embodiment, log tailing is implemented by monitoring whether an upstream unexecuted node has been converted into an executed node, obtaining the legacy information from the legacy component corresponding to that executed node, and forwarding the legacy information to the target component corresponding to the current unexecuted node. Therefore, the legacy result generated by the upstream legacy component does not need to be manually input into the current target component, which improves the overall operating efficiency of the target components.
It should be noted that canal is a Java-developed middleware for subscribing to and consuming incremental data based on analysis of database incremental logs. At present, canal mainly supports binlog analysis of MySQL, and a canal client is used to process the obtained data after analysis is completed. The MySQL binlog is a binary log file that records data updates and potential updates to MySQL (for example, a DELETE statement that matches no rows and therefore deletes nothing); it is also the log relied upon in MySQL master-slave replication.
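For illustration, the log-tailing hand-off of legacy results can be reduced to the following sketch, in which a plain in-process queue stands in for the message middleware (canal/binlog) and every node is assumed to have a single upstream dependency; none of these names come from the patent.

```python
import queue

node_events = queue.Queue()     # stand-in for the message middleware behind log tailing

def publish_executed(node, legacy_result):
    """Publish an event when an unexecuted node becomes an executed node."""
    node_events.put({"node": node, "legacy_result": legacy_result})

def tail_and_forward(dependents, targets, results):
    """Consume executed-node events and run the unexecuted nodes that depend on them,
    forwarding each legacy result downstream without any manual input."""
    while not node_events.empty():
        event = node_events.get()
        for downstream in dependents.get(event["node"], []):
            results[downstream] = targets[downstream](event["legacy_result"])
            publish_executed(downstream, results[downstream])
    return results

targets = {"enrich": lambda legacy: {"rows": legacy["rows"], "cols": 12},
           "report": lambda legacy: f"{legacy['rows']} rows, {legacy['cols']} cols"}
dependents = {"clean": ["enrich"], "enrich": ["report"]}
publish_executed("clean", {"rows": 100})       # the head component has just finished
print(tail_and_forward(dependents, targets, {}))
```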
Specifically, the setting an operation result generated by the legacy component as a legacy result, identifying at least one unexecuted node that depends on the executed node in the directed graph, running a target component corresponding to the unexecuted node that depends on the executed node to obtain an operation result that depends on the legacy result, and according to the operation result that depends on the legacy result, converting the target component corresponding to the unexecuted node that depends on the executed node into a legacy component, and converting the unexecuted node corresponding to the legacy component into an executed node until at least one unexecuted node located at the last position in the directed graph is converted into an executed node includes:
s531: determining whether an executed node corresponding to the legacy component is at a last bit of the directed graph;
s532: if so, setting an operation result generated by the legacy component as a task result;
s533: if not, setting the operation result generated by the legacy component as a legacy result, setting at least one unexecuted node which depends on the executed node in the directed graph as a timing node, and setting a target component which corresponds to the timing node which depends on the executed node as a timing component which depends on the legacy component.
In this step, if the executed node of the current legacy component is already associated with the end node of the directed graph, it indicates that the executed node of the legacy component is at the last position of the directed graph.
S534: sending the legacy result to the timing component depending on the legacy component, and judging whether the timing component receives the legacy results sent by all the dependent legacy components within a preset legacy time threshold;
s535: if yes, triggering a timing component depending on the legacy component to run;
s536: if not, judging whether the legacy result received by the timing component can cover a preset legacy label or not; triggering a timing component dependent on the legacy component to run if the legacy tag can be overridden; suspending the timing component dependent on the legacy component from running if the legacy tag cannot be overwritten.
In this step, an optional item (Optional) is set in each target component to serve as the legacy label, where the legacy label describes the running results sent by the target components on which this target component depends;
when the target component is converted into a timing component, the timing component receives the legacy results sent by the legacy components it depends on; whether the timing component has received the legacy results sent by all of the legacy components it depends on within the legacy time threshold is judged; if yes, the timing component is directly triggered to run; if not, the optional item in the timing component is extracted, the legacy results received by the timing component are compared with the optional item one by one, and it is judged whether the received legacy results can cover the optional item;
if the task request is received, the timing component is directly triggered to run so as to avoid the problem that the whole task corresponding to the task request cannot be carried out due to the fact that a certain unimportant target component is abnormal or fails in the task, and the robustness of multitask processing is guaranteed.
If they cannot, the timing component is temporarily not triggered to run, so as to avoid the timing component being forced to run while lacking necessary legacy results and finally producing a wrong task result, thereby ensuring the reliability of multitask processing.
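Before turning to step S537, one possible reading of the legacy-label check just described is sketched below; treating the optional item as the minimal set of legacy results that must be covered is an interpretation, and every name in the snippet is an assumption.

```python
def decide_trigger(received, all_dependencies, legacy_label):
    """Decide whether a timing component may run once its legacy time threshold expires.

    received:         names of legacy components whose legacy results actually arrived.
    all_dependencies: names of every legacy component this timing component depends on.
    legacy_label:     the optional item, read here as the minimal subset of legacy
                      results that must be covered before the component may run.
    """
    if all_dependencies <= received:
        return "run"        # every legacy result arrived within the legacy time threshold
    if legacy_label <= received:
        return "run"        # the received legacy results cover the legacy label
    return "suspend"        # necessary legacy results are missing: do not force the run

# Example: "audit" failed upstream, but only "clean" and "enrich" are covered by the label.
print(decide_trigger({"clean", "enrich"}, {"clean", "enrich", "audit"}, {"clean", "enrich"}))
```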
S537: after the timing component depending on the legacy component is triggered to run, judging whether the timing component depending on the legacy component generates a running result within a preset timing time threshold value; if an operation result of successful operation is generated, converting the timing assembly depending on the legacy assembly into a legacy assembly according to the operation result, and converting a timing node corresponding to the legacy assembly into an executed node; if the operation result of the operation failure is generated, a task failure notice is sent to the scheduling system; and if the operation result is not generated, the scheduling system retries to trigger the timing component depending on the legacy component to operate according to a retry strategy.
Preferably, after the sequentially executing the target components according to the directed graph to obtain task results, the method further includes:
and uploading the task result to a block chain.
It should be noted that the corresponding digest information is obtained from the task result; specifically, the digest information is obtained by hashing the task result, for example with the SHA-256 algorithm. Uploading the digest information to the blockchain ensures its security and its fairness and transparency for the user. The user equipment can download the digest information from the blockchain to verify whether the task result has been tampered with. The blockchain referred to in this example is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks associated with one another by cryptographic methods, where each data block contains the information of a batch of network transactions and is used to verify the validity (anti-counterfeiting) of that information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
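A minimal sketch of the digest step, assuming the task result can be serialized as JSON before hashing (the helper name and the serialization choice are illustrative):

```python
import hashlib
import json

def digest_for_blockchain(task_result) -> str:
    """Hash the task result with SHA-256; the hexadecimal digest is what would be
    uploaded to the blockchain so the user equipment can later verify the result."""
    payload = json.dumps(task_result, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

print(digest_for_blockchain({"report": "100 rows, 12 cols"}))
```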
Example three:
referring to fig. 4, a task scheduling apparatus 1 of the present embodiment includes:
a component identification module 13, configured to receive a task request sent by a user, obtain a service component corresponding to the task request from a preset component pool, and set the service component as a target component, where at least one service component is stored in the component pool;
the arrangement and entry module 14 is configured to receive arrangement information sent by the user side, where the arrangement information is a directed graph having at least one unexecuted node for representing the target components, and the directed graph is used for reflecting a dependency relationship between the target components;
and the task execution module 15 is configured to execute the multiple target components in sequence according to the directed graph to obtain a task result, and send the task result to the user side.
Optionally, the task scheduling apparatus 1 further includes:
the creating module 11 is configured to construct a component pool, construct a service component in the component pool, and construct a scheduling system for triggering the service component to run, where the service component is used to run a specified service task, and the component pool is used to store the service component.
Optionally, the creating module 11 further includes:
a component pool unit 111, configured to construct a component container, and configure a computing resource in the component container to convert the component container into the component pool;
a component creating unit 112, configured to receive service codes and service parameters sent by a development end, and store the service codes into the component pool as service components in the component pool, where the service codes are computer codes used for running the service tasks;
the scheduling creating unit 113 is configured to construct a scheduling system, use the storage location of the service component in the component pool as an environment variable of the scheduling system, enable the scheduling system to trigger the service component to run through the environment variable, and enter the service parameter into the scheduling system to serve as a trigger policy for the scheduling system to trigger the service component.
Optionally, the task scheduling apparatus 1 further includes:
the component adding unit 12 is configured to receive a new request sent by a development end, construct a service component in the component pool according to service task information in the new request, and enter configuration parameters in the new request task into the scheduling system, so that the scheduling system controls the service component corresponding to the new request according to the configuration parameters.
Optionally, the component adding unit 12 further includes:
a code conversion unit 121, configured to receive a new request sent by a development end, extract a service code in the new request, store the service code in the component pool, and convert the service code into a service component in the component pool, where the service code is computer code for running the service task;
a variable configuration unit 122, configured to use the storage location of the service component in the component pool as an environment variable of the scheduling system, so that the scheduling system can trigger the service component to run through the environment variable;
a parameter configuration unit 123, configured to extract configuration parameters in the new addition request, and enter the configuration parameters into the scheduling system, so that the scheduling system can control the operation of the service component according to the configuration parameters.
Optionally, the component recognition module 13 further includes:
a visualization unit 131, configured to send component visualization information with at least one service name to a user side;
an operation identification unit 132, configured to identify an operation event of the user side on the component visualization information, and generate to-be-selected information with at least the service name according to the operation event;
the task input unit 133 is configured to receive the task request generated by the user side according to the information to be selected, where a service name in the information to be selected is recorded in the task request;
a component obtaining unit 134, configured to obtain a service component corresponding to the service name in the information to be selected from the component pool.
Optionally, the task execution module 15 further includes:
an execution identifying unit 151, configured to obtain at least one unexecuted node located at a head of the directed graph, run a target component corresponding to the unexecuted node to obtain a running result, convert the target component corresponding to the unexecuted node into a legacy component according to the running result, and convert the unexecuted node corresponding to the legacy component into an executed node;
an update output unit 152, configured to set the directed graph in which the unexecuted node of the legacy component has been converted into an executed node as an updated graph, and send the updated graph to the user side;
an arranging and executing unit 153, configured to set an operation result generated by the legacy component as a legacy result, identify at least one unexecuted node in the directed graph that depends on the executed node, run a target component corresponding to the unexecuted node that depends on the executed node to obtain an operation result that depends on the legacy result, convert the unexecuted node that depends on the executed node into an executed node, and convert the target component corresponding to the unexecuted node that depends on the executed node into a legacy component until at least one unexecuted node located at the last position in the directed graph is converted into an executed node, and set an operation result generated by the legacy component corresponding to the executed node located at the last position as a task result.
The technical scheme is applied to the field of process optimization in computer operation and maintenance. The technical effect of optimizing a business process is achieved by receiving a task request sent by a user side, acquiring a service component corresponding to the task request from a preset component pool and setting it as a target component, receiving the arrangement information (a directed graph) sent by the user side, and sequentially executing the target components according to the directed graph to obtain a task result.
Example four:
In order to achieve the above object, the present invention further provides a computer device 5. The components of the task scheduling device according to the third embodiment may be distributed across different computer devices, and the computer device 5 may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a rack server, a blade server, a tower server, or a cabinet server (either an independent server or a server cluster formed by a plurality of application servers) that executes programs, and the like. The computer device of this embodiment at least includes, but is not limited to, a memory 51 and a processor 52, which may be communicatively coupled to each other via a system bus, as shown in FIG. 5. It should be noted that FIG. 5 only shows the computer device with certain components, but it should be understood that not all of the shown components are required to be implemented, and more or fewer components may be implemented instead.
In this embodiment, the memory 51 (i.e., a readable storage medium) includes a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the memory 51 may be an internal storage unit of the computer device, such as a hard disk or a memory of the computer device. In other embodiments, the memory 51 may be an external storage device of a computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like, provided on the computer device. Of course, the memory 51 may also include both internal and external storage devices of the computer device. In this embodiment, the memory 51 is generally used for storing an operating system and various application software installed in the computer device, for example, the program code of the task scheduling device in the third embodiment. Further, the memory 51 may also be used to temporarily store various types of data that have been output or are to be output.
Processor 52 may be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip in some embodiments. The processor 52 is typically used to control the overall operation of the computer device. In this embodiment, the processor 52 is configured to run the program code stored in the memory 51 or to process data, for example, to run the task scheduling apparatus, so as to implement the task scheduling methods of the first embodiment and the second embodiment.
Example five:
To achieve the above objects, the present invention also provides a computer-readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Programmable Read-Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, an App application store, or the like, on which a computer program is stored which, when executed by the processor 52, implements the corresponding functions. The computer-readable storage medium of this embodiment is used for storing the computer program implementing the task scheduling method, which, when executed by the processor 52, implements the task scheduling methods of the first embodiment and the second embodiment.
The above-mentioned serial numbers of the embodiments of the present invention are only for description, and do not represent the advantages and disadvantages of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A task scheduling method is characterized by comprising the following steps:
receiving a task request sent by a user side, acquiring a service component corresponding to the task request from a preset component pool, and setting the service component as a target component, wherein at least one service component is stored in the component pool;
receiving the arrangement information sent by the user side, wherein the arrangement information is a directed graph with at least one unexecuted node used for representing the target components, and the directed graph is used for reflecting the dependency relationship between the target components;
and sequentially executing a plurality of target components according to the directed graph to obtain a task result, and sending the task result to the user side.
2. The task scheduling method according to claim 1, wherein before receiving the task request sent by the user side, the method further comprises:
the method comprises the steps of constructing a component pool, constructing service components in the component pool, and constructing a scheduling system for triggering the service components to run, wherein the service components are used for running specified service tasks, and the component pool is used for storing the service components.
3. The task scheduling method according to claim 2, wherein constructing the component pool, constructing the service components in the component pool, and constructing the scheduling system for triggering the service components to run comprises:
building a component container, and configuring computing resources in the component container to convert the component container into the component pool;
receiving service codes and service parameters sent by a development end, and storing the service codes into the component pool to serve as service components in the component pool, wherein the service codes are computer codes used for running the service tasks;
and constructing a scheduling system, taking the storage position of the service component in the component pool as an environment variable of the scheduling system, enabling the scheduling system to trigger the service component to run through the environment variable, and inputting the service parameter into the scheduling system to serve as a trigger strategy for triggering the service component by the scheduling system.
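Purely as an illustration of this claimed step (not a definition of it), the pool construction and trigger strategy could look like the sketch below; the resource fields, the cron-style parameter and the function register_service are assumptions, and the environment-variable step mirrors the earlier sketch.

import os
from dataclasses import dataclass, field

@dataclass
class ComponentContainer:
    # A component container whose computing resources have been configured,
    # thereby turning it into the component pool.
    cpu_cores: float = 1.0
    memory_mb: int = 512
    storage_root: str = "/opt/component-pool"
    components: dict = field(default_factory=dict)

def register_service(pool: ComponentContainer, trigger_strategies: dict,
                     service_name: str, service_code: str, service_params: dict) -> None:
    # Store the service code in the pool as a service component, expose its
    # storage location to the scheduling system through an environment
    # variable, and keep the service parameters (e.g. a cron expression)
    # as the strategy used to trigger the component.
    storage_path = os.path.join(pool.storage_root, f"{service_name}.py")
    pool.components[service_name] = {"path": storage_path, "code": service_code}
    os.environ[f"COMPONENT_{service_name.upper()}_PATH"] = storage_path
    trigger_strategies[service_name] = service_params   # e.g. {"cron": "0 2 * * *"}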
4. The task scheduling method according to claim 2, wherein after constructing the component pool, constructing the service components in the component pool, and constructing the scheduling system for triggering the service components to run, the method further comprises:
receiving a newly added request sent by an initiating terminal, constructing a service component in the component pool according to the service task information in the newly added request, and entering the configuration parameters in the newly added request into the scheduling system, so that the scheduling system controls the service component corresponding to the newly added request according to the configuration parameters.
5. The task scheduling method according to claim 4, wherein the constructing a service component in the component pool according to the service task information in the newly added request and entering the configuration parameters in the newly added request into the scheduling system comprises:
receiving a newly added request sent by the initiating terminal, extracting the service code from the newly added request, storing the service code into the component pool, and converting the service code into a service component of the component pool, wherein the service code is the computer code for running the service task;
taking the storage position of the service component in the component pool as an environment variable of the scheduling system, so that the scheduling system can trigger the service component to run through the environment variable;
and extracting the configuration parameters from the newly added request, and entering the configuration parameters into the scheduling system, so that the scheduling system can control the operation of the service component according to the configuration parameters.
6. The task scheduling method according to claim 1, wherein the receiving a task request sent by a user side and acquiring a service component corresponding to the task request from a preset component pool comprises:
sending component visualization information with at least one service name to the user side;
identifying an operation event of the user side on the component visualization information, and generating information to be selected at least with the service name according to the operation event;
receiving the task request generated by the user side according to the information to be selected, wherein the task request is recorded with a service name in the information to be selected;
and acquiring the service components corresponding to the service names in the information to be selected from the component pool.
7. The task scheduling method according to claim 1, wherein the sequentially executing a plurality of target components according to the directed graph to obtain a task result comprises:
obtaining at least one unexecuted node at the head of the directed graph, operating a target component corresponding to the unexecuted node to obtain an operation result, converting the target component corresponding to the unexecuted node into a legacy component according to the operation result, and converting the unexecuted node corresponding to the legacy component into an executed node;
setting, as an updated graph, the directed graph in which the unexecuted node corresponding to the legacy component has been converted into an executed node, and sending the updated graph to the user side;
setting an operation result generated by the legacy component as a legacy result, identifying at least one unexecuted node which depends on the executed node in the directed graph, running a target component corresponding to the unexecuted node which depends on the executed node to obtain an operation result which depends on the legacy result, converting the unexecuted node which depends on the executed node into an executed node, and converting the target component corresponding to the unexecuted node which depends on the executed node into a legacy component until at least one unexecuted node which is positioned at the tail position in the directed graph is converted into an executed node, and setting an operation result generated by the legacy component corresponding to the executed node at the tail position as a task result;
after the sequentially executing the plurality of target components according to the directed graph to obtain task results, the method further comprises:
and uploading the task result to a block chain.
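The final step above, uploading the task result to a block chain, is illustrated below only as a hedged sketch: chain_client and its submit method stand in for whatever chain SDK is actually used, and hashing the serialized result before upload is an assumption rather than something the claim specifies.

import hashlib
import json

def upload_task_result(task_result: object, chain_client) -> str:
    # Serialize the task result deterministically and record its digest on
    # the chain so the result can later be verified against the on-chain copy.
    payload = json.dumps(task_result, sort_keys=True, default=str)
    digest = hashlib.sha256(payload.encode("utf-8")).hexdigest()
    return chain_client.submit({"task_result_digest": digest})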
8. A task scheduling apparatus, comprising:
a component identification module, which is used for receiving a task request sent by a user side, acquiring a service component corresponding to the task request from a preset component pool, and setting the service component as a target component, wherein at least one service component is stored in the component pool;
the arrangement recording module is used for receiving arrangement information sent by the user side, wherein the arrangement information is a directed graph with at least one unexecuted node used for representing the target components, and the directed graph is used for reflecting the dependency relationship between the target components;
and the task execution module is used for sequentially executing the target components according to the directed graph to obtain a task result, and sending the task result to the user side.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the task scheduling method according to any one of claims 1 to 7 are performed by the processor of the computer device when the computer program is executed.
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program stored in the computer-readable storage medium, when executed by a processor, implements the steps of the task scheduling method according to any one of claims 1 to 7.