CN116302398A - Cloud native-based workflow orchestration method, apparatus, device, and medium - Google Patents


Info

Publication number
CN116302398A
CN116302398A (application CN202310119191.2A)
Authority
CN
China
Prior art keywords
work · workflow · remote host · task · different
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310119191.2A
Other languages
Chinese (zh)
Inventor
张家华
吴典秋
姚夏冰
王刚峰
韩伯文
谢育政
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Qianhai Huanrong Lianyi Information Technology Service Co Ltd
Original Assignee
Shenzhen Qianhai Huanrong Lianyi Information Technology Service Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Qianhai Huanrong Lianyi Information Technology Service Co Ltd
Priority to CN202310119191.2A
Publication of CN116302398A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 - Task transfer initiation or dispatching
    • G06F 9/4843 - Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 - Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/445 - Program loading or initiating
    • G06F 9/44521 - Dynamic linking or loading; Link editing at or after load time, e.g. Java class loading
    • G06F 9/44526 - Plug-ins; Add-ons
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses a cloud native-based work orchestration method, apparatus, device, and medium, relating to the technical field of system development. The method comprises the following steps: embedding a connection plug-in used for data interaction in an operation and maintenance tool into a workflow engine; after the workflow service receives a work task orchestration request, driving the workflow engine to distribute different work tasks to different work nodes according to the execution logic of different remote hosts in the workflow service; and the work node connecting to a remote host by using the connection plug-in, so as to control the remote host to complete the corresponding operation according to the executable file of the work task. Through this technical scheme, the cloud native architecture itself is preserved, the remote host can be directly controlled to execute user tasks, and strong work orchestration capability and file information transfer between workflows are supported.

Description

Cloud native-based workflow orchestration method, apparatus, device, and medium
Technical Field
The application relates to the technical field of system development, and in particular to a cloud native-based work orchestration method, apparatus, device, and medium.
Background
The main problem a workflow solves is this: to achieve a certain business objective, a computer automatically passes documents, information, or tasks between multiple participants according to predetermined rules. Existing workflows focus mainly on controlling the overall flow and are indifferent to the specific work content; in general, all work can be executed entirely on a single host node. However, when each job in a workflow needs to be executed on a different host, the original architecture design alone cannot support it.
For example, compiling and packaging a whole project on one or more hosts requires multiple machines for deployment, and service deployment across those machines has dependencies. In such a complex scenario, if only a workflow is used to schedule the flow, the architecture cannot assign one host per operation; and if only an Ansible playbook is used to command multiple machines, complex flow operations, such as running steps in parallel or depending on multiple preceding steps, are difficult or impossible to implement, and the expertise demanded of staff is high.
It can be seen that the prior art has limitations in work orchestration capability, data transfer between jobs, and remote control of operation hosts to execute tasks.
Disclosure of Invention
In view of this, the present application provides a cloud native-based work orchestration method, apparatus, device, and medium, aiming to solve the technical problem that the existing architecture design cannot handle the scenario in which each job in a workflow must be executed on a different host.
According to one aspect of the present application, there is provided a cloud native-based work orchestration method, the method comprising:
embedding a connection plug-in used for data interaction in an operation and maintenance tool into a workflow engine;
after a workflow service receives a work task orchestration request, driving the workflow engine to distribute different work tasks to different work nodes according to the execution logic of different remote hosts in the workflow service; and
the work node connecting to a remote host by using the connection plug-in, so as to control the remote host to complete the corresponding operation according to the executable file of the work task.
According to yet another aspect of the present application, there is provided a cloud native-based work orchestration apparatus, comprising:
an embedding module, configured to embed a connection plug-in used for data interaction in an operation and maintenance tool into a workflow engine;
a driving module, configured to, after the workflow service receives a work task orchestration request, drive the workflow engine to distribute different work tasks to different work nodes according to the execution logic of different remote hosts in the workflow service; and
a sending module, configured to connect the work node to a remote host by using the connection plug-in, so as to control the remote host to complete the corresponding operation according to the executable file of the work task.
According to yet another aspect of the present application, there is provided a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the above cloud native-based work orchestration method.
According to still another aspect of the present application, there is provided a computer device including a storage medium, a processor, and a computer program stored on the storage medium and executable on the processor, the processor implementing the above cloud native-based work orchestration method when executing the program.
By means of this technical scheme, compared with the prior art, the cloud native-based work orchestration method, apparatus, device, and medium provided by the application embed a connection plug-in used for data interaction in an operation and maintenance tool into a workflow engine. When the workflow service receives a work task orchestration request, the workflow engine is driven to distribute different work tasks to different work nodes according to the execution logic of different remote hosts in the workflow service, and each work node connects to its remote host using the connection plug-in, so as to control the remote host to complete the corresponding operation according to the executable file of the work task. A cloud native architecture capable of connecting to remote hosts is thus realized on the basis of the operation and maintenance tool's connection plug-in. This preserves the cloud native architecture itself while allowing remote hosts to be directly controlled to execute user tasks, and at the same time supports strong work orchestration capability and file information transfer between workflows.
The foregoing is only an overview of the technical solutions of the present application. To make the technical means of the application clearer, so that they may be implemented according to the content of the specification, and to make the above and other objects, features, and advantages of the application more readily apparent, a detailed description follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and constitute a part of it, illustrate embodiments of the application and, together with the description, serve to explain the application without unduly limiting it. In the drawings:
Fig. 1 shows a flow diagram of a cloud native-based work orchestration method according to an embodiment of the present application;
Fig. 2 shows a flow diagram of another cloud native-based work orchestration method according to an embodiment of the present application;
Fig. 3 shows a flow diagram of work orchestration provided by an embodiment of the present application;
Fig. 4 shows a schematic structural diagram of a cloud native-based work orchestration apparatus according to an embodiment of the present application;
Fig. 5 shows a schematic structural diagram of another cloud native-based work orchestration apparatus according to an embodiment of the present application.
Detailed Description
The present application will be described in detail below with reference to the accompanying drawings and in conjunction with embodiments. It should be noted that, where there is no conflict, the embodiments and the features in the embodiments may be combined with each other.
In the prior art, each step of a task in the workflow engine (argo-workflow) creates a pod in the k8s cluster to execute; the pod is specified by the user and is not fixed. If a user wants to operate a remote host, the user must first connect to it over ssh, then pass the run parameters required by the task to the remote host before executing the task. The drawbacks of this execution process include: 1. although each step uses a pod, the user, who does not actually care about the pod's content, must nevertheless specify the pod to be executed; 2. connecting to the remote host over ssh is not part of the business objective, yet it requires manual user operation and complicates the flow; and if only an operation and maintenance tool is used to solve the connection problem, complex dependency relations cannot be orchestrated. This embodiment provides a cloud native-based work orchestration method that realizes a cloud native architecture capable of connecting to remote hosts on the basis of the operation and maintenance tool's connection plug-in: the workflow engine is combined with the automated operation and maintenance tool, that is, the workflow engine's ability to handle complex dependency execution relations and to pass run parameters is retained while the automated operation and maintenance tool is embedded into the workflow, so that multi-host business objectives with complex relations are realized. As shown in Fig. 1, the method includes:
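The manual prior-art flow described above, connecting over ssh and passing run parameters before executing the task, can be sketched as command composition. This is a hypothetical illustration; the user name, host address, script path, and parameter are all invented:

```python
import shlex

def compose_manual_ssh_command(user, host, script, params):
    """Build the ssh invocation a user would have to run by hand in the
    prior-art flow: log in to the remote host, pass the run parameters
    as environment assignments, then execute the task script."""
    exports = " ".join(f"{k}={shlex.quote(str(v))}" for k, v in params.items())
    remote_cmd = f"{exports} {shlex.quote(script)}".strip()
    return ["ssh", f"{user}@{host}", remote_cmd]

# Example: a hypothetical build task on a hypothetical host.
cmd = compose_manual_ssh_command(
    "deploy", "10.0.0.5", "/opt/tasks/build.sh", {"BRANCH": "main"}
)
```

Every such invocation has to be written and run by the user, which is exactly the manual burden the embedded connection plug-in removes.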
step 101, embedding a connection plug-in for data interaction in the operation and maintenance tool into a workflow engine. In this embodiment, based on the function that the connection plug-in connection plugins in the automation and maintenance tool analog can be used for communicating with the monitored end, the connection plug-in the automation and maintenance tool analog architecture is extracted, and the connection plug-in is embedded into the workflow engine, so that the workflow engine can have a cloud native architecture automatically connected with the remote host with specified operation, and therefore, the workflow engine and the automation and maintenance are combined.
Step 102, after the workflow service receives the work task orchestration request, drive the workflow engine to distribute different work tasks to different work nodes according to the execution logic of different remote hosts in the workflow service.
In this embodiment, the workflow service can create a work task according to a user's work task orchestration request and compile it into a Work CR (first custom resource); orchestrate the Work CR into a corresponding workflow template and compile it into a WorkflowTemplate CR (second custom resource); and finally generate the corresponding workflow from the WorkflowTemplate CR and the run parameters of that template, compiling it into a Workflow CR (third custom resource). When the work task is created, an execution manifest is written according to the created work task and the different remote hosts, so as to realize remote control of those hosts. After these custom resources are in place, the workflow engine (workflow-controller) is triggered to distribute the different work tasks to different work nodes according to the execution logic of the different remote hosts in the workflow service.
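The compilation of a request into the three custom resources can be sketched as plain data construction. This is a minimal sketch; the apiVersion, field names, and schema are invented for illustration and are not the patent's actual CR definitions:

```python
def compile_request_to_crs(request):
    """Sketch: compile a work task orchestration request into a Work CR
    per task, one WorkflowTemplate CR for the orchestration, and one
    Workflow CR binding the template to its run parameters."""
    work_crs = [
        {"apiVersion": "bee.example.com/v1", "kind": "Work",
         "metadata": {"name": t["name"]},
         "spec": {"host": t["host"], "executable": t["executable"]}}
        for t in request["tasks"]
    ]
    template_cr = {
        "apiVersion": "bee.example.com/v1", "kind": "WorkflowTemplate",
        "metadata": {"name": request["name"] + "-template"},
        "spec": {"steps": [w["metadata"]["name"] for w in work_crs]},
    }
    workflow_cr = {
        "apiVersion": "bee.example.com/v1", "kind": "Workflow",
        "metadata": {"name": request["name"]},
        "spec": {"templateRef": template_cr["metadata"]["name"],
                 "parameters": request.get("parameters", {})},
    }
    return work_crs, template_cr, workflow_cr

works, template, workflow = compile_request_to_crs({
    "name": "release",
    "tasks": [
        {"name": "compile", "host": "build-01", "executable": "/opt/run/build.sh"},
        {"name": "deploy",  "host": "web-01",   "executable": "/opt/run/deploy.sh"},
    ],
})
```

In a real deployment these dicts would be persisted as custom resources in the k8s cluster (e.g. via the apiserver into etcd), which is what triggers the workflow-controller.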
Step 103, the work node connects to a remote host by using the connection plug-in, so as to control the remote host to complete the corresponding operation according to the executable file of the work task.
In this embodiment, the user does not need to write a connection script in advance. Based on the designed cloud native architecture that automatically connects to the remote host of a specified operation, the remote host information, for example the remote host identifier and the corresponding login credentials, can be obtained directly from the configuration management database (CMDB), and the connection plug-in is used to connect to and log in to the remote host, so that the remote host completes the corresponding operation according to the corresponding executable file. Depending on the actual application scenario, the remote host used to execute the work task may be a KVM, which is not specifically limited here.
Compared with the prior art, this cloud native-based work orchestration method embeds the connection plug-in used for data interaction in the operation and maintenance tool into the workflow engine; after the workflow service receives a work task orchestration request, the workflow engine is driven to distribute different work tasks to different work nodes according to the execution logic of the different remote hosts in the workflow service, and each work node connects to its remote host using the connection plug-in so as to control the remote host to complete the corresponding operation according to the executable file of the work task. A cloud native architecture capable of connecting to remote hosts is thus realized on the basis of the operation and maintenance tool's connection plug-in: the cloud native architecture itself is preserved, remote hosts are directly controlled to execute user tasks, and strong work orchestration capability and file information transfer between workflows are supported.
Further, as a refinement and extension of the foregoing embodiment, and to describe the process of this embodiment completely, another cloud native-based work orchestration method is provided, as shown in Fig. 2. The method includes:
step 201, embedding a connection plug-in for data interaction in the operation and maintenance tool into a workflow engine.
In a specific implementation, step 201 is consistent with step 101. In terms of concurrency, the workflow service bee-workflow, built on the cloud native workflow engine, can run multiple work nodes (beeagent-pods) in parallel by means of the k8s cluster, which an automated operation and maintenance tool alone cannot achieve. In terms of orchestrating work tasks, orchestration can be expressed with simple syntax even in complex usage scenarios thanks to the cloud native workflow engine, whereas an automated operation and maintenance tool would require a large amount of logic and judgment in its playbook.
The workflow engine serves as the distributor of work tasks: it monitors the work nodes (beeagent-pods) and the workflow service, and distributes different work tasks to each work node for execution according to the execution logic specified by the workflow's execution manifest. The work node (beeagent-pod) copies the parameter files required by a work task and interacts with the CMDB to obtain the connection information of the remote host, so as to connect to the remote host and control it to execute the corresponding work task. The CMDB stores the remote host information used by the Ansible connection plug-in to connect to the remote host.
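The CMDB interaction described above can be sketched as a simple lookup from host identifier to connection information. The host names, addresses, and users below are invented, and a real CMDB would be an external service rather than an in-memory dict:

```python
# Hypothetical in-memory CMDB: maps a remote-host identifier to the
# connection information the embedded connection plug-in needs to log in.
CMDB = {
    "web-01": {"address": "10.0.0.11", "user": "deploy", "port": 22},
    "db-01":  {"address": "10.0.0.21", "user": "dba",    "port": 22},
}

def lookup_connection_info(host_id):
    """Sketch of the worker node's CMDB query: resolve a remote-host
    identifier from the execution manifest into login details."""
    info = CMDB.get(host_id)
    if info is None:
        raise KeyError(f"host {host_id!r} not registered in CMDB")
    return info
```

Because the lookup happens inside the work node, the user never writes connection details into the workflow itself; changing a host's credentials means updating one CMDB entry.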
Step 202, after the workflow service receives the work task orchestration request, the workflow service determines the remote host information corresponding to the request.
Step 203, create the work tasks corresponding to the different remote hosts according to the remote host information and the work task orchestration request.
Step 204, the workflow service creates a work task according to the work task orchestration request and generates a first custom resource corresponding to the work task in the k8s cluster.
Step 205, perform work orchestration according to the first custom resource, generate the corresponding workflow template, and generate a second custom resource corresponding to the workflow template in the k8s cluster.
Further, as an optional implementation of step 205, the step of performing work orchestration according to the first custom resource and generating the corresponding workflow template includes: based on the type of the work task, orchestrating the first custom resource using a directed acyclic graph and generating the corresponding workflow template.
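The directed-acyclic-graph orchestration of step 205 can be sketched with Python's standard topological sorter. This is illustrative only; the task names and dependency shape are invented:

```python
from graphlib import TopologicalSorter

def orchestrate_dag(dependencies):
    """Sketch of DAG-based orchestration: given each work task's declared
    prerequisite tasks, produce one valid execution order. graphlib
    raises CycleError if the declared dependencies contain a cycle."""
    return list(TopologicalSorter(dependencies).static_order())

# Hypothetical template: deploy depends on compile and package,
# package depends on compile.
order = orchestrate_dag({
    "deploy":  {"compile", "package"},
    "package": {"compile"},
    "compile": set(),
})
```

A workflow template built this way guarantees every task runs only after its declared prerequisites, which is the property the patent relies on for complex dependency execution.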
Step 206, obtain the run parameters corresponding to the workflow template, generate the corresponding workflow from the second custom resource and the run parameters, and generate a third custom resource corresponding to the workflow in the k8s cluster.
In a specific implementation, a work task orchestration request is generated from the user's requirement to create a work task. The workflow service creates the work task according to the received request, creates the Work CR (first custom resource) corresponding to it in the k8s cluster, and stores it in a database such as etcd. Through the front-end UI, a work list corresponding to the work task is generated from the user's drag-and-drop form operations; the workflow service generates a Work CR list from the obtained work list and stores it in the database. The workflow template corresponding to the Work CR list is then created from further drag-and-drop operations, and a WorkflowTemplate CR (second custom resource) is created from the template and stored in the database. A workflow template list corresponding to the WorkflowTemplate CR is likewise generated from drag-and-drop operations, and the workflow service generates a WorkflowTemplate CR list and stores it in the database. After a workflow start operation, triggered with the run parameters selected by the user, is observed, the workflow service creates the Workflow CR (third custom resource) from the WorkflowTemplate CR list and stores it in the database. It can be seen that the front-end UI design makes drag-and-drop form operations convenient for the user and supports single-step debugging of jobs, functions that the existing Ansible tooling does not support.
Creating a work task specifically comprises the following steps:
a) prepare the executable file: a script or other binary file;
b) store the executable file in ceph object storage;
c) prepare the run parameters required for execution: when a parameter is a file, upload it to ceph object storage;
d) all files must specify an absolute path on the remote host, to prevent relative paths from making the execution result differ from the expected result;
e) if an artifact or parameter needs to be passed on, the parameter is obtained by reading a file, so the parameter must be written to a file whose path is specified;
f) select a default remote host from the CMDB, or modify it before execution;
g) write the execution manifest (execution-manifest); the specific execution logic comprises copying the executable file, copying the parameter file, executing the executable file, obtaining the remote host's output result, copying the output parameters, and copying the output file;
h) generate the CR corresponding to the work task in the k8s cluster.
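The steps above can be sketched as a minimal execution manifest expressed as data, mirroring the execution logic of step g) and the absolute-path requirement of step d). All step names, source URLs, and paths are hypothetical, not the patent's schema:

```python
# Hypothetical execution manifest for one work task: the ordered steps a
# work node carries out against its remote host.
EXECUTION_MANIFEST = [
    {"step": "copy_executable",    "src": "ceph://jobs/build.sh",   "dest": "/opt/run/build.sh"},
    {"step": "copy_param_file",    "src": "ceph://jobs/params.env", "dest": "/opt/run/params.env"},
    {"step": "execute",            "path": "/opt/run/build.sh"},
    {"step": "collect_output"},
    {"step": "copy_output_params", "path": "/opt/run/out.env"},
    {"step": "copy_output_file",   "path": "/opt/run/artifact.tar"},
]

def validate_manifest(manifest):
    """Reject any remote path that is not absolute, per requirement d):
    relative paths could make results differ from expectations."""
    for entry in manifest:
        for key in ("dest", "path"):
            p = entry.get(key)
            if p is not None and not p.startswith("/"):
                raise ValueError(f"relative remote path in {entry['step']}: {p}")
    return True
```

Validating the manifest before dispatch keeps the failure local to task creation rather than surfacing as a wrong result on the remote host.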
Workflow template orchestration specifically comprises:
a) determine the corresponding work tasks according to the user's operation, and classify the work tasks with labels by type so the user can find them easily;
b) orchestrate using a directed acyclic graph, specifying the prerequisite work each task depends on;
c) specify the run parameters for executing a work task: a global parameter, or the output parameter (parameter out) of a preceding step, may be used as the current step's input parameter (parameter in);
d) modify the default remote host corresponding to the work task; it may be modified again at execution time;
e) generate the CR corresponding to the workflow template in the k8s cluster.
As shown in Fig. 3, depending on the actual application scenario, the works may include work A (compile code), work B (deployment task), work C (installer), work D (uninstaller), work E, and so on. The works are orchestrated through the user's drag-and-drop operations on the UI to obtain the relations between them and generate a workflow template: for example, step 1 is work A, step 2 is work B, and step 3 is work C; or step 1 is work A, and step 2 runs work B and work C in a parallel relation. The corresponding workflows are then generated from the different workflow templates and their run parameters.
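The sequential-then-parallel arrangements of Fig. 3 can be sketched by grouping works into execution waves: every work whose prerequisites are already done runs in the same wave. The dependency shape below (B and C both depending on A) is one illustrative reading of the figure:

```python
def execution_waves(dependencies):
    """Sketch: partition works into parallel stages. Each wave contains
    all works whose declared prerequisites have completed."""
    waves, done = [], set()
    remaining = dict(dependencies)
    while remaining:
        ready = sorted(w for w, deps in remaining.items() if deps <= done)
        if not ready:
            raise ValueError("cycle in work dependencies")
        waves.append(ready)
        done.update(ready)
        for w in ready:
            del remaining[w]
    return waves

# Step 1: work A; step 2: works B and C in parallel (both depend on A).
waves = execution_waves({"A": set(), "B": {"A"}, "C": {"A"}})
```

Each wave maps naturally onto concurrent work nodes in the k8s cluster, which is what gives the scheme its parallelism over a plain playbook.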
Step 207, after the workflow engine observes the third custom resource, create the corresponding work nodes according to the different remote hosts.
Step 208, copy the work tasks corresponding to the third custom resource from the workflow service and distribute them to the created work nodes.
Step 209, the work node copies the execution manifest corresponding to the work task from the workflow service.
Step 210, determine the remote host information corresponding to the work task according to the execution manifest.
Step 211, call the corresponding connection plug-in according to the connection information in the remote host information to connect to the corresponding remote host.
Step 212, remotely control the remote host to execute the corresponding workflow according to the executable file in the execution manifest.
In a specific implementation, after the workflow engine observes that a Workflow CR has been stored in the database, it parses the Workflow CR to obtain the corresponding workflow and workflow template, and creates the corresponding work nodes for the different remote hosts according to the obtained workflow, thereby scheduling those work nodes: the work tasks corresponding to the Workflow CR are copied from the workflow service and distributed to the created work nodes. Each work node copies the execution manifest corresponding to its work task from the workflow service, obtains the remote host information for the task from the CMDB according to the manifest, and connects to the remote host by calling the embedded connection plug-in with the connection information, so as to remotely control the connected host to execute the corresponding workflow according to the executable file in the manifest. The workflow engine stores the execution state output by the remote host into the database, so that the workflow service can report it back to the user.
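The engine's dispatch step, creating work nodes per remote host and handing each its share of tasks, can be sketched as a grouping operation. Task and host names are illustrative:

```python
from collections import defaultdict

def dispatch_to_worker_nodes(tasks):
    """Sketch of the workflow engine's dispatch after observing a
    Workflow CR: group tasks by target remote host, so that one work
    node (beeagent-pod) can be created per host to execute its share."""
    nodes = defaultdict(list)
    for task in tasks:
        nodes[task["host"]].append(task["name"])
    return dict(nodes)

nodes = dispatch_to_worker_nodes([
    {"name": "compile", "host": "build-01"},
    {"name": "package", "host": "build-01"},
    {"name": "deploy",  "host": "web-01"},
])
```

Grouping by host is what lets a later host failure be isolated: only the work node bound to the failed host needs attention.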
Executing the workflow specifically comprises:
a) the workflow service bee-workflow creates a Workflow CR according to the workflow template;
b) after the workflow engine's workflow-controller observes the creation of the Workflow CR, it triggers the creation of a work node (beeagent-pod);
c) the work node pulls the required files from ceph object storage according to the run parameters of the work task;
d) the work node connects to the designated remote host;
e) the work node executes the task according to the execution manifest.
Unlike prior-art orchestration schemes that execute in units of a whole job, where different steps of the job run on multiple remote hosts and a failure of any one host affects the entire job, this embodiment creates different work tasks in units of the remote host, from the operations perspective of the hosts themselves. When a remote host later fails, maintenance can be performed only on the failed host, effectively reducing maintenance cost and difficulty.
By applying the technical scheme of this embodiment, the connection plug-in used for data interaction in the operation and maintenance tool is embedded into the workflow engine; after the workflow service receives a work task orchestration request, the workflow engine is driven to distribute different work tasks to different work nodes according to the execution logic of the different remote hosts in the workflow service, and each work node connects to its remote host using the connection plug-in so as to control the remote host to complete the corresponding operation according to the executable file of the work task. This preserves the cloud native architecture itself while remote hosts are directly controlled to execute user tasks, and at the same time supports strong work orchestration capability and file information transfer between workflows.
Further, as a specific implementation of the method of Fig. 1, an embodiment of the present application provides a cloud native-based work orchestration apparatus, as shown in Fig. 4. The apparatus includes an embedding module 31, a driving module 32, and a sending module 33.
The embedding module 31 is configured to embed a connection plug-in used for data interaction in the operation and maintenance tool into the workflow engine.

The driving module 32 is configured to drive the workflow engine, after the workflow service receives a work task orchestration request, to distribute different work tasks to different work nodes according to the execution logic of the different remote hosts in the workflow service.

The sending module 33 is configured to have the work node connect to a remote host through the connection plug-in, so as to control the remote host to complete the corresponding operation according to the executable file of the work task.
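As an illustrative sketch only (the class and method names below are hypothetical, not taken from the application), the three-module split can be modeled as follows: the embedding module injects the plug-in into a workflow engine, the driving module turns an orchestration request into one work node per remote host, and the sending module dispatches each executable file through the plug-in:

```python
# Hypothetical sketch of the three-module device; all names are illustrative.
class ConnectionPlugin:
    """Stands in for the O&M tool's data-interaction plug-in (e.g. a remote shell channel)."""
    def run(self, host, executable):
        # A real plug-in would open a remote session; here we just record the call.
        return f"{host} ran {executable}"

class EmbeddingModule:
    def embed(self, engine, plugin):
        engine.plugin = plugin  # make the plug-in available to every work node

class DrivingModule:
    def drive(self, engine, request):
        # One work node per remote host named in the orchestration request.
        return {host: {"host": host, "executable": exe}
                for host, exe in request.items()}

class SendingModule:
    def send(self, engine, nodes):
        # Each work node hands its executable file to the embedded plug-in.
        return [engine.plugin.run(n["host"], n["executable"]) for n in nodes.values()]

class WorkflowEngine:
    plugin = None

engine = WorkflowEngine()
EmbeddingModule().embed(engine, ConnectionPlugin())
nodes = DrivingModule().drive(engine, {"host-a": "deploy.sh", "host-b": "backup.sh"})
results = SendingModule().send(engine, nodes)
```

The sketch shows only the control flow between the three modules; the actual plug-in, engine, and node implementations are described in the embodiments above.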
In a specific application scenario, as shown in fig. 5, the driving module 32 includes a task creation unit 321, an orchestration unit 322, and a distribution unit 323.
In a specific application scenario, the task creation unit 321 is configured to have the workflow service determine, according to the work task orchestration request, the remote host information corresponding to the request, and to create work tasks corresponding to the different remote hosts according to the remote host information and the work task orchestration request.
In a specific application scenario, the orchestration unit 322 is configured to have the workflow service create a work task according to the work task orchestration request and generate, in the k8s cluster, a first custom resource corresponding to the work task; to perform work orchestration according to the first custom resource, generate a corresponding workflow template, and generate, in the k8s cluster, a second custom resource corresponding to the workflow template; and to acquire the operation parameters corresponding to the workflow template, generate a corresponding workflow according to the second custom resource and the operation parameters, and generate, in the k8s cluster, a third custom resource corresponding to the workflow.
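A minimal sketch of this three-stage resource chain, assuming (hypothetically) Argo-style custom resources in a k8s cluster; the `kind` names, API group, and field layout below are illustrative, not taken from the application:

```python
# Illustrative manifests for the three custom resources; kinds and fields are assumed.
def make_task_cr(name, host, executable):
    """First custom resource: one work task bound to a remote host."""
    return {"apiVersion": "example.io/v1", "kind": "WorkTask",
            "metadata": {"name": name},
            "spec": {"host": host, "executable": executable}}

def make_template_cr(name, task_names):
    """Second custom resource: a workflow template orchestrating the tasks."""
    return {"apiVersion": "example.io/v1", "kind": "WorkflowTemplate",
            "metadata": {"name": name},
            "spec": {"tasks": task_names}}

def make_workflow_cr(name, template, params):
    """Third custom resource: a concrete workflow = template + operation parameters."""
    return {"apiVersion": "example.io/v1", "kind": "Workflow",
            "metadata": {"name": name},
            "spec": {"templateRef": template["metadata"]["name"],
                     "parameters": params}}

task = make_task_cr("task-a", "10.0.0.5", "deploy.sh")
template = make_template_cr("tpl-1", [task["metadata"]["name"]])
workflow = make_workflow_cr("wf-1", template, {"timeout": "300s"})
```

In a real cluster these manifests would be submitted to the API server (for example via `kubectl apply` or a client library) so that the workflow engine can watch the third custom resource, as described below.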
In a specific application scenario, performing work orchestration according to the first custom resource and generating a corresponding workflow template specifically includes: performing work orchestration on the first custom resource using a directed acyclic graph based on the type of the work task, and generating the corresponding workflow template.
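Because the template is orchestrated as a directed acyclic graph, dependent tasks can be placed in a valid execution order with a topological sort before the template is emitted. A stdlib sketch (the task names and the dependency map are hypothetical):

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: each task lists the tasks it depends on.
deps = {
    "fetch-config":  [],
    "stop-service":  ["fetch-config"],
    "deploy":        ["stop-service"],
    "start-service": ["deploy"],
    "health-check":  ["start-service"],
}

# A topological order of the DAG is a valid execution order for the template.
order = list(TopologicalSorter(deps).static_order())
```

`TopologicalSorter` also raises `CycleError` on a cyclic dependency map, which is exactly the property that makes a directed *acyclic* graph a safe representation for a workflow template.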
In a specific application scenario, the distribution unit 323 is configured to create, after the workflow engine monitors the third custom resource, corresponding work nodes for the different remote hosts, to copy the work task corresponding to the third custom resource from the workflow service, and to distribute the work task to the created work nodes.
In a specific application scenario, the sending module 33 includes a connection unit 331 and a control module 332.
In a specific application scenario, the connection unit 331 is configured to have the work node copy the execution list corresponding to the work task from the workflow service, determine the remote host information corresponding to the work task according to the execution list, and call the corresponding connection plug-in according to the connection information in the remote host information, so as to connect to the corresponding remote host.
In a specific application scenario, the control module 332 is configured to remotely control the remote host to execute a corresponding workflow according to the executable file in the execution list.
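The connection unit 331 and control module 332 together can be sketched as follows. The execution list, host registry, and plug-in table are hypothetical stand-ins; a real connection plug-in (for example an SSH- or agent-based channel from the operation and maintenance tool) is stubbed out:

```python
# Hypothetical host registry: maps a work task to its remote host information.
HOSTS = {"task-a": {"address": "10.0.0.5", "plugin": "ssh"}}

# Hypothetical plug-in table: the connection info selects which plug-in to call.
PLUGINS = {"ssh": lambda address, executable: f"ssh {address} ./{executable}"}

def run_work_task(execution_list):
    """Resolve each entry's remote host and dispatch its executable via a plug-in."""
    commands = []
    for entry in execution_list:            # execution list copied from the workflow service
        host = HOSTS[entry["task"]]         # determine remote host information
        connect = PLUGINS[host["plugin"]]   # pick the plug-in from the connection info
        commands.append(connect(host["address"], entry["executable"]))
    return commands

cmds = run_work_task([{"task": "task-a", "executable": "deploy.sh"}])
```

The stub returns the command it would issue rather than opening a session, which keeps the control-flow of "copy list, resolve host, call plug-in, execute file" visible without assuming any particular remote-access library.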
It should be noted that, for other corresponding descriptions of the functional units of the cloud-native-based work orchestration device provided in the embodiments of the present application, reference may be made to the corresponding descriptions of fig. 1 and fig. 2, which are not repeated herein.
Based on the methods shown in fig. 1 and fig. 2, an embodiment of the present application correspondingly further provides a computer storage medium on which a computer program is stored; when the program is executed by a processor, the cloud-native-based work orchestration method shown in fig. 1 and fig. 2 is implemented.
Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk) and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the various implementation scenarios of the present application.
Based on the methods shown in fig. 1 and fig. 2 and the virtual device embodiment shown in fig. 3, in order to achieve the above objects, an embodiment of the present application further provides a computer device, which may specifically be a personal computer, a server, a network device, or the like. The physical device includes a storage medium and a processor: the storage medium stores a computer program, and the processor executes the computer program to implement the cloud-native-based work orchestration method shown in fig. 1 and fig. 2.
Optionally, the computer device may further include a user interface, a network interface, a camera, Radio Frequency (RF) circuitry, sensors, audio circuitry, a Wi-Fi module, and the like. The user interface may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and optionally may also include a USB interface, a card-reader interface, and the like. The network interface may optionally include a standard wired interface, a wireless interface (e.g., a Bluetooth or Wi-Fi interface), and the like.
It will be appreciated by those skilled in the art that the computer device structure provided in this embodiment does not limit the physical device, which may include more or fewer components, combine certain components, or arrange the components differently.
The storage medium may further include an operating system and a network communication module. The operating system is a program that manages the hardware and software resources of the computer device and supports the execution of the information-processing program as well as other software and/or programs. The network communication module is used to realize communication among the components inside the storage medium, as well as communication with other hardware and software in the physical device.
From the above description of the embodiments, it will be apparent to those skilled in the art that the present application may be implemented by software plus a necessary general-purpose hardware platform, or by hardware. Compared with the prior art, by applying this technical solution, the connection plug-in used for data interaction in the operation and maintenance tool is embedded into the workflow engine; after the workflow service receives a work task orchestration request, the workflow engine is driven to distribute different work tasks to different work nodes according to the execution logic of the different remote hosts in the workflow service, and each work node connects to its remote host through the connection plug-in so as to control the remote host to complete the corresponding operation according to the executable file of the work task. A cloud-native architecture capable of connecting to remote hosts is thus realized on the basis of the operation and maintenance tool's connection plug-in: while keeping the cloud-native architecture lightweight, the remote hosts are controlled directly to execute user tasks, and strong work orchestration capability and file-information transfer among various workflows are supported.
Those skilled in the art will appreciate that the drawings are merely schematic illustrations of a preferred implementation scenario, and that the modules or flows in the drawings are not necessarily required for practicing the present application. Those skilled in the art will also appreciate that the modules of an apparatus in an implementation scenario may be distributed in the apparatus as described, or located, with corresponding changes, in one or more apparatuses different from that implementation scenario. The modules of an implementation scenario may be combined into one module, or further split into a plurality of sub-modules.
The foregoing application serial numbers are merely for description and do not represent the advantages or disadvantages of the implementation scenarios. The foregoing disclosure is merely a few specific implementations of the present application; the present application is not limited thereto, and any variation that can be conceived by a person skilled in the art shall fall within the protection scope of the present application.

Claims (10)

1. A cloud-native-based work orchestration method, comprising:
embedding a connection plug-in used for data interaction in an operation and maintenance tool into a workflow engine;

after a workflow service receives a work task orchestration request, driving the workflow engine to distribute different work tasks to different work nodes according to execution logic of different remote hosts in the workflow service; and

the work node connecting to a remote host through the connection plug-in, so as to control the remote host to complete a corresponding operation according to an executable file of the work task.
2. The method according to claim 1, wherein after the workflow service receives the work task orchestration request, the method further comprises:

determining, by the workflow service according to the work task orchestration request, remote host information corresponding to the work task orchestration request; and

creating work tasks corresponding to different remote hosts according to the remote host information and the work task orchestration request.
3. The method according to claim 1 or 2, wherein the step of driving the workflow engine to distribute different work tasks to different work nodes according to execution logic of different remote hosts in the workflow service further comprises:

creating, by the workflow service, a work task according to the work task orchestration request, and generating a first custom resource corresponding to the work task in a k8s cluster;

performing work orchestration according to the first custom resource, generating a corresponding workflow template, and generating a second custom resource corresponding to the workflow template in the k8s cluster; and

acquiring operation parameters corresponding to the workflow template, generating a corresponding workflow according to the second custom resource and the operation parameters, and generating a third custom resource corresponding to the workflow in the k8s cluster.
4. The method according to claim 3, wherein the step of performing work orchestration according to the first custom resource and generating a corresponding workflow template comprises:

performing work orchestration on the first custom resource using a directed acyclic graph based on the type of the work task, and generating the corresponding workflow template.
5. The method according to claim 3, wherein the step of driving the workflow engine to distribute different work tasks to different work nodes according to execution logic of different remote hosts in the workflow service comprises:

after the workflow engine monitors the third custom resource, creating corresponding work nodes for the different remote hosts respectively; and

copying the work task corresponding to the third custom resource from the workflow service, and distributing the work task to the created work nodes.
6. The method according to claim 1, wherein the step of the work node connecting to a remote host through the connection plug-in comprises:

copying, by the work node, an execution list corresponding to the work task from the workflow service;

determining remote host information corresponding to the work task according to the execution list; and

calling a corresponding connection plug-in according to connection information in the remote host information, so as to connect to the corresponding remote host.
7. The method according to claim 1 or 6, wherein the step of controlling the remote host to complete the corresponding operation according to the executable file of the work task comprises:

remotely controlling the remote host to execute the corresponding workflow according to the executable file in the execution list.
8. A cloud-native-based work orchestration device, comprising:

an embedding module, configured to embed a connection plug-in used for data interaction in an operation and maintenance tool into a workflow engine;

a driving module, configured to drive the workflow engine, after a workflow service receives a work task orchestration request, to distribute different work tasks to different work nodes according to execution logic of different remote hosts in the workflow service; and

a sending module, configured to have the work node connect to a remote host through the connection plug-in, so as to control the remote host to complete a corresponding operation according to an executable file of the work task.
9. A computer storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the cloud-native-based work orchestration method according to any one of claims 1 to 7.
10. A computer device comprising a storage medium, a processor, and a computer program stored on the storage medium and executable on the processor, wherein the processor, when executing the program, implements the cloud-native-based work orchestration method according to any one of claims 1 to 7.
CN202310119191.2A 2023-01-19 2023-01-19 Workflow arrangement method, device, equipment and medium based on cloud protogenesis Pending CN116302398A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310119191.2A CN116302398A (en) 2023-01-19 2023-01-19 Workflow arrangement method, device, equipment and medium based on cloud protogenesis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310119191.2A CN116302398A (en) 2023-01-19 2023-01-19 Workflow arrangement method, device, equipment and medium based on cloud protogenesis

Publications (1)

Publication Number Publication Date
CN116302398A true CN116302398A (en) 2023-06-23

Family

ID=86827888

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310119191.2A Pending CN116302398A (en) 2023-01-19 2023-01-19 Workflow arrangement method, device, equipment and medium based on cloud protogenesis

Country Status (1)

Country Link
CN (1) CN116302398A (en)

Similar Documents

Publication Publication Date Title
KR20220084031A (en) AI-powered process identification, extraction and automation for robotic process automation
WO2018036342A1 (en) Csar-based template design visualization method and device
US8281187B1 (en) Unified and extensible meta-testing framework
US10817819B2 (en) Workflow compilation
US11977470B2 (en) Monitoring long running workflows for robotic process automation
CN111984261A (en) Compiling method and compiling system
US8296723B2 (en) Configurable unified modeling language building blocks
EP3413149B1 (en) Field device commissioning system and field device commissioning method
CN110990048A (en) Method and system for monitoring resource loss of Unity project
KR20200082839A (en) Process Editor Apparatus and Method for Robot Process Automation
CN114297056A (en) Automatic testing method and system
US20230126168A1 (en) Scalable visualization of a containerized application in a multiple-cluster and multiple deployment application environment
US20170364390A1 (en) Automating enablement state inputs to workflows in z/osmf
US11086696B2 (en) Parallel cloned workflow execution
US9053084B1 (en) Self-service testing
CN114265595B (en) Cloud native application development and deployment system and method based on intelligent contracts
CN116302398A (en) Workflow arrangement method, device, equipment and medium based on cloud protogenesis
KR102583146B1 (en) Different types of multi-rpa integrated management systems and methods
CN117806654B (en) Tekton-based custom cloud native DevOps pipeline system and method
US20240210903A1 (en) Software Development (DevOps) Pipelines for Robotic Process Automation
CN114363400B (en) Cloud platform-based application programming method and device and computer-readable storage medium
JP7323755B2 (en) Information processing system, its control method and program
US20220091908A1 (en) Filter instantiation for process graphs of rpa workflows
CN118170367A (en) Workflow operation method and device, electronic equipment and medium
KR20240058354A (en) Apparatus for executing workflow to perform distributed processing analysis tasks in a container environment and method for the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination