US20200272433A1 - Workflow engine tool - Google Patents

Workflow engine tool

Info

Publication number
US20200272433A1
Authority
US
United States
Prior art keywords
workflow
workflow engine
modules
engine
generated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US16/285,180
Other versions
US10768908B1
Inventor
Yu Wang
Yu Hu
Haiyuan Cao
Hui Su
Jinchao Li
Xinying Song
Jianfeng Gao
Current Assignee
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Technology Licensing LLC
Priority date
Filing date
Publication date
Application filed by Microsoft Technology Licensing LLC filed Critical Microsoft Technology Licensing LLC
Priority to US16/285,180 (US10768908B1)
Priority to CN202080016434.4A (CN113826070A)
Priority to EP20705877.7A (EP3931684B1)
Priority to PCT/US2020/014688 (WO2020176177A1)
Priority to US16/945,321 (US11327726B2)
Publication of US20200272433A1
Application granted
Publication of US10768908B1
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC reassignment MICROSOFT TECHNOLOGY LICENSING, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CAO, HAIYUAN, SONG, XINYING, WANG, YU, GAO, JIANFENG, HU, YU, LI, JINCHAO, SU, HUI
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/10 Requirements analysis; Specification techniques
    • G06F 8/30 Creation or generation of source code
    • G06F 8/35 Creation or generation of source code, model driven
    • G06F 8/40 Transformation of program code
    • G06F 8/41 Compilation
    • G06F 8/43 Checking; Contextual analysis
    • G06F 8/433 Dependency analysis; Data or control flow analysis
    • G06F 8/70 Software maintenance or management
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication
    • G06F 9/544 Buffers; Shared memory; Pipes
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/901 Indexing; Data structures therefor; Storage structures
    • G06F 16/9024 Graphs; Linked lists
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
    • G06Q 10/00 Administration; Management
    • G06Q 10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q 10/063 Operations research, analysis or management
    • G06Q 10/0633 Workflow analysis

Definitions

  • a directed acyclic graph defines a workflow pipeline for exploiting data to produce some desired results.
  • users (e.g., engineers and scientists) conventionally author DAGs either manually, through a graphical user interface (GUI), or by writing code directly. Neither of these approaches is natural or intuitive, and both are prone to error.
  • Generating, adapting, and reviewing the generation of DAGs can introduce significant overhead effort when DAGs become large and complex.
  • a workflow engine tool that enables scientists and engineers to programmatically author workflows (e.g., a directed acyclic graph, “DAG”) with nearly no overhead, using a simple script that needs almost no modifications for portability among multiple different workflow engines. This permits users to focus on the business logic of the project, avoiding the distracting tedious overhead related to workflow management (such as uploading modules, drawing edges, setting parameters, and other tasks).
  • the workflow engine tool provides an abstraction layer on top of workflow engines, introducing a binding function that converts a programming language function (e.g., a normal python function) into a workflow module definition.
  • the workflow engine tool infers module instances and induces edge dependencies automatically by inferring from a programming language script to build a DAG.
  • An example workflow engine tool comprises: a processor; a computer-readable medium storing instructions that are operative when executed by the processor to: extract module definitions from a programming language script; extract execution flow information from the programming language script; generate, for a first workflow engine, modules from the extracted module definitions; generate, for the first workflow engine, edges for connecting the modules generated for the first workflow engine; and connect the modules generated for the first workflow engine with the edges generated for the first workflow engine, based at least on the extracted execution flow information, to generate a first workflow pipeline for the first workflow engine.
  • FIG. 1 illustrates an arrangement that can advantageously use an exemplary workflow engine tool
  • FIG. 2 shows an example workflow module authoring page
  • FIG. 3 shows a portion of an example script used by the workflow engine tool of FIG. 1 ;
  • FIG. 4 shows another portion of the example script of FIG. 3 ;
  • FIG. 5 illustrates intermediate results of the workflow engine tool of FIG. 1 operating on the script shown in FIGS. 3 and 4 ;
  • FIG. 6 illustrates extraction of information in various data fields, by the workflow engine tool of FIG. 1 ;
  • FIG. 7 is a flowchart of operations associated with operating the workflow engine tool of FIG. 1 ;
  • FIG. 8 is another flowchart of operations associated with operating the workflow engine tool of FIG. 1 ;
  • FIG. 9 is a block diagram of an example computing environment suitable for implementing some of the various examples disclosed herein.
  • a workflow consists of an orchestrated and repeatable pattern of business activity, enabled by the systematic organization of resources into processes that transform materials, provide services, or process information. It can be depicted as a sequence of operations: the work of a person or group, the work of an organization of staff, or one or more simple or complex mechanisms.
  • Flow may refer to a document, service, or product that is being transferred from one step to another.
  • workflow management systems may be viewed as one fundamental building block to be combined with other parts of an organization's structure such as information technology, teams, projects and hierarchies.
  • the benefits of workflow management systems include:
  • workflow pipelines include:
  • Manual creation of job modules requires specifying edges to connect modules into DAGs, which is tedious, labor-intensive, and error-prone.
  • the workflow engine tool induces workflow graphs from a programming script (e.g., a python script) such that module definition is inferred from a function definition (e.g., a function definition including doc strings), including inducing inputs, parameters, and outputs.
  • Module instances are also referred to as tasks and nodes.
  • Edges and execution dependencies are inferred based on variables generated from one function call and fed as inputs and parameters into other functions.
  • the workflow engine tool thus creates an abstraction layer that serves as a bridge between the user's script and a target workflow engine.
  • the script can generate DAGs that run on multiple different workflow engines without burdensome changes.
  • the same script can even run in the absence of a workflow engine.
  • the workflow engine tool automatically resolves task and data dependencies, handles data persistence and caching natively, and permits the same script to run locally and on a workflow cluster, for multiple different workflow engines. This eliminates the need to manually create DAGs or program in special APIs.
  • FIG. 1 illustrates an arrangement 100 that can advantageously use an exemplary workflow engine tool 102 .
  • Workflow engine tool 102 includes multiple adapters 104 a - 104 d that correspond to multiple different workflow engines 106 a - 106 d , permitting a common script 122 in programming environment 120 to run on any of multiple different workflow engines 106 a - 106 d .
  • a different number of workflow engines can be supported.
  • at least one of workflow engines 106 b - 106 d is a dummy abstraction layer, permitting workflow engine tool 102 to run in the absence of a workflow engine.
  • a DAG 110 is generated for workflow engine 106 a , and includes nodes 112 a - 112 c , along with edges 114 a and 114 b that connect outputs 116 a and 116 b , of nodes 112 a and 112 b respectively, with inputs 118 c and 118 d of node 112 c .
  • node 112 c also has an output 116 c.
  • a workflow DAG (e.g., DAG 110 ) consists of nodes (e.g., nodes 112 a - 112 c ) and directed edges (e.g., edges 114 a and 114 b ).
  • a node (e.g., a module) represents a single unit of work in the pipeline.
  • Each node has zero or more inputs, zero or more parameters, and zero or more outputs. Inputs and outputs are the data passing from one node to another.
  • the data exchanged can be in multiple different forms, such as memory objects, shared files, distributed files, and others.
  • One node's output may become the input of another node, which at least partially dictates the execution dependency between nodes. For example, since node 112 c has input 118 c , which comes from output 116 a of node 112 a , then node 112 c cannot be executed until node 112 a has been successfully executed and produced output 116 a .
  • the dependency between nodes can be denoted as edges in the workflow DAG.
  • a module may have multiple inputs and multiple outputs. Often, each input and output is associated with a unique name as its identifier.
  • a parameter is essential data that is needed for a particular execution run of a node, but is not output by another node.
  • a node for training an AI model may have parameters such as “number of epochs” or “learning rate”. Both inputs and parameters are essential data that is needed for executing a node.
  • the difference between inputs and parameters is that inputs are data that is generated dynamically by upstream nodes during the run, and therefore dictate execution flow dependency (edges) between nodes, whereas parameters are data that is not generated by upstream nodes, but is instead specified by users prior to the run. Therefore, parameters do not affect the dependency between nodes. Often, parameters are simple values that can be easily specified.
  • the user has a choice of specifying data as an input or a parameter. For example, if data is complex, then even though it can be a parameter, a function may be written to generate it, therefore rendering the data into an input. Also, in some situations, input, output, and parameters may have enforced data types, such as integer, floating point number, string, date-time, or other pre-defined data types. In such situations, when an output is connected to an input, the data type should be consistent.
  • Edges are directed (directional, pointing from an output to an input), and therefore dictate at least some aspects of execution dependency between nodes (modules and tasks).
  • the edges in a DAG must be defined such that a DAG is acyclic, in order to permit determination of execution order. Therefore, since node 112 c has an input 118 c that comes from output 116 a of node 112 a (through edge 114 a ), node 112 a cannot have an input that depends (either directly, or through intervening modules) on output 116 c of node 112 c .
  • there could be multiple edges between two modules because edges connect inputs to outputs, and the modules may each have multiple inputs or outputs. However, to prevent cyclical dependencies, all edges should have the same direction, in such situations.
  • a single output may connect to multiple inputs, when data generated by one module is consumed by multiple other modules. However, an input of a module can come from only one source.
  • An execution of a DAG pipeline represents a particular run of the DAG. Parameters must be pre-specified before the DAG can be executed, and the DAG may also require some data from external data sources, which is brought into the DAG by loading modules. DAG runs are deterministic, meaning that, given the same input data and parameters, the final output will be consistent across multiple execution runs.
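Because edges are directed and the graph is acyclic, an execution order always exists and can be recovered with an ordinary topological sort. A sketch using Python's standard `graphlib` (the predecessor map below is an illustrative encoding of edges 114 a and 114 b of DAG 110, not code from the patent):

```python
from graphlib import TopologicalSorter

# node -> set of upstream nodes it depends on (edges 114a and 114b of DAG 110)
predecessors = {
    "node_112c": {"node_112a", "node_112b"},
    "node_112a": set(),
    "node_112b": set(),
}

# static_order yields a valid execution order; a cycle would raise CycleError
order = list(TopologicalSorter(predecessors).static_order())
```

Nodes 112 a and 112 b may appear in either order, but both always precede node 112 c, matching the dependency discussion above.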
  • a workflow engine (e.g., workflow engine 106 a ) executes workflow pipelines such as DAG 110 .
  • Programming environment 120 represents a generic programming environment, such as python or C#.
  • An execution module 130 permits script 122 , which is written in a high-level programming language, to execute. For line-by-line interpreted languages, for example, execution module 130 provides an executing virtual machine.
  • programming environment 120 is a python environment.
  • Script 122 is illustrated as including three parts: a workflow identifier and decorator 124 , a function definition portion 126 (e.g., a function definition including doc strings), and a program logic portion 128 . Python provides for a decorator, which is a function that takes another function and extends the behavior of the latter function without explicitly modifying it.
  • workflow identifier and decorator 124 is implemented as a “bind” function, which is a single line of code that converts a function into a module definition that can be recognized and used in a workflow engine.
  • An example and further detail will be provided in reference to FIG. 5 .
  • Workflow identifier and decorator 124 enables workflow engine tool 102 to select a target one of adapters 104 a - 104 d , so that workflow engine tool 102 can create different engine wrappers for different underlying workflow engines.
  • components 142 - 152 will be described in relation to FIG. 5 , after the descriptions of FIGS. 2 through 4 .
  • FIG. 2 shows an example workflow module authoring page 200 .
  • Workflow module authoring page 200 is used for manually generating modules for DAGs and has multiple data fields for a user to complete.
  • Workflow engine tool 102 is able to extract, infer, or otherwise generate information from script 122 that corresponds to multiple data fields in workflow module authoring page 200 , thereby rendering workflow module authoring page 200 obsolete in various use case scenarios.
  • FIG. 3 shows a script portion 300 that provides function definitions for use with workflow engine tool 102 , in which a user leverages workflow engine tool 102 to use normal programming language syntax to specify DAGs.
  • script portion 300 forms an example of function definition portion 126 .
  • the illustrated example of script portion 300 uses the following functions for an exemplary AI training and testing operation:
  • script portion 300 is:
  • the function parameters correspond to module inputs and module parameters, because they are the data needed for a function to run in a general programming environment and for a module to run in a workflow environment. Functions also have returns, which correspond to the module outputs. Workflow engine tool 102 frees users from having to manually specify the inputs, parameters, and outputs of modules, by extracting this data from script portion 300 .
  • the function definitions include doc strings that specify: (1) which function parameters are module inputs, (2) which function parameters are the module parameters, and (3) what the function returns as the module output or outputs.
  • the function test( ) must execute after both prepare_data( ) and train( ), and train( ) must execute after prepare_data( ). This means that the order of execution is prepare_data( ), then train( ), and then test( ).
  • Workflow engine tool 102 extracts this information in order to generate modules with the correct inputs, parameters, and outputs, and then generate edges to connect the modules.
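Script portion 300 itself is not reproduced in this text. As a purely illustrative sketch, the `:input:`/`:param:`/`:output:` doc-string markup below is an assumption (the patent does not specify the exact convention), but it shows how function definitions of the shape described, using the function names named above, could carry the three kinds of metadata, and how a tool could classify a function's fields from them:

```python
import inspect

def prepare_data(data_path):
    """Load and split the raw data.

    :input data_path: location of the raw data
    :output train_x, train_y, test_x, test_y: the split datasets
    """

def train(train_x, train_y, num_epochs=10):
    """Fit a classifier.

    :input train_x, train_y: training features and labels
    :param num_epochs: number of passes over the data
    :output classifier: the trained model
    """

def module_definition(fn):
    """Read a function's doc string and classify its data fields as
    module inputs, module parameters, or module outputs."""
    spec = {"name": fn.__name__, "inputs": [], "parameters": [], "outputs": []}
    for line in (inspect.getdoc(fn) or "").splitlines():
        line = line.strip()
        for tag, key in ((":input", "inputs"), (":param", "parameters"),
                         (":output", "outputs")):
            if line.startswith(tag):
                names = line[len(tag):].split(":", 1)[0]
                spec[key] += [n.strip() for n in names.split(",")]
    return spec
```

Given definitions like these, `module_definition(train)` yields inputs `train_x, train_y`, parameter `num_epochs`, and output `classifier`, which is exactly the information a module authoring page would otherwise collect by hand.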
  • FIG. 4 shows a script portion 400 that may be combined with script portion 300 for use with workflow engine tool 102 .
  • script portion 400 provides program logic that advantageously uses the function definitions provided in script portion 300 , and forms an example of program logic portion 128 .
  • the illustrated example of script portion 400 uses the above-described functions for the exemplary AI training and testing operation:
  • FIG. 5 illustrates intermediate results of workflow engine tool 102 operating on a script 522 .
  • a script 522 includes script portions 300 and 400 (of FIGS. 3 and 4 , respectively), along with workflow identifier and decorator 124 that specifies adapter 104 a (for workflow engine 106 a ) and invokes a python decorator to convert the defined functions into workflow module definitions.
  • workflow identifier and decorator 124 is:
  • a DAG 510 has a node 512 a that corresponds to the prepare_data( ) function described above, a node 512 b that corresponds to the train( ) function described above, and a node 512 c that corresponds to the test( ) function described above. Additionally, edges are illustrated between the nodes. For simplicity of illustration, edges 514 a and 514 b are shown as single edges, although there would be two of each, to match the four output values of node 512 a (train_x, train_y, test_x, and test_y), which are shown as a single output 516 a for clarity of illustration.
  • edge 514 a connects test_x and test_y from output 516 a to an input 518 a (there would actually be two) of node 512 c ;
  • edge 514 b connects train_x and train_y from output 516 a to an input 518 b (there would actually be two) of node 512 b ;
  • edge 514 c connects classifier from output 516 b of node 512 b to an input 518 c of node 512 c . This is illustrated in FIG. 6 , along with a parameter 618 for node 512 b , and an output 616 from node 512 c.
  • a module extraction component 142 uses the function definitions in script 522 (e.g., script portion 300 ) to extract information used by a module generation component 144 to generate modules (e.g., nodes 512 a - 512 c ).
  • When the modules are created, they will need a label, or a module ID, in order to be addressed and handled by workflow engine 106 a .
  • One way to generate module IDs for assigning to nodes 512 a - 512 c is to use upstream dependencies.
  • node 512 a is upstream from both of nodes 512 b and 512 c
  • node 512 b is upstream from node 512 c
  • a module ID can then be generated for node 512 c using information from nodes 512 a and 512 b , for example by combining their module IDs and passing them through a function that is unlikely to produce collisions for module IDs.
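One concrete way to realize such a collision-resistant ID function is to hash the module's own name together with the IDs of its upstream dependencies. SHA-256 and the 12-character truncation below are illustrative choices; the patent does not name a specific hash.

```python
import hashlib

def module_id(name, upstream_ids=()):
    """Derive a stable module ID from the module's own name plus the IDs
    of its upstream dependencies, hashed so that distinct dependency
    sets are unlikely to collide."""
    digest = hashlib.sha256(name.encode())
    for dep in sorted(upstream_ids):        # order of upstreams is irrelevant
        digest.update(dep.encode())
    return digest.hexdigest()[:12]

# IDs for nodes 512a-512c: each downstream ID folds in its upstream IDs
id_512a = module_id("prepare_data")
id_512b = module_id("train", [id_512a])
id_512c = module_id("test", [id_512a, id_512b])
```

Because the hash is deterministic, re-running the script reproduces the same IDs, and changing any upstream module changes the IDs of everything downstream of it.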
  • a workflow interface component 150 sends the generated modules to workflow engine 106 a .
  • An execution flow extraction component 146 uses the input and output information to extract execution flow information, and an edge generation component 148 generates edges (e.g., edges 514 a - 514 c ) that connect outputs (e.g., outputs 516 a and 516 b ) with inputs (e.g., inputs ( 518 a - 518 c ) of the modules generated by module generation component 144 .
  • a workflow decorator component 152 binds the functions according to the specific workflow engine targeted (e.g., workflow engine 106 a ), so that DAG 510 can be generated.
  • Workflow interface component 150 executes the DAG 510 and the underlying system calls for workflow engine 106 a.
  • a series of function calls can be grouped into a more complex structure, for example by creating sub-graphs and using the sub-graphs in a top-level workflow pipeline.
  • An example abbreviated script, using the previously-identified functions is:
  • DAGs can be created as needed, based on conditional branching logic, such as if-then branching and for-loops, subject to the information necessary for compiling a DAG being available at the time the information is to be extracted from the script.
  • the decorator functionality permits the use of workflow-specific functionality, identified in some parameters.
  • some workflows may support a mode specification, according to the following example modification of script portion 400 (of FIG. 4 ):
  • When workflow engine tool 102 reads the additional parameters, it will recognize them as workflow-specific parameters, based on the original parameters of the module functions in the doc string.
  • a dummy wrapper binds the functions, but eliminates the additional parameters and passes only the original parameters to the original function. In this way, by just changing one line, script 522 could run normally without any further changes.
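A sketch of such a dummy wrapper follows; the name `dummy_bind` is an assumption. It inspects the original function's signature and silently drops any keyword arguments the function does not declare, so a script carrying engine-specific parameters (such as a mode specification) still runs with no workflow engine present.

```python
import functools
import inspect

def dummy_bind(fn):
    """Dummy-engine wrapper: accepts workflow-specific keyword arguments
    but drops any that the original function does not declare, passing
    only the original parameters through."""
    allowed = set(inspect.signature(fn).parameters)
    @functools.wraps(fn)
    def inner(*args, **kwargs):
        return fn(*args, **{k: v for k, v in kwargs.items() if k in allowed})
    return inner

@dummy_bind
def train(train_x, num_epochs=10):
    return len(train_x) * num_epochs

# 'mode' is a workflow-specific parameter; the dummy wrapper discards it,
# so the call behaves exactly as if the extra argument were never written
result = train([1, 2, 3], num_epochs=2, mode="spark")
```

This is the mechanism by which changing one line (the bind) lets the same script run normally, without any further changes.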
  • FIG. 7 is a flowchart 700 of operations associated with operating workflow engine tool 102 .
  • workflow engine tool 102 drastically simplifies workflow generation for users. Rather than manually creating modules (e.g., using a workflow module authoring page 200 for each module), and then manually connecting the modules, a user merely needs to perform operations 702 through 710, as illustrated.
  • In operation 702 , the user develops task functions, including doc string definitions, producing an equivalent of script portion 300 (of FIG. 3 ).
  • In operation 704 , the user writes the program logic in a generic programming language, producing an equivalent of script portion 400 (of FIG. 4 ).
  • the user identifies the target workflow engine, and binds the functions, using workflow identifier and decorator 124 , in operation 708 . This permits the user to generate a workflow, merely by running the script, in operation 710 .
  • FIG. 8 is a flowchart 800 of operations associated with operating workflow engine tool 102 .
  • the following operations are implemented as computer-executable instructions stored on and executed by computing device 900 .
  • Operation 802 includes writing a programming language script in a programming environment.
  • the programming language is the python programming language and the programming language script is a python script.
  • In some examples, C# is used instead.
  • Operation 804 includes a workflow engine tool receiving the programming language script.
  • Operation 806 includes extracting module definitions from the programming language script, and operation 808 includes extracting execution flow information from the programming language script.
  • Operation 810 then includes generating, for the workflow engine, modules from the extracted module definitions. In some examples, this includes using a binding function.
  • Operation 812 includes generating module IDs, for modules generated for the workflow engine, based at least on upstream dependencies.
  • Operation 814 includes generating, for the workflow engine, edges for connecting the modules generated for the workflow engine, and operation 816 includes connecting the modules generated for the workflow engine with the edges generated for the workflow engine, based at least on the extracted execution flow information, to generate a workflow pipeline for the workflow engine.
  • the workflow pipeline comprises a DAG.
  • Operation 818 includes recognizing a workflow-specific parameter. If the targeted workflow engine is supported, as determined by decision operation 820 , then operation 822 includes passing the workflow-specific parameter to the workflow engine. Otherwise, operation 824 eliminates the workflow-specific parameter, and does not pass it. Operation 826 then includes running the workflow engine to execute the workflow pipeline.
  • operation 830 includes slightly modifying the programming language script for the second workflow engine. In some examples, this is as simple as modifying a single line (e.g., workflow identifier and decorator 124 in FIGS. 1 and 5 ), and then flowchart 800 returns to operation 804 , where the workflow engine tool receives the slightly modified programming language script.
  • Some aspects and examples disclosed herein are directed to a workflow engine tool comprising: a processor; a computer-readable medium storing instructions that are operative when executed by the processor to: extract module definitions from a programming language script; extract execution flow information from the programming language script; generate, for a first workflow engine, modules from the extracted module definitions; generate, for the first workflow engine, edges for connecting the modules generated for the first workflow engine; and connect the modules generated for the first workflow engine with the edges generated for the first workflow engine, based at least on the extracted execution flow information, to generate a first workflow pipeline for the first workflow engine.
  • Additional aspects and examples disclosed herein are directed to a process for workflow engine management comprising: extracting module definitions from a programming language script; extracting execution flow information from the programming language script; generating, for a first workflow engine, modules from the extracted module definitions; generating, for the first workflow engine, edges for connecting the modules generated for the first workflow engine; and connecting the modules generated for the first workflow engine with the edges generated for the first workflow engine, based at least on the extracted execution flow information, to generate a first workflow pipeline for the first workflow engine.
  • Additional aspects and examples disclosed herein are directed to one or more computer storage devices having computer-executable instructions stored thereon for workflow engine management, which, on execution by a computer, cause the computer to perform operations comprising: extracting module definitions from a programming language script; extracting execution flow information from the programming language script; generating, for a first workflow engine, modules from the extracted module definitions; generating, for the first workflow engine, edges for connecting the modules generated for the first workflow engine; connecting the modules generated for the first workflow engine with the edges generated for the first workflow engine, based at least on the extracted execution flow information, to generate a first workflow pipeline for the first workflow engine; generating, for a second workflow engine, modules from the extracted module definitions; generating, for the second workflow engine, edges for connecting the modules generated for the second workflow engine; and connecting the modules generated for the second workflow engine with the edges generated for the second workflow engine, based at least on the extracted execution flow information, to generate a second workflow pipeline for the second workflow engine.
  • examples include any combination of the following:
  • FIG. 9 is a block diagram of example computing device 900 for implementing aspects disclosed herein, and is designated generally as computing device 900 .
  • Computing device 900 is but one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the examples disclosed herein. Neither should computing device 900 be interpreted as having any dependency or requirement relating to any one or combination of components/modules illustrated.
  • the examples disclosed herein may be described in the general context of computer code or machine-useable instructions, including computer-executable instructions such as program components, being executed by a computer or other machine, such as a personal data assistant or other handheld device.
  • program components including routines, programs, objects, components, data structures, and the like, refer to code that performs particular tasks, or implement particular abstract data types.
  • the disclosed examples may be practiced in a variety of system configurations, including personal computers, laptops, smart phones, mobile tablets, hand-held devices, consumer electronics, specialty computing devices, etc.
  • the disclosed examples may also be practiced in distributed computing environments when tasks are performed by remote-processing devices that are linked through a communications network.
  • Computing device 900 includes a bus 910 that directly or indirectly couples the following devices and components: computer-storage memory 912 , one or more processors 914 , one or more presentation components 916 , input/output (I/O) ports 918 , I/O components 920 , a power supply 922 , and a network component 924 .
  • Computing device 900 should not be interpreted as having any dependency or requirement related to any single component or combination of components illustrated therein. While computing device 900 is depicted as a seemingly single device, multiple computing devices 900 may work together and share the depicted device resources. For instance, memory 912 may be distributed across multiple devices, processor(s) 914 may be housed on different devices, and so on.
  • Bus 910 represents what may be one or more busses (such as an address bus, data bus, or a combination thereof). Although the various blocks of FIG. 9 are shown with lines for the sake of clarity, in reality, delineating various components is not so clear, and metaphorically, the lines would more accurately be grey and fuzzy. For example, one may consider a presentation component such as a display device to be an I/O component. Also, processors have memory. Such is the nature of the art, and it is reiterated that the diagram of FIG. 9 is merely illustrative of an exemplary computing device that can be used in connection with one or more disclosed examples. Distinction is not made between such categories as “workstation,” “server,” “laptop,” “hand-held device,” etc., as all are contemplated within the scope of FIG. 9.
  • Memory 912 may take the form of the computer-storage media references below and operatively provide storage of computer-readable instructions, data structures, program modules and other data for computing device 900 .
  • memory 912 may store an operating system, a universal application platform, or other program modules and program data.
  • Memory 912 may be used to store and access instructions configured to carry out the various operations disclosed herein.
  • memory 912 may include computer-storage media in the form of volatile and/or nonvolatile memory, removable or non-removable memory, data disks in virtual environments, or a combination thereof.
  • Memory 912 may include any quantity of memory associated with or accessible by computing device 900 .
  • Memory 912 may be internal to computing device 900 (as shown in FIG. 9 ), external to computing device 900 (not shown), or both (not shown).
  • Examples of memory 912 include, without limitation, random access memory (RAM); read only memory (ROM); electronically erasable programmable read only memory (EEPROM); flash memory or other memory technologies; CD-ROM, digital versatile disks (DVDs) or other optical or holographic media; magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices; memory wired into an analog computing device; or any other medium for encoding desired information and for access by computing device 900. Additionally, or alternatively, memory 912 may be distributed across multiple computing devices 900, e.g., in a virtualized environment in which instruction processing is carried out on multiple computing devices 900.
  • “computer storage media,” “computer-storage memory,” “memory,” and “memory devices” are synonymous terms for memory 912 , and none of these terms include carrier waves or propagating signaling.
  • Processor(s) 914 may include any quantity of processing units that read data from various entities, such as memory 912 or I/O components 920 . Specifically, processor(s) 914 are programmed to execute computer-executable instructions for implementing aspects of the disclosure. The instructions may be performed by the processor, by multiple processors within computing device 900 , or by a processor external to computing device 900 . In some examples, processor(s) 914 are programmed to execute instructions such as those illustrated in the flowcharts discussed below and depicted in the accompanying drawings. Moreover, in some examples, processor(s) 914 represent an implementation of analog techniques to perform the operations described herein. For example, the operations may be performed by an analog client computing device 900 and/or a digital client computing device 900 .
  • Presentation component(s) 916 present data indications to a user or other device.
  • Exemplary presentation components include a display device, speaker, printing component, vibrating component, etc.
  • Ports 918 allow computing device 900 to be logically coupled to other devices including I/O components 920 , some of which may be built in.
  • Example I/O components 920 include, for example but without limitation, a microphone, joystick, game pad, satellite dish, scanner, printer, wireless device, etc.
  • Computing device 900 may operate in a networked environment via the network component 924 using logical connections to one or more remote computers.
  • the network component 924 includes a network interface card and/or computer-executable instructions (e.g., a driver) for operating the network interface card. Communication between the computing device 900 and other devices may occur using any protocol or mechanism over any wired or wireless connection.
  • network component 924 is operable to communicate data over public, private, or hybrid (public and private) networks using a transfer protocol, between devices wirelessly using short range communication technologies (e.g., near-field communication (NFC), Bluetooth™ branded communications, or the like), or a combination thereof.
  • network component 924 communicates over communication link 930 with network 932 with a cloud resource 934 .
  • Examples of communication link 930 include a wireless connection, a wired connection, and/or a dedicated link, and in some examples, at least a portion is routed through the internet.
  • cloud resource 934 performs at least some of the operations described herein for computing device 900 .
  • examples of the disclosure are capable of implementation with numerous other general-purpose or special-purpose computing system environments, configurations, or devices.
  • Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with aspects of the disclosure include, but are not limited to, smart phones, mobile tablets, mobile computing devices, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, gaming consoles, microprocessor-based systems, set top boxes, programmable consumer electronics, mobile telephones, mobile computing and/or communication devices in wearable or accessory form factors (e.g., watches, glasses, headsets, or earphones), network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, virtual reality (VR) devices, holographic devices, and the like.
  • Such systems or devices may accept input from the user in any way, including from input devices such as a keyboard or pointing device, via gesture input, proximity input (such as by hovering), and/or via voice input.
  • Examples of the disclosure may be described in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices in software, firmware, hardware, or a combination thereof.
  • the computer-executable instructions may be organized into one or more computer-executable components or modules.
  • program modules include, but are not limited to, routines, programs, objects, components, and data structures that perform particular tasks or implement particular abstract data types.
  • aspects of the disclosure may be implemented with any number and organization of such components or modules. For example, aspects of the disclosure are not limited to the specific computer-executable instructions or the specific components or modules illustrated in the figures and described herein. Other examples of the disclosure may include different computer-executable instructions or components having more or less functionality than illustrated and described herein.
  • aspects of the disclosure transform the general-purpose computer into a special-purpose computing device when configured to execute the instructions described herein.
  • Computer readable media comprise computer storage media and communication media.
  • Computer storage media include volatile and nonvolatile, removable and non-removable memory implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or the like.
  • Computer storage media are tangible and mutually exclusive to communication media.
  • Computer storage media are implemented in hardware and exclude carrier waves and propagated signals. Computer storage media for purposes of this disclosure are not signals per se.
  • Exemplary computer storage media include hard disks, flash drives, solid-state memory, phase change random-access memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disk read-only memory (CD-ROM), digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information for access by a computing device.
  • communication media typically embody computer readable instructions, data structures, program modules, or the like in a modulated data signal such as a carrier wave or other transport mechanism and include any information delivery media.

Abstract

A workflow engine tool is disclosed that enables scientists and engineers to programmatically author workflows (e.g., a directed acyclic graph, “DAG”) with nearly no overhead, using a simpler script that needs almost no modifications for portability among multiple different workflow engines. This permits users to focus on the business logic of the project, avoiding the distracting tedious overhead related to workflow management (such as uploading modules, drawing edges, setting parameters, and other tasks). The workflow engine tool provides an abstraction layer on top of workflow engines, introducing a binding function that converts a programming language function (e.g., a normal python function) into a workflow module definition. The workflow engine tool infers module instances and induces edge dependencies automatically by inferring from a programming language script to build a DAG.

Description

    BACKGROUND
  • In the era of big data and artificial intelligence (AI), intelligent use of data has become an important factor in the success of many businesses. Data often forms a foundation for advanced analytics, AI, and business operation efficiency. As more businesses become data-driven and data volume grows rapidly, there is an increasing need to manage and execute complicated data processing pipelines that extract data from various sources, transform it for consumption (e.g., extracting features and training AI models), and store it for subsequent uses. Workflow engines are often used to manage data workflow pipelines at scale.
  • Despite the benefits of workflow engines, full utilization of workflow engines remains burdensome, due to steep learning curves and the effort needed to author complicated workflow pipelines. A directed acyclic graph (DAG) defines a workflow pipeline for exploiting data to produce some desired results. Typically, users (e.g., engineers and scientists) interact with a graphical user interface (GUI) to manually compose DAGs, or must learn a special syntax to generate DAGs programmatically. Neither of these approaches is natural or intuitive, and both are prone to error. Generating, adapting, and reviewing the generation of DAGs can introduce significant overhead effort when DAGs become large and complex.
  • SUMMARY
  • The disclosed examples are described in detail below with reference to the accompanying drawing figures listed below. The following summary is provided to illustrate some examples disclosed herein. It is not meant, however, to limit all examples to any particular configuration or sequence of operations.
  • A workflow engine tool is disclosed that enables scientists and engineers to programmatically author workflows (e.g., a directed acyclic graph, “DAG”) with nearly no overhead, using a simple script that needs almost no modifications for portability among multiple different workflow engines. This permits users to focus on the business logic of the project, avoiding the distracting tedious overhead related to workflow management (such as uploading modules, drawing edges, setting parameters, and other tasks). The workflow engine tool provides an abstraction layer on top of workflow engines, introducing a binding function that converts a programming language function (e.g., a normal python function) into a workflow module definition. The workflow engine tool infers module instances and induces edge dependencies automatically by inferring from a programming language script to build a DAG.
  • An example workflow engine tool comprises: a processor; a computer-readable medium storing instructions that are operative when executed by the processor to: extract module definitions from a programming language script; extract execution flow information from the programming language script; generate, for a first workflow engine, modules from the extracted module definitions; generate, for the first workflow engine, edges for connecting the modules generated for the first workflow engine; and connect the modules generated for the first workflow engine with the edges generated for the first workflow engine, based at least on the extracted execution flow information, to generate a first workflow pipeline for the first workflow engine.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The disclosed examples are described in detail below with reference to the accompanying drawing figures listed below:
  • FIG. 1 illustrates an arrangement that can advantageously use an exemplary workflow engine tool;
  • FIG. 2 shows an example workflow module authoring page;
  • FIG. 3 shows a portion of an example script used by the workflow engine tool of FIG. 1;
  • FIG. 4 shows another portion of the example script of FIG. 3;
  • FIG. 5 illustrates intermediate results of the workflow engine tool of FIG. 1 operating on the script shown in FIGS. 3 and 4;
  • FIG. 6 illustrates extraction of information in various data fields, by the workflow engine tool of FIG. 1;
  • FIG. 7 is a flowchart of operations associated with operating the workflow engine tool of FIG. 1;
  • FIG. 8 is another flowchart of operations associated with operating the workflow engine tool of FIG. 1; and
  • FIG. 9 is a block diagram of an example computing environment suitable for implementing some of the various examples disclosed herein.
  • Corresponding reference characters indicate corresponding parts throughout the drawings.
  • DETAILED DESCRIPTION
  • The various examples will be described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. References made throughout this disclosure relating to specific examples and implementations are provided solely for illustrative purposes but, unless indicated to the contrary, are not meant to limit all examples.
  • In the era of big data and artificial intelligence (AI), intelligent use of data has become an important factor in the success of many businesses. Data often forms a foundation for advanced analytics, AI, and business operation efficiency. As more businesses become data-driven and data volume grows rapidly, there is an increasing need to manage and execute complicated data processing pipelines that extract data from various sources, transform it for consumption (e.g., extracting features and training AI models), and store it for subsequent uses. Workflow engines are often used to manage data workflow pipelines at scale. A workflow consists of an orchestrated and repeatable pattern of business activity enabled by the systematic organization of resources into processes that transform materials, provide services, or process information, and can be depicted as a sequence of operations, the work of a person or group, the work of an organization of staff, or one or more simple or complex mechanisms. Flow may refer to a document, service, or product that is being transferred from one step to another.
  • Workflows may be viewed as one fundamental building block to be combined with other parts of an organization's structure such as information technology, teams, projects and hierarchies. The benefits of workflow management systems include:
      • managing dependencies between jobs;
      • orchestration across heterogeneous computing clusters;
      • transparency, reproducibility, reusability (sharing of modules and data), collaboration, rapid development;
      • failure handling, alerts, retrospection, and scheduling
      • parity between offline experiments and online production;
      • visualization, monitoring, managing jobs; historical views of jobs; and
      • versioning, differencing, and source control
  • Workflow in AI applications is typically a long, periodic batch process; AI engineering applications often contain numerous workflows, including data cleaning and processing, model training experiments, and metric dashboard generation. Typical types of workflow pipelines include:
      • data warehousing;
      • data infrastructure maintenance;
      • model training and experimentation;
      • online production; and
      • reporting and telemetry.
  • A directed acyclic graph (DAG) defines a workflow pipeline for exploiting data to produce some desired results. Typically, users (e.g., engineers and scientists) interact with a graphical user interface (GUI) to manually compose DAGs, or must learn a special syntax to generate DAGs programmatically. Neither of these approaches is natural or intuitive, and both are prone to error. Generating, adapting, and reviewing the generation of DAGs can introduce significant overhead effort when DAGs become large and complex. Manual creation of job modules requires specifying edges to connect modules into DAGs, which is tedious, labor-intensive, and error-prone. Programmatic creation of DAGs requires writing code, such as a script in a programming language such as C#, python, or another programming language. Although this saves the tedious work of drawing the graph by hand, it still requires users to write a considerable amount of extra code, upload modules, connect nodes, etc., which is overhead to the core logic of the work. Additionally, the extra code is specific to a workflow engine, which not only adds the learning cost but also prevents reuse of the time investment with other workflow engines. Despite the benefits of workflow engines, full utilization of workflow engines remains burdensome, due to steep learning curves and the effort needed to author complicated workflow pipelines.
  • Therefore, a workflow engine tool is provided that enables scientists and engineers to programmatically author workflows (e.g., DAGs) with nearly no overhead, using a simpler script that needs almost no modifications for portability among multiple different workflow engines. This permits users to focus on the business logic of the project, avoiding the distracting tedious overhead related to workflow management (such as uploading modules, drawing edges, setting parameters, and other tasks). The workflow engine tool provides an abstraction layer on top of workflow engines, introducing a binding function that converts a programming language function (e.g., a normal python function) into a workflow module definition. The workflow engine tool infers module instances and induces edge dependencies automatically by inferring from a programming language script to build a DAG.
  • The workflow engine tool induces workflow graphs from a programming script (e.g., a python script) such that module definition is inferred from a function definition (e.g., a function definition including doc strings), including inducing inputs, parameters, and outputs. Module instances (tasks and nodes) are automatically detected and created from function calls in the script, and some examples involve generating unique module IDs from upstream dependencies and a module's own parameters. Edges and execution dependencies are inferred based on variables generated from one function call and fed as inputs and parameters into other functions. The workflow engine tool thus creates an abstraction layer that serves as a bridge between the user's script and a target workflow engine. By implementing different versions of the abstraction layer, to adapt to various underlying workflow engines, the script can generate and run DAGs on multiple different workflow engines without burdensome changes. By providing a dummy abstraction layer, the same script can even run in the absence of a workflow engine. Additionally, the workflow engine tool automatically resolves task and data dependency, handles data persistence caching natively, and permits the same script to run locally and on a workflow cluster, for multiple different workflow engines. This eliminates the need to manually create DAGs or program in special APIs.
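  • As an illustration of this inference, the following minimal sketch (the Tracker and Ref names are invented here for illustration, not the tool's actual API) treats the return value of each tracked call as a symbolic reference; passing a reference into a later call induces a directed edge, while plain values are treated as parameters:

```python
# Hypothetical sketch: infer DAG edges by tracking which call produced each value.
class Node:
    def __init__(self, name, inputs, params):
        self.name, self.inputs, self.params = name, inputs, params

class Ref:
    """A symbolic output of a node; feeding it to another call creates an edge."""
    def __init__(self, node, port):
        self.node, self.port = node, port

class Tracker:
    def __init__(self):
        self.nodes, self.edges = [], []

    def call(self, name, *args):
        inputs = [a for a in args if isinstance(a, Ref)]      # upstream outputs -> inputs
        params = [a for a in args if not isinstance(a, Ref)]  # plain values -> parameters
        node = Node(name, inputs, params)
        self.nodes.append(node)
        for ref in inputs:
            self.edges.append((ref.node.name, name))          # directed edge
        return Ref(node, "out")

t = Tracker()
data = t.call("prepare_data")
model = t.call("train", data, 0.001)  # data is an input; 0.001 (gamma) is a parameter
report = t.call("test", model, data)
print(t.edges)  # [('prepare_data', 'train'), ('train', 'test'), ('prepare_data', 'test')]
```

In this sketch, no edge is ever drawn by hand: the dependency structure falls out of ordinary function-call syntax, which is the effect the patent attributes to the workflow engine tool.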
  • FIG. 1 illustrates an arrangement 100 that can advantageously use an exemplary workflow engine tool 102. Workflow engine tool 102 includes multiple adapters 104 a-104 d that correspond to multiple different workflow engines 106 a-106 d, permitting a common script 122 in programming environment 120 to run on any of multiple different workflow engines 106 a-106 d. It should be understood that, in some examples, a different number of workflow engines can be supported. In some examples, at least one of workflow engines 106 b-106 d is a dummy abstraction layer, permitting workflow engine tool 102 to run in the absence of a workflow engine. A DAG 110 is generated for workflow engine 106 a, and includes nodes 112 a-112 c, along with edges 114 a and 114 b that connect outputs 116 a and 116 b, of nodes 112 a and 112 b respectively, with inputs 118 c and 118 d of node 112 c. As illustrated, node 112 c also has an output 116 c.
  • A workflow DAG (e.g., DAG 110) consists of nodes (e.g., nodes 112 a-112 c) and directed edges (e.g., edges 114 a and 114 b). A node (e.g., a module) is a basic unit of task execution in a workflow. It can be a function, a script, or an executable program, corresponding to a single task defined by the user. Each node has zero or more inputs, zero or more parameters, and zero or more outputs. Inputs and outputs are the data passing from one node to another. Depending on the actual workflow engine design, the data exchanged can be in multiple different forms, such as memory objects, shared files, distributed files, and others. One node's output may become the input of another node, which at least partially dictates the execution dependency between nodes. For example, since node 112 c has input 118 c, which comes from output 116 a of node 112 a, then node 112 c cannot be executed until node 112 a has been successfully executed and produced output 116 a. The dependency between nodes can be denoted as edges in the workflow DAG. A module may have multiple inputs and multiple outputs. Often, each input and output is associated with a unique name as its identifier.
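  • One way to picture such a node is the following hedged sketch, in which the field names are assumptions rather than any workflow engine's actual schema:

```python
from dataclasses import dataclass, field

# Illustrative structure only; field names are assumptions, not the patent's schema.
@dataclass
class Module:
    name: str
    inputs: dict = field(default_factory=dict)      # input name -> (producer module, output name)
    parameters: dict = field(default_factory=dict)  # values specified before the run
    outputs: tuple = ()                             # named outputs, each a unique identifier

prepare = Module("prepare_data",
                 outputs=("train_x", "train_y", "test_x", "test_y"))
train = Module("train",
               inputs={"train_x": ("prepare_data", "train_x"),
                       "train_y": ("prepare_data", "train_y")},
               parameters={"gamma": 0.001},
               outputs=("classifier",))

# train cannot execute until every producer named by its inputs has run:
producers = {src for (src, _out) in train.inputs.values()}
print(producers)  # {'prepare_data'}
```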
  • A parameter is essential data that is needed for a particular execution run of a node, yet is not output by another node. For example, a node for training an AI model may have parameters such as “number of epochs” or “learning rate”. Both inputs and parameters are essential data that is needed for executing a node. The difference between inputs and parameters is that inputs are data that is generated dynamically by upstream nodes during the run, and therefore dictate execution flow dependency (edges) between nodes, whereas parameters are data that is not generated by upstream nodes, but is instead specified by users prior to the run. Therefore, parameters do not affect the dependency between nodes. Often parameters are simple values which can be easily specified. In some situations, the user has a choice of specifying data as an input or a parameter. For example, if data is complex, then even though it can be a parameter, a function may be written to generate it, thereby rendering the data into an input. Also, in some situations, inputs, outputs, and parameters may have enforced data types, such as integer, floating point number, string, date-time, or other pre-defined data types. In such situations, when an output is connected to an input, the data types should be consistent.
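  • The data-type consistency rule can be sketched as a connection-time check; the connect helper and type tables below are hypothetical, with types mirroring the doc strings of the FIG. 3 example:

```python
# Hypothetical helper enforcing that a connected output's declared type
# matches the input's declared type (npy arrays, a pkl model).
DECLARED_OUTPUTS = {("prepare_data", "train_x"): "npy", ("train", "classifier"): "pkl"}
DECLARED_INPUTS = {("train", "train_x"): "npy", ("test", "classifier"): "pkl"}

def connect(src, out_name, dst, in_name):
    out_t = DECLARED_OUTPUTS[(src, out_name)]
    in_t = DECLARED_INPUTS[(dst, in_name)]
    if out_t != in_t:
        raise TypeError(f"cannot connect {out_t} output to {in_t} input")
    return (src, out_name, dst, in_name)

edge = connect("prepare_data", "train_x", "train", "train_x")  # ok: npy -> npy
try:
    connect("train", "classifier", "train", "train_x")  # pkl -> npy: rejected
except TypeError as e:
    print(e)
```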
  • Edges are directed (directional, pointing from an output to an input), and therefore dictate at least some aspects of execution dependency between nodes (modules and tasks). The edges in a DAG must be defined such that a DAG is acyclic, in order to permit determination of execution order. Therefore, since node 112 c has an input 118 c that comes from output 116 a of node 112 a (through edge 114 a), node 112 a cannot have an input that depends (either directly, or through intervening modules) on output 116 c of node 112 c. In some examples, there could be multiple edges between two modules, because edges connect inputs to outputs, and the modules may each have multiple inputs or outputs. However, to prevent cyclical dependencies, all edges should have the same direction, in such situations. A single output may connect to multiple inputs, when data generated by one module is consumed by multiple other modules. However, an input of a module can come from only one source.
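  • Because the graph is acyclic, a valid execution order always exists and can be computed with a standard topological sort; the sketch below uses the node names of the FIG. 3 example and Python's graphlib:

```python
from graphlib import TopologicalSorter, CycleError  # Python 3.9+

# Edges point from producer to consumer, following the node names of FIG. 3.
edges = [("prepare_data", "train"), ("prepare_data", "test"), ("train", "test")]

predecessors = {}  # node -> set of nodes it depends on
for src, dst in edges:
    predecessors.setdefault(dst, set()).add(src)
    predecessors.setdefault(src, set())

order = list(TopologicalSorter(predecessors).static_order())
print(order)  # ['prepare_data', 'train', 'test']

# A back edge would make the graph cyclic, so no execution order exists:
predecessors["prepare_data"].add("test")
try:
    list(TopologicalSorter(predecessors).static_order())
except CycleError:
    print("cycle detected; not a valid DAG")
```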
  • An execution of a DAG pipeline represents a particular run of the DAG. Parameters must be pre-specified before the DAG can be executed, and the DAG may also require some data from external data sources, which is brought into the DAG by some loading modules. DAG runs are deterministic, meaning that, given the same input data and parameters, the final output will be consistent for multiple execution runs. In some circumstances, a workflow engine (e.g., workflow engine 106 a) can cache intermediate results generated by successfully executed modules when the input dependency and parameters are not changed, so that during the next run, the cached results can be used in lieu of executing the module again. This can reduce execution time for the DAG. For caching systems, changes to input are detected, so that the cached results are not improperly reused.
  • Programming environment 120 represents a generic programming environment, such as python or C#. An execution module 130 permits script 122, which is written in a high-level programming language, to execute. For line-by-line interpreted languages, for example, execution module 130 provides an executing virtual machine. In the illustrated example, programming environment 120 is a python environment. Script 122 is illustrated as including three parts: a workflow identifier and decorator 124, a function definition portion 126 (e.g., a function definition including doc strings), and a program logic portion 128. Python provides for a decorator, which is a function that takes another function and extends the behavior of the latter function without explicitly modifying it. In some examples, workflow identifier and decorator 124 is implemented as a “bind” function, which is a single line of code that converts a function into a module definition that can be recognized and used in a workflow engine. An example and further detail will be provided in reference to FIG. 5. Workflow identifier and decorator 124 enables workflow engine tool 102 to select a target one of adapters 104 a-104 d, so that workflow engine tool 102 can create different engine wrappers for different underlying workflow engines. This permits function definition portion 126 and program logic portion 128 to remain unchanged when any one of workflow engines 106 a-106 d is targeted. For clarity, components 142-152 will be described in relation to FIG. 5, after the descriptions of FIGS. 2 through 4.
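  • The single-line bind idea can be sketched as an ordinary Python decorator that records a module definition without altering the function's behavior; MODULE_DEFS and bind are invented names for illustration, not the tool's actual API:

```python
import inspect

# Hedged sketch of "bind": capture a module definition from a normal function.
MODULE_DEFS = {}

def bind(fn):
    sig = inspect.signature(fn)
    MODULE_DEFS[fn.__name__] = {
        "args": list(sig.parameters),     # candidate module inputs and parameters
        "doc": inspect.getdoc(fn) or "",  # doc string, later mined for types
    }
    return fn  # the function itself is untouched and still runs locally

@bind
def train(train_x, train_y, gamma):
    """:param gamma: SVM parameter gamma"""
    return "classifier"

print(MODULE_DEFS["train"]["args"])  # ['train_x', 'train_y', 'gamma']
```

Because the decorator returns the function unchanged, the same script remains an ordinary runnable program, which is consistent with the dummy-abstraction-layer behavior described earlier.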
  • FIG. 2 shows an example workflow module authoring page 200. Workflow module authoring page 200 is used for manually generating modules for DAGs and has multiple data fields for a user to complete. Workflow engine tool 102 is able to extract, infer, or otherwise generate information from script 122 that corresponds to multiple data fields in workflow module authoring page 200, thereby rendering workflow module authoring page 200 obsolete in various use case scenarios.
  • FIG. 3 shows a script portion 300 that provides function definitions for use with workflow engine tool 102, in which a user leverages workflow engine tool 102 to use normal programming language syntax to specify DAGs. Specifically, script portion 300 forms an example of function definition portion 126. The illustrated example of script portion 300 uses the following functions for an exemplary AI training and testing operation:
      • prepare_data( ), which loads a dataset, splits it into train and test portions, and returns training and testing data separately. This function takes no inputs or parameters, and generates four outputs.
      • train( ), which intakes training data (input and labels), trains a model, and returns the trained model. This function generates a trained model as an output and takes three arguments; the first two arguments are inputs and the third argument is a parameter.
      • test( ), which intakes the trained model and test data, predicts model outputs, and generates a classification report at the output.
  • The text of script portion 300 is:
    from sklearn import datasets, svm, metrics

    def prepare_data():
        """
        :return: <train_x, train_y, test_x, test_y>.
        :rtype: <npy, npy, npy, npy>.
        """
        digits = datasets.load_digits()
        n_samples = len(digits.images)
        data = digits.images.reshape((n_samples, -1))
        label = digits.target
        train_x, train_y = data[:n_samples // 2], label[:n_samples // 2]
        test_x, test_y = data[n_samples // 2:], label[n_samples // 2:]
        return train_x, train_y, test_x, test_y

    def train(train_x, train_y, gamma):
        """
        :param train_x: training data input features
        :type train_x: <npy>.
        :param train_y: training data labels
        :type train_y: <npy>.
        :param gamma: SVM parameter gamma
        :type gamma: float.
        :return: <classifier>.
        :rtype: <pkl>.
        """
        classifier = svm.SVC(gamma=gamma)
        classifier.fit(train_x, train_y)
        return classifier

    def test(classifier, test_x, test_y):
        """
        :param classifier: trained model
        :type classifier: <pkl>.
        :param test_x: testing data input
        :type test_x: <npy>.
        :param test_y: testing data label
        :type test_y: <npy>.
        :return: <msg>.
        :rtype: <str>.
        """
        expected = test_y
        predicted = classifier.predict(test_x)
        msg = "Classification report for classifier %s:\n%s\n\n" % (
            classifier, metrics.classification_report(expected, predicted))
        msg += "Confusion matrix:\n%s" % metrics.confusion_matrix(expected, predicted)
        return msg
  • The function parameters correspond to module inputs and module parameters, because they are the data needed for a function to run in a general programming environment and for a module to run in a workflow environment. Functions also have return values, which correspond to the module outputs. Workflow engine tool 102 frees users from having to manually specify the inputs, parameters, and outputs of modules, by extracting this data from script portion 300. For example, the function definitions include doc strings that specify: (1) which function parameters are module inputs, (2) which function parameters are the module parameters, and (3) what the function returns as the module output or outputs.
  • For example, prepare_data() uses
      • :return: <train_x, train_y, test_x, test_y>.
      • :rtype: <npy, npy, npy, npy>.
        to specify four outputs of type npy, named train_x, train_y, test_x, and test_y. The function train() uses a doc string to define inputs train_x and train_y as the same npy data type as the train_x and train_y outputs of prepare_data(), and also an output of type pkl, named classifier. This permits generating an edge connecting the output based on train_x of the module that corresponds to prepare_data() to the input based on train_x of the module that corresponds to train(). A similar edge is generated using the inputs and outputs based on train_y for the same modules. The function test() uses a doc string to define inputs test_x and test_y as the same npy data type as the test_x and test_y outputs of prepare_data(), and an input named classifier of the same pkl data type as the output of train() that uses the same name. This permits generating three edges: two going from the outputs of the module corresponding to prepare_data() to inputs of the module corresponding to test(), and one going from the output of the module corresponding to train() to a third input of the module corresponding to test().
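The name-and-type matching described above can be sketched as follows; the module dictionary layout and the function name are illustrative assumptions, not the tool's actual data structures:

```python
def generate_edges(modules):
    """Match each module input to a same-named, same-typed output of another
    module, yielding (source, output_name, destination, input_name) edges.

    modules: {module_name: {"inputs": {name: type}, "outputs": {name: type}}}
    """
    edges = []
    for dst, dmod in modules.items():
        for in_name, in_type in dmod["inputs"].items():
            for src, smod in modules.items():
                # An edge exists when another module outputs the same name
                # with the same declared data type.
                if src != dst and smod["outputs"].get(in_name) == in_type:
                    edges.append((src, in_name, dst, in_name))
    return edges
```

Applied to the three example functions, this yields the five edges described above: two into train() and three into test().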
  • The function test() must execute after both prepare_data() and train(), and train() must execute after prepare_data(). This means that the order of execution is prepare_data(), then train(), and then test(). Workflow engine tool 102 extracts this information in order to generate modules with the correct inputs, parameters, and outputs, and then to generate edges connecting the modules.
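Deriving this execution order from the module dependencies is a topological sort of the workflow graph; a minimal sketch using the Python standard library (the function name and edge representation are illustrative):

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

def execution_order(edges):
    """edges: iterable of (upstream_module, downstream_module) pairs.
    Returns the modules in an order that respects every dependency."""
    ts = TopologicalSorter()
    for src, dst in edges:
        ts.add(dst, src)  # dst depends on src having run first
    return list(ts.static_order())
```

For the three example functions this produces prepare_data, then train, then test, matching the order stated above.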
  • FIG. 4 shows a script portion 400 that may be combined with script portion 300 for use with workflow engine tool 102. Specifically, script portion 400 provides program logic that advantageously uses the function definitions provided in script portion 300, and forms an example of program logic portion 128. The illustrated example of script portion 400 uses the above-described functions for the exemplary AI training and testing operation:
      • train_x, train_y, test_x, test_y = prepare_data()
      • classifier = train(train_x, train_y, 0.001)
      • res = test(classifier, test_x, test_y).
        This example program (comprising script portions 300 and 400) loads data, separates it into training and testing portions, trains with the training data, tests with the testing data, and returns a report. At this point, with this example, the execution flow can be inferred (extracted).
  • FIG. 5 illustrates intermediate results of workflow engine tool 102 operating on a script 522. In this example, script 522 includes script portions 300 and 400 (of FIGS. 3 and 4, respectively), along with workflow identifier and decorator 124, which specifies adapter 104 a (for workflow engine 106 a) and invokes a Python decorator to convert the defined functions into workflow module definitions. One example of workflow identifier and decorator 124 is:
      • If_a = LightflowEngine()
      • prepare_data, train, test = If_a.bind_functions(prepare_data, train, test).
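The binding step above can be illustrated with a minimal stand-in engine; this RecordingEngine is a hypothetical sketch of the mechanism (not the actual LightflowEngine), showing how wrapping each function lets an adapter observe calls and turn them into workflow module definitions:

```python
import functools

class RecordingEngine:
    """Illustrative adapter sketch: binding functions wraps them so each call
    is recorded, the hook a real adapter would use to build a workflow graph."""

    def __init__(self):
        self.calls = []  # names of bound functions, in call order

    def bind_functions(self, *funcs):
        def bind(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                self.calls.append(func.__name__)  # record for graph building
                return func(*args, **kwargs)  # original logic still runs
            return wrapper
        return tuple(bind(f) for f in funcs)
```

Because the wrappers preserve the original signatures and return values, the program logic of script portion 400 runs unchanged after binding.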
  • A DAG 510 has a node 512 a that corresponds to the prepare_data() function described above, a node 512 b that corresponds to the train() function described above, and a node 512 c that corresponds to the test() function described above. Additionally, edges are illustrated between the nodes. For simplicity of illustration, edges 514 a and 514 b are shown as single edges, although there would be two of each, to match the four output values of node 512 a (train_x, train_y, test_x, and test_y), which are shown as a single output 516 a for clarity of illustration. Specifically, edge 514 a connects test_x and test_y from output 516 a to an input 518 a (there would actually be two) of node 512 c; edge 514 b connects train_x and train_y from output 516 a to an input 518 b (there would actually be two) of node 512 b; and edge 514 c connects classifier from output 516 b of node 512 b to an input 518 c of node 512 c. This is illustrated in FIG. 6, along with a parameter 618 for node 512 b and an output 616 from node 512 c.
  • Returning to FIG. 5, components 142-152, which perform the information extraction and the module and edge generation activities, are described. A module extraction component 142 uses the function definitions in script 522 (e.g., script portion 300) to extract information used by a module generation component 144 to generate modules (e.g., nodes 512 a-512 c). When the modules are created, they will each need a label, or module ID, in order to be addressed and handled by workflow engine 106 a. One way to generate module IDs, for assigning to nodes 512 a-512 c, is to use upstream dependencies. For example, node 512 a is upstream from both of nodes 512 b and 512 c, and node 512 b is upstream from node 512 c. A module ID can then be generated for node 512 c using information from nodes 512 a and 512 b, for example by combining their module IDs and passing them through a function that is unlikely to produce collisions for module IDs. A workflow interface component 150 sends the generated modules to workflow engine 106 a. An execution flow extraction component 146 uses the input and output information to extract execution flow information, and an edge generation component 148 generates edges (e.g., edges 514 a-514 c) that connect outputs (e.g., outputs 516 a and 516 b) with inputs (e.g., inputs 518 a-518 c) of the modules generated by module generation component 144. Workflow interface component 150 sends the generated edges to workflow engine 106 a. A workflow decorator component 152 binds the functions according to the specific workflow engine targeted (e.g., workflow engine 106 a), so that DAG 510 can be generated. Workflow interface component 150 executes the DAG 510 and the underlying system calls for workflow engine 106 a.
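One collision-resistant realization of such an ID function, sketched here with a cryptographic hash (the document does not name a specific function, so SHA-256 is an assumption), combines a module's own name with the IDs of its upstream modules:

```python
import hashlib

def module_id(name, upstream_ids=()):
    """Derive a stable module ID from the module name and the module IDs of
    its upstream dependencies; SHA-256 makes collisions vanishingly unlikely."""
    h = hashlib.sha256(name.encode())
    for uid in sorted(upstream_ids):  # sort so upstream ordering is irrelevant
        h.update(uid.encode())
    return h.hexdigest()[:12]
```

For nodes 512 a-512 c this yields three distinct IDs, with the ID of node 512 c depending on (and therefore changing with) the IDs of both of its upstream nodes.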
  • In some examples, a series of function calls can be grouped into a more complex structure, for example by creating sub-graphs and using the sub-graphs in a top-level workflow pipeline. An example abbreviated script, using the previously-identified functions is:
  • def sub_dag(train_x, train_y, test_x, test_y):
        classifier = train(train_x, train_y, 0.001)
        test(classifier, test_x, test_y)
        return classifier
    train_x, train_y, test_x, test_y = prepare_data()
    classifier = sub_dag(train_x, train_y, test_x, test_y)
  • Additionally, in some examples, DAGs can be created as needed, based on conditional branching logic, such as if-then branching and for-loops, subject to the information necessary to compile a DAG being available at the time the information is to be extracted from the script. The decorator functionality permits the use of workflow-specific functionality, identified in some parameters. For example, some workflow engines may support a mode specification, according to the following example modification of script portion 400 (of FIG. 4):
  • train_x, train_y, test_x, test_y = prepare_data(mode='cpu')
    classifier = train(train_x, train_y, 0.001, mode='gpu')
    res = test(classifier, test_x, test_y, mode='gpu')
  • When workflow engine tool 102 reads the additional parameters, it will recognize them as workflow-specific parameters, based on the original parameters of the module functions in the doc string. To enable script 522 to run anywhere, with or without a workflow engine that supports the additional parameters, a dummy wrapper binds the functions but eliminates the additional parameters, passing only the original parameters to the original function. In this way, by changing just one line, script 522 can run normally without any further changes.
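A minimal sketch of such a dummy wrapper, under the assumption that the original parameters can be read from the function's signature, simply discards any keyword arguments the function does not declare:

```python
import functools
import inspect

def strip_extra_kwargs(func):
    """Wrap func so that keyword arguments absent from its signature (e.g. a
    workflow-specific 'mode') are dropped before the original function runs."""
    allowed = set(inspect.signature(func).parameters)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        kept = {k: v for k, v in kwargs.items() if k in allowed}
        return func(*args, **kept)
    return wrapper
```

With this wrapper bound, the mode='gpu' arguments in the modified script are silently ignored when no supporting workflow engine is present, so the same script runs unchanged in a plain Python environment.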
  • FIG. 7 is a flowchart 700 of operations associated with operating workflow engine tool 102. As described, workflow engine tool 102 drastically simplifies workflow generation for users. Rather than manually creating modules (e.g., using a workflow module authoring page 200 for each module) and then manually connecting the modules, a user merely needs to perform operations 702 through 710, as illustrated. In operation 702, the user develops task functions, including doc string definitions, producing an equivalent of script portion 300 (of FIG. 3). In operation 704, the user writes the program logic in a generic programming language, producing an equivalent of script portion 400 (of FIG. 4). In operation 706, the user identifies the target workflow engine, and binds the functions, using workflow identifier and decorator 124, in operation 708. This permits the user to generate a workflow merely by running the script, in operation 710.
  • FIG. 8 is a flowchart 800 of operations associated with operating workflow engine tool 102. In some examples, the following operations are implemented as computer-executable instructions stored on and executed by computing device 900. Operation 802 includes writing a programming language script in a programming environment. In some examples, the programming language includes a python programming language and the programming language script is a python script. In some examples, C# is used. Operation 804 includes a workflow engine tool receiving the programming language script. Operation 806 includes extracting module definitions from the programming language script, and operation 808 includes extracting execution flow information from the programming language script. Operation 810 then includes generating, for the workflow engine, modules from the extracted module definitions. In some examples, this includes using a binding function. Operation 812 includes generating module IDs, for modules generated for the workflow engine, based at least on upstream dependencies.
  • Operation 814 includes generating, for the workflow engine, edges for connecting the modules generated for the workflow engine, and operation 816 includes connecting the modules generated for the workflow engine with the edges generated for the workflow engine, based at least on the extracted execution flow information, to generate a workflow pipeline for the workflow engine. In some examples, the workflow pipeline comprises a DAG. Operation 818 includes recognizing a workflow-specific parameter. If the targeted workflow engine is supported, as determined by decision operation 820, then operation 822 includes passing the workflow-specific parameter to the workflow engine. Otherwise, operation 824 eliminates the workflow-specific parameter, and does not pass it. Operation 826 then includes running the workflow engine to execute the workflow pipeline.
  • If, in decision operation 828, it is determined that another, different workflow engine is to be used with the programming language script, then operation 830 includes slightly modifying the programming language script for the second workflow engine. In some examples, this is as simple as modifying a single line (e.g., workflow identifier and decorator 124 in FIGS. 1 and 5). Flowchart 800 then returns to operation 804, where the workflow engine tool receives the slightly modified programming language script.
  • Additional Examples
  • Some aspects and examples disclosed herein are directed to a workflow engine tool comprising: a processor; a computer-readable medium storing instructions that are operative when executed by the processor to: extract module definitions from a programming language script; extract execution flow information from the programming language script; generate, for a first workflow engine, modules from the extracted module definitions; generate, for the first workflow engine, edges for connecting the modules generated for the first workflow engine; and connect the modules generated for the first workflow engine with the edges generated for the first workflow engine, based at least on the extracted execution flow information, to generate a first workflow pipeline for the first workflow engine.
  • Additional aspects and examples disclosed herein are directed to a process for workflow engine management comprising: extracting module definitions from a programming language script; extracting execution flow information from the programming language script; generating, for a first workflow engine, modules from the extracted module definitions; generating, for the first workflow engine, edges for connecting the modules generated for the first workflow engine; and connecting the modules generated for the first workflow engine with the edges generated for the first workflow engine, based at least on the extracted execution flow information, to generate a first workflow pipeline for the first workflow engine.
  • Additional aspects and examples disclosed herein are directed to one or more computer storage devices having computer-executable instructions stored thereon for workflow engine management, which, on execution by a computer, cause the computer to perform operations comprising: extracting module definitions from a programming language script; extracting execution flow information from the programming language script; generating, for a first workflow engine, modules from the extracted module definitions; generating, for the first workflow engine, edges for connecting the modules generated for the first workflow engine; connecting the modules generated for the first workflow engine with the edges generated for the first workflow engine, based at least on the extracted execution flow information, to generate a first workflow pipeline for the first workflow engine; generating, for a second workflow engine, modules from the extracted module definitions; generating, for the second workflow engine, edges for connecting the modules generated for the second workflow engine; and connecting the modules generated for the second workflow engine with the edges generated for the second workflow engine, based at least on the extracted execution flow information, to generate a second workflow pipeline for the second workflow engine.
  • Alternatively, or in addition to the other examples described herein, examples include any combination of the following:
      • the programming language includes a python programming language;
      • the first workflow pipeline comprises a DAG;
      • running the first workflow engine to execute the first workflow pipeline;
      • generating, for a second workflow engine, modules from the extracted module definitions; generating, for the second workflow engine, edges for connecting the modules generated for the second workflow engine; and connecting the modules generated for the second workflow engine with the edges generated for the second workflow engine, based at least on the extracted execution flow information, to generate a second workflow pipeline for the second workflow engine;
      • the second workflow pipeline comprises a DAG;
      • running the second workflow engine to execute the second workflow pipeline;
      • recognizing a workflow-specific parameter; and based at least on whether the first workflow engine supports the workflow-specific parameter, passing the workflow-specific parameter to the first workflow engine;
      • generating module IDs, for modules generated for the first workflow engine, based at least on upstream dependencies; and
      • generating module IDs, for modules generated for the second workflow engine, based at least on upstream dependencies.
  • While the aspects of the disclosure have been described in terms of various examples with their associated operations, a person skilled in the art would appreciate that a combination of operations from any number of different examples is also within scope of the aspects of the disclosure.
  • Example Operating Environment
  • FIG. 9 is a block diagram of example computing device 900 for implementing aspects disclosed herein, and is designated generally as computing device 900. Computing device 900 is but one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the examples disclosed herein. Neither should computing device 900 be interpreted as having any dependency or requirement relating to any one or combination of components/modules illustrated. The examples disclosed herein may be described in the general context of computer code or machine-useable instructions, including computer-executable instructions such as program components, being executed by a computer or other machine, such as a personal data assistant or other handheld device. Generally, program components including routines, programs, objects, components, data structures, and the like, refer to code that performs particular tasks, or implement particular abstract data types. The disclosed examples may be practiced in a variety of system configurations, including personal computers, laptops, smart phones, mobile tablets, hand-held devices, consumer electronics, specialty computing devices, etc. The disclosed examples may also be practiced in distributed computing environments when tasks are performed by remote-processing devices that are linked through a communications network.
  • Computing device 900 includes a bus 910 that directly or indirectly couples the following devices and components: computer-storage memory 912, one or more processors 914, one or more presentation components 916, input/output (I/O) ports 918, I/O components 920, a power supply 922, and a network component 924. Computing device 900 should not be interpreted as having any dependency or requirement related to any single component or combination of components illustrated therein. While computing device 900 is depicted as a seemingly single device, multiple computing devices 900 may work together and share the depicted device resources. For instance, memory 912 may be distributed across multiple devices, processor(s) 914 may be housed on different devices, and so on.
  • Bus 910 represents what may be one or more busses (such as an address bus, data bus, or a combination thereof). Although the various blocks of FIG. 9 are shown with lines for the sake of clarity, in reality, delineating various components is not so clear, and metaphorically, the lines would more accurately be grey and fuzzy. For example, one may consider a presentation component such as a display device to be an I/O component. Also, processors have memory. Such is the nature of the art, and it is reiterated that the diagram of FIG. 9 is merely illustrative of an exemplary computing device that can be used in connection with one or more disclosed examples. Distinction is not made between such categories as "workstation," "server," "laptop," "hand-held device," etc., as all are contemplated within the scope of FIG. 9 and the references herein to a "computing device." Memory 912 may take the form of the computer-storage media referenced below and operatively provide storage of computer-readable instructions, data structures, program modules and other data for computing device 900. For example, memory 912 may store an operating system, a universal application platform, or other program modules and program data. Memory 912 may be used to store and access instructions configured to carry out the various operations disclosed herein.
  • As mentioned below, memory 912 may include computer-storage media in the form of volatile and/or nonvolatile memory, removable or non-removable memory, data disks in virtual environments, or a combination thereof. Memory 912 may include any quantity of memory associated with or accessible by computing device 900. Memory 912 may be internal to computing device 900 (as shown in FIG. 9), external to computing device 900 (not shown), or both (not shown). Examples of memory 912 include, without limitation, random access memory (RAM); read only memory (ROM); electronically erasable programmable read only memory (EEPROM); flash memory or other memory technologies; CD-ROM, digital versatile disks (DVDs) or other optical or holographic media; magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices; memory wired into an analog computing device; or any other medium for encoding desired information and for access by computing device 900. Additionally, or alternatively, memory 912 may be distributed across multiple computing devices 900, e.g., in a virtualized environment in which instruction processing is carried out on multiple computing devices 900. For the purposes of this disclosure, "computer storage media," "computer-storage memory," "memory," and "memory devices" are synonymous terms for memory 912, and none of these terms include carrier waves or propagating signaling.
  • Processor(s) 914 may include any quantity of processing units that read data from various entities, such as memory 912 or I/O components 920. Specifically, processor(s) 914 are programmed to execute computer-executable instructions for implementing aspects of the disclosure. The instructions may be performed by the processor, by multiple processors within computing device 900, or by a processor external to computing device 900. In some examples, processor(s) 914 are programmed to execute instructions such as those illustrated in the flowcharts discussed below and depicted in the accompanying drawings. Moreover, in some examples, processor(s) 914 represent an implementation of analog techniques to perform the operations described herein. For example, the operations may be performed by an analog client computing device 900 and/or a digital client computing device 900. Presentation component(s) 916 present data indications to a user or other device. Exemplary presentation components include a display device, speaker, printing component, vibrating component, etc. One skilled in the art will understand and appreciate that computer data may be presented in a number of ways, such as visually in a graphical user interface (GUI), audibly through speakers, wirelessly between computing devices 900, across a wired connection, or in other ways. Ports 918 allow computing device 900 to be logically coupled to other devices including I/O components 920, some of which may be built in. Example I/O components 920 include, for example but without limitation, a microphone, joystick, game pad, satellite dish, scanner, printer, wireless device, etc.
  • Computing device 900 may operate in a networked environment via the network component 924 using logical connections to one or more remote computers. In some examples, the network component 924 includes a network interface card and/or computer-executable instructions (e.g., a driver) for operating the network interface card. Communication between the computing device 900 and other devices may occur using any protocol or mechanism over any wired or wireless connection. In some examples, network component 924 is operable to communicate data over public, private, or hybrid (public and private) networks using a transfer protocol, between devices wirelessly using short range communication technologies (e.g., near-field communication (NFC), Bluetooth™ branded communications, or the like), or a combination thereof. For example, network component 924 communicates over communication link 930 with network 932 with a cloud resource 934. Various examples of communication link 930 include a wireless connection, a wired connection, and/or a dedicated link, and in some examples, at least a portion is routed through the internet. In some examples, cloud resource 934 performs at least some of the operations described herein for computing device 900.
  • Although described in connection with an example computing device 900, examples of the disclosure are capable of implementation with numerous other general-purpose or special-purpose computing system environments, configurations, or devices. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with aspects of the disclosure include, but are not limited to, smart phones, mobile tablets, mobile computing devices, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, gaming consoles, microprocessor-based systems, set top boxes, programmable consumer electronics, mobile telephones, mobile computing and/or communication devices in wearable or accessory form factors (e.g., watches, glasses, headsets, or earphones), network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, virtual reality (VR) devices, holographic devices, and the like. Such systems or devices may accept input from the user in any way, including from input devices such as a keyboard or pointing device, via gesture input, proximity input (such as by hovering), and/or via voice input.
  • Examples of the disclosure may be described in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices in software, firmware, hardware, or a combination thereof. The computer-executable instructions may be organized into one or more computer-executable components or modules. Generally, program modules include, but are not limited to, routines, programs, objects, components, and data structures that perform particular tasks or implement particular abstract data types. Aspects of the disclosure may be implemented with any number and organization of such components or modules. For example, aspects of the disclosure are not limited to the specific computer-executable instructions or the specific components or modules illustrated in the figures and described herein. Other examples of the disclosure may include different computer-executable instructions or components having more or less functionality than illustrated and described herein. In examples involving a general-purpose computer, aspects of the disclosure transform the general-purpose computer into a special-purpose computing device when configured to execute the instructions described herein.
  • By way of example and not limitation, computer readable media comprise computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable and non-removable memory implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or the like. Computer storage media are tangible and mutually exclusive to communication media. Computer storage media are implemented in hardware and exclude carrier waves and propagated signals. Computer storage media for purposes of this disclosure are not signals per se. Exemplary computer storage media include hard disks, flash drives, solid-state memory, phase change random-access memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disk read-only memory (CD-ROM), digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information for access by a computing device. In contrast, communication media typically embody computer readable instructions, data structures, program modules, or the like in a modulated data signal such as a carrier wave or other transport mechanism and include any information delivery media.
  • The order of execution or performance of the operations in examples of the disclosure illustrated and described herein is not essential, and may be performed in different sequential manners in various examples. For example, it is contemplated that executing or performing a particular operation before, contemporaneously with, or after another operation is within the scope of aspects of the disclosure. When introducing elements of aspects of the disclosure or the examples thereof, the articles “a,” “an,” “the,” and “said” are intended to mean that there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. The term “exemplary” is intended to mean “an example of.” The phrase “one or more of the following: A, B, and C” means “at least one of A and/or at least one of B and/or at least one of C.”
  • Having described aspects of the disclosure in detail, it will be apparent that modifications and variations are possible without departing from the scope of aspects of the disclosure as defined in the appended claims. As various changes could be made in the above constructions, products, and methods without departing from the scope of aspects of the disclosure, it is intended that all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.

Claims (20)

What is claimed is:
1. A workflow engine tool comprising:
a processor; and
a computer-readable medium storing instructions that are operative when executed by the processor to:
extract module definitions from a programming language script;
extract execution flow information from the programming language script;
generate, for a first workflow engine, modules from the extracted module definitions;
generate, for the first workflow engine, edges for connecting the modules generated for the first workflow engine; and
connect the modules generated for the first workflow engine with the edges generated for the first workflow engine, based at least on the extracted execution flow information, to generate a first workflow pipeline for the first workflow engine.
2. The tool of claim 1 wherein the programming language includes a python programming language.
3. The tool of claim 1 wherein the first workflow pipeline comprises a directed acyclic graph (DAG).
4. The tool of claim 1 wherein the instructions are further operative to:
run the first workflow engine to execute the first workflow pipeline.
5. The tool of claim 1 wherein the instructions are further operative to:
generate, for a second workflow engine, modules from the extracted module definitions;
generate, for the second workflow engine, edges for connecting the modules generated for the second workflow engine; and
connect the modules generated for the second workflow engine with the edges generated for the second workflow engine, based at least on the extracted execution flow information, to generate a second workflow pipeline for the second workflow engine.
6. The tool of claim 1 wherein the instructions are further operative to:
recognize a workflow-specific parameter; and
based at least on whether the first workflow engine supports the workflow-specific parameter, pass the workflow-specific parameter to the first workflow engine.
7. The tool of claim 1 wherein the instructions are further operative to:
generate module IDs, for modules generated for the first workflow engine, based at least on upstream dependencies.
8. A method of workflow engine management, the method comprising:
extracting module definitions from a programming language script;
extracting execution flow information from the programming language script;
generating, for a first workflow engine, modules from the extracted module definitions;
generating, for the first workflow engine, edges for connecting the modules generated for the first workflow engine; and
connecting the modules generated for the first workflow engine with the edges generated for the first workflow engine, based at least on the extracted execution flow information, to generate a first workflow pipeline for the first workflow engine.
9. The method of claim 8 wherein the programming language includes a python programming language.
10. The method of claim 8 wherein the first workflow pipeline comprises a directed acyclic graph (DAG).
11. The method of claim 8 further comprising:
running the first workflow engine to execute the first workflow pipeline.
12. The method of claim 8 further comprising:
generating, for a second workflow engine, modules from the extracted module definitions;
generating, for the second workflow engine, edges for connecting the modules generated for the second workflow engine; and
connecting the modules generated for the second workflow engine with the edges generated for the second workflow engine, based at least on the extracted execution flow information, to generate a second workflow pipeline for the second workflow engine.
13. The method of claim 8 further comprising:
recognizing a workflow-specific parameter; and
based at least on whether the first workflow engine supports the workflow-specific parameter, passing the workflow-specific parameter to the first workflow engine.
14. The method of claim 8 further comprising:
generating module IDs, for modules generated for the first workflow engine, based at least on upstream dependencies.
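Claim 14 generates module IDs based at least on upstream dependencies but does not say how. One common approach, sketched here with hypothetical names, is to hash each module's name together with the IDs of its upstream modules, so an upstream change propagates into every downstream ID:

```python
import hashlib

def module_id(name, upstream_ids):
    """Derive a stable ID from the module name plus the (sorted) IDs of
    its upstream dependencies; any upstream change changes this ID."""
    h = hashlib.sha256()
    h.update(name.encode())
    for dep in sorted(upstream_ids):
        h.update(dep.encode())
    return h.hexdigest()[:12]

# Hypothetical dependency map: module -> its upstream modules.
deps = {"load": [], "clean": ["load"], "train": ["clean"]}

ids = {}
for name in ("load", "clean", "train"):  # topological order
    ids[name] = module_id(name, [ids[d] for d in deps[name]])

# Identical definitions with identical upstream chains always yield the
# same ID, which supports caching and reuse of module results.
print(ids["train"])
```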
15. One or more computer storage devices having computer-executable instructions stored thereon for workflow engine management, which, on execution by a computer, cause the computer to perform operations comprising:
extracting module definitions from a programming language script;
extracting execution flow information from the programming language script;
generating, for a first workflow engine, modules from the extracted module definitions;
generating, for the first workflow engine, edges for connecting the modules generated for the first workflow engine;
connecting the modules generated for the first workflow engine with the edges generated for the first workflow engine, based at least on the extracted execution flow information, to generate a first workflow pipeline for the first workflow engine;
generating, for a second workflow engine, modules from the extracted module definitions;
generating, for the second workflow engine, edges for connecting the modules generated for the second workflow engine; and
connecting the modules generated for the second workflow engine with the edges generated for the second workflow engine, based at least on the extracted execution flow information, to generate a second workflow pipeline for the second workflow engine.
16. The one or more computer storage devices of claim 15 wherein the programming language includes a python programming language.
17. The one or more computer storage devices of claim 15 wherein the first and second workflow pipelines each comprises a directed acyclic graph (DAG).

18. The one or more computer storage devices of claim 15 wherein the operations further comprise:
running the first workflow engine to execute the first workflow pipeline; and
running the second workflow engine to execute the second workflow pipeline.
19. The one or more computer storage devices of claim 15 wherein the operations further comprise:
recognizing a workflow-specific parameter; and
based at least on whether the first workflow engine supports the workflow-specific parameter, passing the workflow-specific parameter to the first workflow engine.
20. The one or more computer storage devices of claim 15 wherein the operations further comprise:
generating module IDs, for modules generated for the first and second workflow engines, based at least on upstream dependencies.
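Claims 15 through 20 generate pipelines for two different workflow engines from the same extracted module definitions and execution flow. A minimal sketch of that idea, assuming two hypothetical engine-specific output formats (neither is named in the patent), is a single engine-neutral description emitted in two shapes:

```python
# One engine-neutral pipeline description, derived from the script.
modules = ["load", "clean", "train"]
edges = [("load", "clean"), ("clean", "train")]

def to_engine_a(modules, edges):
    # Hypothetical engine A consumes an adjacency list keyed by module.
    graph = {m: [] for m in modules}
    for src, dst in edges:
        graph[src].append(dst)
    return graph

def to_engine_b(modules, edges):
    # Hypothetical engine B consumes ordered steps with explicit dependencies.
    upstream = {m: [s for s, d in edges if d == m] for m in modules}
    return [{"step": m, "depends_on": upstream[m]} for m in modules]

print(to_engine_a(modules, edges))
print(to_engine_b(modules, edges))
```

Because both emitters read the same extracted definitions and flow information, the script author writes the workflow once and targets either engine without rework.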
US16/285,180 2019-02-25 2019-02-25 Workflow engine tool Active 2039-02-26 US10768908B1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US16/285,180 US10768908B1 (en) 2019-02-25 2019-02-25 Workflow engine tool
CN202080016434.4A CN113826070A (en) 2019-02-25 2020-01-23 Workflow engine tool
EP20705877.7A EP3931684B1 (en) 2019-02-25 2020-01-23 Workflow engine tool
PCT/US2020/014688 WO2020176177A1 (en) 2019-02-25 2020-01-23 Workflow engine tool
US16/945,321 US11327726B2 (en) 2019-02-25 2020-07-31 Workflow engine tool

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US16/285,180 US10768908B1 (en) 2019-02-25 2019-02-25 Workflow engine tool

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/945,321 Continuation US11327726B2 (en) 2019-02-25 2020-07-31 Workflow engine tool

Publications (2)

Publication Number Publication Date
US20200272433A1 true US20200272433A1 (en) 2020-08-27
US10768908B1 US10768908B1 (en) 2020-09-08

Family

ID=69593816

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/285,180 Active 2039-02-26 US10768908B1 (en) 2019-02-25 2019-02-25 Workflow engine tool
US16/945,321 Active US11327726B2 (en) 2019-02-25 2020-07-31 Workflow engine tool

Family Applications After (1)

Application Number Title Priority Date Filing Date
US16/945,321 Active US11327726B2 (en) 2019-02-25 2020-07-31 Workflow engine tool

Country Status (4)

Country Link
US (2) US10768908B1 (en)
EP (1) EP3931684B1 (en)
CN (1) CN113826070A (en)
WO (1) WO2020176177A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113709229A (en) * 2021-08-24 2021-11-26 德清阿尔法创新研究院 Data-driven intelligent Internet of things platform workflow implementation system and method
US20220207438A1 (en) * 2020-12-30 2022-06-30 International Business Machines Corporation Automatic creation and execution of a test harness for workflows
WO2022115706A3 (en) * 2020-11-30 2022-07-21 Amazon Technologies, Inc. Data preparation for use with machine learning

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11307852B1 (en) * 2021-10-29 2022-04-19 Snowflake Inc. Automated generation of dependency graph based on input and output requirements of information
US20230419215A1 (en) * 2022-06-23 2023-12-28 Microsoft Technology Licensing, Llc Dynamic next operation determination for workflows
CN115185502B (en) * 2022-09-14 2022-11-15 中国人民解放军国防科技大学 Rule-based data processing workflow definition method, device, terminal and medium
CN117234480B (en) * 2023-11-13 2024-01-23 中国医学科学院医学信息研究所 Ontology-based multi-programming language component specification and workflow system and use method

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8060391B2 (en) * 2006-04-07 2011-11-15 The University Of Utah Research Foundation Analogy based workflow identification
US7613848B2 (en) * 2006-06-13 2009-11-03 International Business Machines Corporation Dynamic stabilization for a stream processing system
WO2010045143A2 (en) * 2008-10-16 2010-04-22 The University Of Utah Research Foundation Automated development of data processing results
US8656346B2 (en) 2009-02-18 2014-02-18 Microsoft Corporation Converting command units into workflow activities
US20110145518A1 (en) * 2009-12-10 2011-06-16 Sap Ag Systems and methods for using pre-computed parameters to execute processes represented by workflow models
US20120078678A1 (en) * 2010-09-23 2012-03-29 Infosys Technologies Limited Method and system for estimation and analysis of operational parameters in workflow processes
US9916133B2 (en) * 2013-03-14 2018-03-13 Microsoft Technology Licensing, Llc Software release workflow management
US9430452B2 (en) * 2013-06-06 2016-08-30 Microsoft Technology Licensing, Llc Memory model for a layout engine and scripting engine
CN105556504A (en) * 2013-06-24 2016-05-04 惠普发展公司,有限责任合伙企业 Generating a logical representation from a physical flow
US9740505B2 (en) * 2014-07-15 2017-08-22 The Mathworks, Inc. Accurate static dependency analysis via execution-context type prediction
US9952899B2 (en) 2014-10-09 2018-04-24 Google Llc Automatically generating execution sequences for workflows
WO2016113663A1 (en) * 2015-01-18 2016-07-21 Checkmarx Ltd. Rasp for scripting languages
US9690555B2 (en) * 2015-06-29 2017-06-27 International Business Machines Corporation Optimization of application workflow in mobile embedded devices
US10007513B2 (en) * 2015-08-27 2018-06-26 FogHorn Systems, Inc. Edge intelligence platform, and internet of things sensor streams system
US9699205B2 (en) * 2015-08-31 2017-07-04 Splunk Inc. Network security system
US20170091673A1 (en) * 2015-09-29 2017-03-30 Skytree, Inc. Exporting a Transformation Chain Including Endpoint of Model for Prediction
US10423393B2 (en) * 2016-04-28 2019-09-24 Microsoft Technology Licensing, Llc Intelligent flow designer
US10459979B2 (en) * 2016-06-30 2019-10-29 Facebook, Inc. Graphically managing data classification workflows in a social networking system with directed graphs
US10672156B2 (en) * 2016-08-19 2020-06-02 Seven Bridges Genomics Inc. Systems and methods for processing computational workflows
US10387126B2 (en) * 2017-06-30 2019-08-20 Microsoft Technology Licensing, Llc Data marshalling optimization via intermediate representation of workflows
US11151465B2 (en) * 2017-12-22 2021-10-19 International Business Machines Corporation Analytics framework for selection and execution of analytics in a distributed environment
US11334806B2 (en) * 2017-12-22 2022-05-17 International Business Machines Corporation Registration, composition, and execution of analytics in a distributed environment
US10621013B2 (en) * 2018-06-29 2020-04-14 Optum, Inc. Automated systems and methods for generating executable workflows

Also Published As

Publication number Publication date
US20210224047A1 (en) 2021-07-22
EP3931684A1 (en) 2022-01-05
US11327726B2 (en) 2022-05-10
EP3931684B1 (en) 2023-08-09
CN113826070A (en) 2021-12-21
US10768908B1 (en) 2020-09-08
WO2020176177A1 (en) 2020-09-03

Similar Documents

Publication Publication Date Title
US11327726B2 (en) Workflow engine tool
US11113475B2 (en) Chatbot generator platform
US20200090052A1 (en) Decision tables and enterprise rules for object linking within an application platform as a service environment
US20160012350A1 (en) Interoperable machine learning platform
US11288055B2 (en) Model-based differencing to selectively generate and deploy images in a target computing environment
US9448851B2 (en) Smarter big data processing using collaborative map reduce frameworks
CN110427182A Template-based APP construction method and device
WO2023116067A1 (en) Power service decomposition method and system for 5g cloud-edge-end collaboration
US11853749B2 (en) Managing container images in groups
CN111406256A (en) Coordination engine blueprint aspects of hybrid cloud composition
US11409564B2 (en) Resource allocation for tuning hyperparameters of large-scale deep learning workloads
CN112486807A (en) Pressure testing method and device, electronic equipment and readable storage medium
US11797272B2 (en) Systems and methods utilizing machine learning driven rules engine for dynamic data-driven enterprise application
CN114265595B (en) Cloud native application development and deployment system and method based on intelligent contracts
CN111406383A (en) Coordination engine blueprint aspects of hybrid cloud composition
Raj et al. Building Microservices with Docker Compose
Ramisetty et al. Ontology integration for advanced manufacturing collaboration in cloud platforms
Stanev et al. Why the standard methods, 5GL, common platforms and reusable components are the four pillars of the new computational paradigm Programming without programmers
US20230385039A1 (en) Code generation tool for cloud-native high-performance computing
US11941374B2 (en) Machine learning driven rules engine for dynamic data-driven enterprise application
US20240020593A1 (en) User interface presenting integrated enterprise services
US20210209531A1 (en) Requirement creation using self learning mechanism
US20230021412A1 (en) Techniques for implementing container-based software services
US11321073B1 (en) Utilizing models for replacing existing enterprise software with new enterprise software
US20200356364A1 (en) Project adoption documentation generation using machine learning

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WANG, YU;HU, YU;CAO, HAIYUAN;AND OTHERS;SIGNING DATES FROM 20190304 TO 20190311;REEL/FRAME:056811/0908

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4