CN117492952A - Workflow method, system and device based on big data - Google Patents


Info

Publication number
CN117492952A
CN117492952A (application CN202311453854.0A)
Authority
CN
China
Prior art keywords
node
workflow
data
big data
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311453854.0A
Other languages
Chinese (zh)
Inventor
宋泉河
郭志超
陈盼盼
王新根
杨运平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Bangsheng Technology Co ltd
Original Assignee
Zhejiang Bangsheng Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Bangsheng Technology Co ltd filed Critical Zhejiang Bangsheng Technology Co ltd
Priority to CN202311453854.0A
Publication of CN117492952A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/18File system types
    • G06F16/182Distributed file systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a workflow construction method, system and device based on big data. Based on a data bridging component, workflow node data is connected to a big data platform, decoupling the workflow from node execution so that node data computation runs on the big data platform. The big data platform generates a corresponding path for each node in the workflow, used to store the intermediate data result produced when the node is computed and executed; the input of a subsequent node only needs to read the output path of the previous node. Each node is independently packaged as a task of the corresponding workflow, enabling custom scheduling and execution at the node level; after a task executes successfully, its result is stored in the distributed file system HDFS for use by subsequent nodes. By introducing a big data platform and a data bridging component, the invention supports big data scenarios and custom node scheduling; node extensibility is strong and new nodes are easy to add, so the method is suitable for complex workflow scenarios.

Description

Workflow method, system and device based on big data
Technical Field
The present invention relates to the field of workflow modeling, and in particular, to a workflow method, system and device based on big data.
Background
A workflow (WorkFlow) decomposes work into a set of tasks and produces feedback through the configuration of tasks, their trigger conditions, execution order, information transfer, operation monitoring, and output of results; it abstracts and describes the business logic between operation steps. In short, a workflow links a series of related tasks and executes them automatically, so that information feedback is obtained intuitively. A DAG (directed acyclic graph) is a directed graph without cycles, and can implement dependency management between nodes. Because WorkFlow and DAG are easy to understand and widely used, the present invention fuses their advantages in its design concept, and additionally designs and implements some unique features to improve convenience during research and development and to enhance the extensibility of the workflow.
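The dependency management between nodes that a DAG provides can be sketched as a minimal topological ordering (Kahn's algorithm). The node names and edge structure below are hypothetical illustrations, not components from the patent.

```python
from collections import deque

def topological_order(edges):
    """Return the nodes of a DAG in dependency order (Kahn's algorithm).

    edges maps each node to the list of nodes that depend on it.
    Raises ValueError if the graph contains a cycle (i.e. is not a DAG).
    """
    indegree = {n: 0 for n in edges}
    for downstream in edges.values():
        for node in downstream:
            indegree[node] = indegree.get(node, 0) + 1
    ready = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for nxt in edges.get(node, []):
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(indegree):
        raise ValueError("graph contains a cycle, not a DAG")
    return order

# Hypothetical modeling flow: extract -> preprocess -> features -> train
flow = {"extract": ["preprocess"], "preprocess": ["features"],
        "features": ["train"], "train": []}
```

Running nodes in this order guarantees every node sees its upstream outputs before it executes, which is exactly the property the workflow engine relies on.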
Conventional workflow implementations suffer from low flow reuse, weak extensibility, and loose data coupling. In the data processing flow of data analysis modeling shown in Fig. 1, a Spark SQL node needs to perform the relevant data preprocessing to extract features, but the flow cannot execute when the structure of the upstream data table is unclear; at that point, table structure information must be obtained manually for the Spark SQL node, and when the modeling flow is complex this greatly increases labor cost. In addition, data analysis modeling requires stages such as data preprocessing, feature engineering, model training, model generation and model prediction. The exploration of a complex model takes a long time and the work of each stage differs greatly, yet the data transfer relations between stages are tightly connected, so in a real business scenario a great deal of time is spent adapting the data transmission between stages; if stage data produced before model generation is overwritten or lost, the whole process may even have to restart.
The following disadvantages exist in the prior art:
1. Support for the types and volume of data exchanged between workflow nodes is insufficient. Current workflows usually execute nodes on the workflow's own server, so when large-scale data is processed, the server's resources become the limit and massive data cannot be handled; the usable data sources are restricted and must be converted manually, which is time-consuming, labor-intensive, prone to conversion errors, and affects result generation.
2. The current flow engine is not flexible enough and is weakly extensible; adding flow nodes is difficult, so it cannot cope with complex workflows. For highly individualized flows it cannot respond quickly: the workflow needs to be extended, but because extensibility is weak, a large amount of flow-engine code must be modified to adapt to a new flow. This is not only time-consuming and laborious, it also reduces the maintainability of the flow engine.
3. Execution scheduling of the workflow is not flexible enough. Current workflow scheduling only supports scheduling at the flow level, so every node in the flow must be executed, even when only a single node or a segment of nodes is actually needed; executing useless nodes wastes resources and greatly harms flow development efficiency. A more flexible, finer-grained scheme is needed to support scheduling and execution at the flow-node level.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a workflow construction method, system and device based on big data.
The aim of the invention is achieved by the following technical scheme. In a first aspect, the invention provides a workflow construction method based on big data, comprising the following steps:
(1) Business personnel construct the business process of an initial workflow through visual modeling;
(2) Based on the data bridging component, the node data of the workflow is connected to the big data platform, decoupling the workflow from node execution so that node data computation runs on the big data platform;
(3) The big data platform generates a corresponding path for each node in the workflow, used to store the intermediate data result produced after the node is computed and executed; the result is stored in the distributed file system HDFS, realizing data transmission between nodes, and the input of a subsequent node only needs to read the output path of the previous node;
(4) Each node is independently packaged as a task of the corresponding workflow, realizing custom scheduling and execution at the node level; the data bridging component interacts with the big data platform and monitors the task state of the workflow;
(5) After a task executes successfully, its execution result is stored in the distributed file system HDFS for use by subsequent nodes.
Further, when a custom component node is added to the workflow, a definition JSON corresponding to the new node is generated from the required custom component configuration parameters; the front end renders the component's definition JSON, and the back end defines the logic the new node must execute, so that a new node is obtained by extension.
Further, the JSON contains basic information of the custom component, including the component name, component classification, component icon, component port information, the corresponding back-end class information and component attribute information; the component attribute information includes attribute names and type information.
Further, the data bridging component is middleware for interaction between the workflow and the big data platform that decouples the workflow from concrete node execution; its responsibilities include submitting tasks, monitoring task status, obtaining task execution information and execution result information, and returning them to the flow engine.
Further, the nodes of the workflow include data extraction nodes, data preprocessing nodes, feature extraction nodes and machine learning model nodes.
Further, in step (5), when a subsequent node executes, its predecessor nodes need not be executed again: the temporary data can be read directly from the big data platform, and various custom node executions can be realized, including running a single node, running from a given node onward, and running all nodes.
Further, the intermediate temporary data stored in the distributed file system HDFS while the workflow runs can be displayed and exported.
In a second aspect, the invention also provides a workflow construction system based on big data, comprising a workflow construction module, a data bridging module, a big data platform module, a task scheduling module and a result storage module:
the workflow construction module constructs the business process of an initial workflow through visual modeling;
the data bridging module connects the node data of the workflow to the big data platform based on the data bridging component, decoupling the workflow from node execution so that node data computation runs on the big data platform;
the big data platform module generates a corresponding path for each node in the workflow and stores the intermediate data result produced after the node is computed and executed, realizing data transmission between nodes; the input of a subsequent node only needs to read the output path of the previous node;
the task scheduling module independently packages each node as a task of the corresponding workflow, realizes custom scheduling and execution at the node level, interacts with the big data platform through the data bridging component, and monitors the task state of the workflow;
the result storage module stores the execution result in the distributed file system HDFS after a task executes successfully, for use by subsequent nodes.
In a third aspect, the invention further provides a workflow construction device based on big data, comprising a memory and one or more processors; the memory stores executable code, and when the processors execute the executable code they implement the above workflow construction method based on big data.
In a fourth aspect, the invention further provides a computer-readable storage medium on which a program is stored; when executed by a processor, the program implements the above workflow construction method based on big data.
The beneficial effects of the invention are:
1. By introducing a big data platform and a data bridging component, big data scenarios are supported.
2. Intermediate temporary data produced while the workflow runs can be displayed and exported.
3. Custom scheduling of nodes is supported, including running a single node, running from a given node onward, running all nodes, and so on.
4. The components required in the modeling process are integrated, covering data sources, data preprocessing, feature engineering, machine learning, tools and other stages; the integrated components are rich.
5. Node extensibility is strong, and new nodes are easy to add, adapting to complex workflow scenarios.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required by the embodiments or by the description of the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a conventional data processing modeling flow.
FIG. 2 is a schematic diagram of a data processing modeling flow of the present invention.
FIG. 3 is a flow chart of a newly added custom node.
Fig. 4 is a schematic diagram illustrating the execution process of the new flow.
Fig. 5 is a schematic diagram of a workflow construction system based on big data according to the present invention.
Fig. 6 is a block diagram of a workflow construction apparatus based on big data according to the present invention.
Detailed Description
The following describes the embodiments of the present invention in further detail with reference to the drawings.
As shown in Fig. 2, the workflow construction method based on big data provided by the invention is mainly applied to scenarios such as machine learning modeling, distributed scheduling systems and big data processing platforms. It is currently applied in an intelligent learning platform for machine learning modeling, big data processing and so on. The method comprises the following steps:
(1) Business personnel construct the business process of an initial workflow through visual modeling. The nodes of the workflow are the minimum granularity of the flow, and include data extraction nodes, data preprocessing nodes, feature extraction nodes and machine learning model nodes.
(2) Based on the data bridging component, the workflow node data is connected to the big data platform; the business system is only responsible for managing the workflow, decoupling the workflow from node execution so that node data computation runs on the big data platform. This solves the insufficient support of current workflows for big data scenarios. The data bridging component is middleware for interaction between the workflow and the big data platform that decouples the workflow from concrete node execution; its responsibilities include submitting tasks, monitoring task status, obtaining task execution information and execution result information, and returning them to the flow engine.
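The bridging responsibilities listed above — submit a task, monitor its status, fetch its result, and report back to the flow engine — can be sketched as a small middleware class. Everything here (the class names, the in-memory stand-in for the platform, the status strings) is a hypothetical illustration, not the patent's actual implementation.

```python
class DataBridge:
    """Hypothetical middleware between the flow engine and a big data platform."""

    def __init__(self, platform):
        self.platform = platform   # object that actually runs tasks
        self.statuses = {}         # task_id -> last observed status

    def submit_task(self, task_id, payload):
        """Submit a node task to the platform and start tracking it."""
        self.platform.run(task_id, payload)
        self.statuses[task_id] = "RUNNING"
        return task_id

    def monitor(self, task_id):
        """Poll the platform for the task's current status."""
        self.statuses[task_id] = self.platform.status(task_id)
        return self.statuses[task_id]

    def fetch_result(self, task_id):
        """Return result info to hand back to the flow engine, or None."""
        if self.monitor(task_id) != "SUCCESS":
            return None
        return self.platform.result(task_id)


class FakePlatform:
    """Stand-in for the big data platform so the sketch is self-contained."""
    def __init__(self):
        self.results = {}
    def run(self, task_id, payload):
        self.results[task_id] = {"rows": payload.get("rows", 0)}
    def status(self, task_id):
        return "SUCCESS" if task_id in self.results else "UNKNOWN"
    def result(self, task_id):
        return self.results[task_id]
```

The flow engine only ever talks to the bridge, which is what lets node execution move to a different platform without touching the engine.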
(3) The big data platform generates a corresponding path for each node in the workflow and stores the intermediate data result produced by the node's computation in the distributed file system HDFS; when the workflow runs, existing data can be read directly, realizing data transmission between nodes, and the input of a subsequent node only needs to read the output path of the previous node without waiting for upstream nodes to execute again. Because all intermediate data produced while the flow executes is saved, flow scheduling becomes finer-grained and node scheduling more flexible, supporting task scheduling at the node level.
The core goal of the big data platform is the efficient use and deep mining of large-scale data, so as to discover the rules and value in the data and provide intelligent decision support for enterprises and organizations. Typical big data platforms are Hadoop, Spark, etc.
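The per-node output path scheme can be sketched as follows. The path layout and the use of a local dictionary in place of HDFS are assumptions made purely for illustration.

```python
def node_output_path(workflow_id, node_id):
    """Derive a hypothetical HDFS path where a node stores its result."""
    return f"/workflows/{workflow_id}/nodes/{node_id}/output.parquet"

class FakeHdfs:
    """Dictionary standing in for HDFS so the sketch is self-contained."""
    def __init__(self):
        self.files = {}
    def write(self, path, data):
        self.files[path] = data
    def read(self, path):
        return self.files[path]

def run_node(hdfs, workflow_id, node_id, upstream_id, transform):
    """Read the upstream node's output path, compute, write this node's output."""
    data = hdfs.read(node_output_path(workflow_id, upstream_id))
    result = transform(data)
    hdfs.write(node_output_path(workflow_id, node_id), result)
    return result
```

Because every node's output lives at a deterministic path, a node can be re-run in isolation: it just reads its predecessor's path instead of triggering the predecessor.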
(4) Each node is independently packaged as a task of the corresponding workflow and scheduled by a scheduling center, realizing custom scheduling and execution at the node level; the scheduling center interacts with the big data platform through the data bridging component and monitors the task state of the workflow.
(5) After a task executes successfully, its result is stored in the distributed file system HDFS as a Parquet file for use by subsequent nodes. When a subsequent node executes, its predecessor nodes need not be executed again: the temporary data can be read directly from the big data platform. Various custom node executions can thus be realized, including running a single node, running up to a node, running from a node onward, and running all nodes, which improves flow development efficiency and makes flow scheduling more flexible. The intermediate temporary data stored in HDFS while the workflow runs can be displayed and exported. Because tasks are scheduled for execution on the big data platform, the invention significantly improves the capability of processing big data, and the volume of data that can be processed grows by scaling out the big data platform.
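The custom execution modes above — run a single node, run from a node onward, run everything — amount to selecting a slice of the topologically ordered flow; cached outputs cover the nodes that are skipped. The mode names and node list below are hypothetical.

```python
def select_nodes(ordered_nodes, mode, target=None):
    """Pick which nodes of a (topologically ordered) flow to execute.

    mode: 'single' runs only the target node, relying on cached upstream
    output; 'from' runs the target and everything after it; 'all'
    re-runs the whole flow.
    """
    if mode == "all":
        return list(ordered_nodes)
    idx = ordered_nodes.index(target)   # raises ValueError for unknown nodes
    if mode == "single":
        return [ordered_nodes[idx]]
    if mode == "from":
        return ordered_nodes[idx:]
    raise ValueError(f"unknown mode: {mode}")

# Hypothetical flow in dependency order
nodes = ["extract", "preprocess", "features", "train"]
```

Only the selected nodes are submitted as tasks; everything upstream of the slice is served from its stored output path.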
As shown in Fig. 3, to address the problems that the current flow engine is insufficiently flexible and weakly extensible and that adding flow nodes is difficult, the invention supports adding custom component nodes, giving high flexibility and extensibility. When a custom component node is added to the workflow, a definition JSON corresponding to the new node is generated from the required custom component configuration parameters; the front end renders the component's definition JSON, at which point the front end can already use the custom node, and the back end defines the logic the new node must execute, so that a new node is obtained by extension without affecting existing functionality. For example, to add a Spark SQL node, the new node inherits the DefaultNode abstract class. The integration only needs to override the doCheck () method to verify each attribute of the node; component attributes differ, so the implementation logic of different components also differs — for instance, an SQL statement is conventionally checked to judge whether its syntax is correct. The prepreCmpParam () method obtains the parameters for executing the Spark SQL component, which are then submitted to the Spark SQL component program to run. Because the Spark SQL component is a Spark program, the runtime configuration required for Spark operation can be obtained through the prepreCmptConfig () method; a back-end Spark SQL node is realized simply by overriding these methods.
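The extension pattern described above — inherit an abstract node class and override a few hooks — can be sketched in Python. The method names echo those in the text (DefaultNode, doCheck, prepreCmpParam, prepreCmptConfig), but all method bodies and configuration values are illustrative assumptions, not the patent's back-end code.

```python
from abc import ABC, abstractmethod

class DefaultNode(ABC):
    """Hypothetical base class that every workflow node extends."""

    @abstractmethod
    def doCheck(self):
        """Validate the node's attributes; return True if they are valid."""

    def prepreCmpParam(self):
        """Parameters handed to the component program; empty by default."""
        return {}

    def prepreCmptConfig(self):
        """Runtime configuration (e.g. Spark settings); empty by default."""
        return {}


class SparkSqlNode(DefaultNode):
    """Illustrative Spark SQL node: checks its statement, exposes its params."""

    def __init__(self, sql):
        self.sql = sql

    def doCheck(self):
        # Toy syntax check; a real node would parse the SQL properly.
        return self.sql.strip().upper().startswith("SELECT")

    def prepreCmpParam(self):
        return {"sql": self.sql}

    def prepreCmptConfig(self):
        return {"spark.executor.memory": "4g"}  # assumed example setting
```

The flow engine only ever calls the three hooks, so adding a node type never requires modifying engine code — which is the extensibility claim the section makes.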
The JSON contains basic information of the custom component, including the component name, component classification, component icon, component port information, the corresponding back-end class information and component attribute information; the component attribute information includes attribute names and type information.
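A definition JSON carrying the fields listed above might look like the following; every concrete value (component name, icon file, back-end class path, attributes) is invented for illustration.

```python
import json

# Hypothetical definition JSON for a custom Spark SQL component node.
definition = {
    "name": "SparkSQL",
    "classification": "data-preprocessing",
    "icon": "spark_sql.svg",
    "ports": {"inputs": 1, "outputs": 1},
    "backendClass": "com.example.node.SparkSqlNode",  # assumed class path
    "attributes": [
        {"name": "sql", "type": "string"},
        {"name": "timeoutSeconds", "type": "int"},
    ],
}

REQUIRED = {"name", "classification", "icon", "ports", "backendClass", "attributes"}

def validate_definition(defn):
    """Check that a component definition carries every required field."""
    return REQUIRED.issubset(defn)

serialized = json.dumps(definition)  # what the front end would receive and render
```

The front end renders widgets from `attributes`, while `backendClass` tells the engine which node implementation to instantiate.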
As shown in Fig. 4, the invention can also realize a new flow on the basis of the constructed workflow. The process is consistent with the basic workflow construction process; the difference lies in how the steps are implemented, including the implementation logic and standards of flow modeling, the concrete technical scheme of visual modeling, and the task scheduling and execution of flow nodes. Because the new flow is added on the basis of an already constructed workflow, it can be scheduled directly after visual modeling; specific tasks are then executed on the big data platform, the flow can be tested after task results are obtained, and finally the flow goes online.
As shown in Fig. 5, in another aspect, the invention further provides a workflow construction system based on big data, comprising a workflow construction module, a data bridging module, a big data platform module, a task scheduling module and a result storage module:
the workflow construction module constructs the business process of an initial workflow through visual modeling;
the data bridging module connects the node data of the workflow to the big data platform based on the data bridging component, decoupling the workflow from node execution so that node data computation runs on the big data platform;
the big data platform module generates a corresponding path for each node in the workflow and stores the intermediate data result produced after the node is computed and executed, realizing data transmission between nodes; the input of a subsequent node only needs to read the output path of the previous node;
the task scheduling module independently packages each node as a task of the corresponding workflow, realizes custom scheduling and execution at the node level, interacts with the big data platform through the data bridging component, and monitors the task state of the workflow;
the result storage module stores the execution result in the distributed file system HDFS after a task executes successfully, for use by subsequent nodes.
Corresponding to the embodiment of the workflow construction method based on big data, the invention also provides an embodiment of a workflow construction device based on big data.
Referring to Fig. 6, a workflow construction device based on big data according to an embodiment of the invention includes a memory and one or more processors; the memory stores executable code, and the processors are configured to implement the workflow construction method based on big data of the above embodiment when executing the executable code.
The embodiment of the workflow construction device based on big data can be applied to any apparatus with data processing capability, such as a computer. The device embodiment may be implemented by software, or by hardware or a combination of hardware and software. Taking software implementation as an example, the device in the logical sense is formed by the processor of the apparatus reading the corresponding computer program instructions from non-volatile storage into memory and running them. In terms of hardware, Fig. 6 shows a hardware structure diagram of the apparatus where the workflow construction device is located; in addition to the processor, memory, network interface and non-volatile storage shown in Fig. 6, the apparatus generally includes other hardware according to its actual function, which is not described here again.
The implementation process of the functions and roles of each unit in the above device is described in the implementation process of the corresponding steps of the above method and is not repeated here.
Since the device embodiments essentially correspond to the method embodiments, reference is made to the description of the method embodiments for the relevant points. The device embodiments described above are merely illustrative: units illustrated as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of the invention; those of ordinary skill in the art can understand and implement this without inventive effort.
The embodiment of the invention also provides a computer-readable storage medium on which a program is stored; when executed by a processor, the program implements the workflow construction method based on big data of the above embodiment.
The computer-readable storage medium may be an internal storage unit of any of the aforementioned apparatuses with data processing capability, such as a hard disk or memory. It may also be an external storage device of the apparatus, such as a plug-in hard disk, a Smart Media Card (SMC), an SD card or a Flash Card provided on the apparatus; further, it may include both internal storage units and external storage devices. The computer-readable storage medium is used to store the computer program and the other programs and data required by the apparatus, and may also be used to temporarily store data that has been or is to be output.
The above-described embodiments are intended to illustrate the present invention, not to limit it, and any modifications and variations made thereto are within the spirit of the invention and the scope of the appended claims.

Claims (10)

1. A workflow construction method based on big data, characterized by comprising the following steps:
(1) Business personnel construct the business process of an initial workflow through visual modeling;
(2) Based on the data bridging component, the node data of the workflow is connected to the big data platform, decoupling the workflow from node execution so that node data computation runs on the big data platform;
(3) The big data platform generates a corresponding path for each node in the workflow, used to store the intermediate data result produced after the node is computed and executed; the result is stored in the distributed file system HDFS, realizing data transmission between nodes, and the input of a subsequent node only needs to read the output path of the previous node;
(4) Each node is independently packaged as a task of the corresponding workflow, realizing custom scheduling and execution at the node level; the data bridging component interacts with the big data platform and monitors the task state of the workflow;
(5) After a task executes successfully, its execution result is stored in the distributed file system HDFS for use by subsequent nodes.
2. The workflow construction method based on big data according to claim 1, characterized in that when a custom component node is added to the workflow, a definition JSON corresponding to the new node is generated from the required custom component configuration parameters; the front end renders the component's definition JSON, and the back end defines the logic the new node must execute, so that a new node is obtained by extension.
3. The workflow construction method based on big data according to claim 2, characterized in that the JSON contains basic information of the custom component, including the component name, component classification, component icon, component port information, the corresponding back-end class information and component attribute information; the component attribute information includes attribute names and type information.
4. The workflow construction method based on big data according to claim 1, characterized in that the data bridging component is middleware for interaction between the workflow and the big data platform that decouples the workflow from concrete node execution; its responsibilities include submitting tasks, monitoring task status, obtaining task execution information and execution result information, and returning them to the flow engine.
5. The workflow construction method based on big data according to claim 1, characterized in that the nodes of the workflow include data extraction nodes, data preprocessing nodes, feature extraction nodes and machine learning model nodes.
6. The workflow construction method based on big data according to claim 1, characterized in that in step (5), when a subsequent node executes, its predecessor nodes need not be executed again: the temporary data can be read directly from the big data platform, and various custom node executions can be realized, including running a single node, running up to a node, running from a node onward, and running all nodes.
7. The workflow construction method based on big data according to claim 1, characterized in that the intermediate temporary data stored in the distributed file system HDFS while the workflow runs can be displayed and exported.
8. A big data based workflow construction system for implementing the workflow construction method of any of claims 1-7, characterized in that the system comprises a workflow construction module, a data bridging module, a big data platform module, a task scheduling module and a result storage module;
the workflow construction module is used for constructing a business flow of an initial workflow through visual modeling;
the data bridging module is used for connecting the node data of the workflow to the big data platform through the data bridging component, so that the workflow is decoupled from node execution and the node data computation runs on the big data platform;
the big data platform module is used for generating a corresponding path for each node in the workflow and storing the intermediate data result obtained after node computation and execution in the distributed file storage system HDFS, thereby realizing data transmission between nodes: the input of a subsequent node only needs to read the output path of the previous node;
the task scheduling module is used for packaging each node independently as a task of the corresponding workflow, realizing scheduled execution of custom node tasks, interacting with the big data platform through the data bridging component, and monitoring the task states of the workflow;
and the result storage module is used for storing the execution result in the distributed file storage system HDFS after a task executes successfully, for use by subsequent nodes.
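The per-node path scheme implied by claim 8's big data platform module can be sketched as follows: each node writes to its own HDFS directory keyed by workflow and node id, and a successor's input is simply its predecessor's output path. The directory layout is an assumption, not specified by the patent:

```python
def node_output_path(workflow_id: str, node_id: str,
                     base: str = "hdfs:///workflows") -> str:
    """Hypothetical HDFS location for one node's intermediate result."""
    return f"{base}/{workflow_id}/{node_id}/output"


def wire_inputs(workflow_id, dag):
    """Map each node to the output paths of its predecessor nodes,
    so a subsequent node only needs to read the previous node's
    output path, as claim 8 describes."""
    inputs = {n: [] for n in dag}
    for node, succs in dag.items():
        out = node_output_path(workflow_id, node)
        for s in succs:
            inputs[s].append(out)
    return inputs
```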
9. A big data based workflow construction apparatus comprising a memory and one or more processors, the memory having executable code stored therein, wherein the processor, when executing the executable code, implements the big data based workflow construction method according to any one of claims 1-7.
10. A computer-readable storage medium on which a program is stored, characterized in that the program, when executed by a processor, implements the workflow construction method based on big data according to any one of claims 1-7.
CN202311453854.0A 2023-11-03 2023-11-03 Workflow method, system and device based on big data Pending CN117492952A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311453854.0A CN117492952A (en) 2023-11-03 2023-11-03 Workflow method, system and device based on big data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311453854.0A CN117492952A (en) 2023-11-03 2023-11-03 Workflow method, system and device based on big data

Publications (1)

Publication Number Publication Date
CN117492952A true CN117492952A (en) 2024-02-02

Family

ID=89673816

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311453854.0A Pending CN117492952A (en) 2023-11-03 2023-11-03 Workflow method, system and device based on big data

Country Status (1)

Country Link
CN (1) CN117492952A (en)

Similar Documents

Publication Publication Date Title
CN108280023B (en) Task execution method and device and server
CN106910045B (en) Workflow engine design method and system
CN105719126B (en) system and method for scheduling Internet big data tasks based on life cycle model
CN105630488A (en) Docker container technology-based continuous integration realizing method
CN103646104A (en) Hard real-time fault diagnosis method and system
CN111124379B (en) Page generation method and device, electronic equipment and storage medium
CN112363913B (en) Parallel test task scheduling optimizing method, device and computing equipment
CN114281653B (en) Application program monitoring method and device and computing equipment
CN113535141A (en) Database operation code generation method and device
CN111427665A (en) Quantum application cloud platform and quantum computing task processing method
Bauer et al. Reusing system states by active learning algorithms
Deantoni Modeling the behavioral semantics of heterogeneous languages and their coordination
Schönberger et al. Algorithmic support for model transformation in object‐oriented software development
CN113687927A (en) Method, device, equipment and storage medium for scheduling and configuring flash tasks
CN110908644A (en) Configuration method and device of state node, computer equipment and storage medium
CN117492952A (en) Workflow method, system and device based on big data
CN113495723B (en) Method, device and storage medium for calling functional component
CN112130849B (en) Code automatic generation method and device
CN112418796B (en) Sub-process task node activation method and device, electronic equipment and storage medium
CN111290855B (en) GPU card management method, system and storage medium for multiple GPU servers in distributed environment
CN111274750B (en) FPGA simulation verification system and method based on visual modeling
CN110738384B (en) Event sequence checking method and system
CN113708971A (en) Openstack cloud platform deployment method and related device
CN113722020B (en) Interface calling method, device and computer readable storage medium
Li et al. Specifying Complex Systems in Object-Z: A Case Study of Petrol Supply Systems.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination