CN106920034B - Method and system for mining BPMN (business process modeling notation) compilation flow parallelism - Google Patents

Info

Publication number
CN106920034B
Authority
CN
China
Prior art keywords
dependency
gateway
tasks
flow
streams
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710067985.3A
Other languages
Chinese (zh)
Other versions
CN106920034A (en)
Inventor
代飞
刘妙
王博
谢仲文
赵娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Forestry University
Original Assignee
Southwest Forestry University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Forestry University filed Critical Southwest Forestry University
Priority to CN201710067985.3A priority Critical patent/CN106920034B/en
Publication of CN106920034A publication Critical patent/CN106920034A/en
Application granted granted Critical
Publication of CN106920034B publication Critical patent/CN106920034B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06Q10/067 Enterprise or organisation modelling (G06Q10/06 Resources, workflows, human or project management)
    • G06F8/35 Creation or generation of source code, model driven (G06F8/00 Arrangements for software engineering)
    • G06Q10/0631 Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/0633 Workflow analysis

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Development Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Devices For Executing Special Programs (AREA)

Abstract

The invention discloses a method and a system for mining the parallelism of a BPMN compilation process. The method comprises the following steps: extracting the basic relationships between tasks from the compilation process according to structural features and constructing a basic relationship matrix; analyzing the dependency relationships among tasks and constructing a dependency relationship matrix; constructing a dependency graph according to the dependency relationship matrix; and converting the dependency graph according to conversion rules to obtain a reconstructed compilation process. From the dependency perspective, the invention changes task pairs that have a sequence relationship but no data dependence in the BPMN compilation process from serial execution into parallel execution, so as to improve compilation efficiency. The invention belongs to the field of business process model reconstruction in business process management.

Description

Method and system for mining BPMN (business process modeling notation) compilation flow parallelism
Technical Field
The invention relates to the fields of business process management and business process model reconstruction, and in particular to a method and a system for mining the parallelism of a BPMN compilation process, so as to improve compilation efficiency.
Background
With the widespread use of BPM technology, enterprises are modeling more and more compilation processes using BPMN 2.0 (Business Process Modeling Notation 2.0). Because modeling a business process is time-consuming and error-prone, and the capabilities of modelers vary, the quality of the resulting compilation processes also varies greatly.
After modeling a compilation process, the modeler needs to consider a question: how efficient is the compilation process? In the actual business, certain tasks could be performed concurrently, yet in the modeled compilation process these tasks are performed serially. How to change serially executed tasks in a compilation process into parallel executed tasks, and thereby improve the efficiency of the compilation process, has therefore become a difficult problem in business process reconstruction research.
Although the literature proposes methods for changing serially executed transitions in a Petri net into parallel executed transitions, no work has yet been found that changes serially executed tasks in a BPMN compilation process into parallel executed tasks. For a business process described by a Petri net, the prior literature proposes such a method from the perspective of data reading and writing; for a software evolution process described by a Petri net, the literature proposes such a method from the perspective of correlation. Compared with the basic modeling elements of a Petri net (places, transitions, arcs, and tokens), BPMN has richer modeling elements, including start events, end events, tasks, gateways, and sequence flows, so the methods proposed in the literature cannot be directly applied to a BPMN compilation process to improve its efficiency. A method for mining the parallelism of a BPMN compilation process is therefore needed.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a method for mining the parallelism of a BPMN compilation process, which, from the dependency perspective, changes task pairs that have a sequence relationship but no data dependence in the BPMN compilation process from serial execution into parallel execution.
To solve the above technical problem, the invention provides a method for mining the parallelism of a BPMN compilation process, comprising the following steps:
extracting the basic relationship between tasks from the compiling flow according to the structural characteristics, and constructing a basic relationship matrix;
analyzing the dependency relationship among tasks and constructing a dependency relationship matrix;
constructing a dependency graph according to the dependency relationship matrix;
and converting the dependency graph according to a conversion rule to obtain a compilation process.
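The criterion underlying these steps can be sketched in a few lines. The sketch below is illustrative only (the task encoding and function names are assumptions, not the patent's API): among serially ordered tasks with known read/write sets, a pair may be executed in parallel when no positive, anti- or output dependence links it (control dependence is omitted for brevity).

```python
from itertools import combinations

def parallelizable_pairs(order, reads, writes):
    """Return ordered task pairs that carry no data dependence."""
    pairs = []
    for t1, t2 in combinations(order, 2):       # t1 precedes t2 in the serial order
        data_dep = (writes[t1] & reads[t2]      # positive (flow) dependence
                    or reads[t1] & writes[t2]   # anti-dependence
                    or writes[t1] & writes[t2]) # output dependence
        if not data_dep:
            pairs.append((t1, t2))
    return pairs
```

For example, with writes = {"a": {"x", "y"}, "b": {"u"}, "c": {"v"}} and reads = {"a": set(), "b": {"x"}, "c": {"y"}}, only the pair (b, c) is free of data dependence and can be reconstructed into a parallel structure.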
Still further, the method further comprises preprocessing the compilation process: converting compilation processes with different structures but the same semantics into compilation processes with a unified structure according to preprocessing rules.
Still further, the pre-processing rules include at least:
for a start event, if the start event has a plurality of output streams, connecting the start event with the plurality of output streams through a parallel forking gateway, so as to convert it into a start event with one output stream;
and/or, for an end event, if the end event has a plurality of input streams, connecting the end event with the plurality of input streams through an exclusive merging gateway, so as to convert it into an end event with one input stream; and/or, for a task, if the task has a plurality of input streams, connecting the task with the plurality of input streams through an exclusive data merging gateway, so as to convert it into a task with one input stream;
if the task has a plurality of output streams, connecting the task with the plurality of output streams through a parallel forking gateway, so as to convert it into a task with one output stream;
and/or, for nested gateways, if a parallel gateway with a plurality of input streams and a parallel gateway with a plurality of output streams are directly nested and connected, converting the two parallel gateways into one parallel gateway with a plurality of input streams and a plurality of output streams;
if an exclusive data gateway with a plurality of input streams and an exclusive data gateway with a plurality of output streams are directly nested and connected, converting the two exclusive data gateways into one exclusive data gateway with a plurality of input streams and a plurality of output streams.
Further, extracting the relationships among all tasks in the compilation process includes: determining the unique relationship between each pair of tasks, which is one of the sequence relationship, the selection relationship, or the concurrency relationship.
Further, analyzing the dependency relationship among the tasks specifically includes the following steps:
5-1) analyzing positive dependence among tasks,
5-2) analyzing the inverse dependence among tasks,
5-3) analyzing output dependence among tasks,
5-4) analyzing control dependence among tasks.
Further, the step of constructing the dependency graph according to the dependency relationship matrix specifically includes:
6-1) constructing nodes. Constructing each task in the compiling flow into a node;
6-2) constructing an arc, and if a positive dependency or an inverse dependency or an output dependency or a control dependency is met between two tasks, adding the arc between nodes corresponding to the two tasks;
6-3) identifying the dependency type on the arc, and if the positive dependency or the inverse dependency or the output dependency or the control dependency is satisfied between the two tasks, adding a corresponding mark on the arc between the nodes corresponding to the two tasks.
Further, preprocessing the dependency graph, and simplifying the dependency graph according to a simplification rule:
7-1) simplifying the semantics of the dependencies,
7-2) eliminating redundant delivery data dependencies,
7-3) adding a start node and a corresponding arc and an end node and a corresponding arc.
Further, the converting the dependency graph according to the conversion rule to obtain the compilation process specifically includes:
converting nodes except the starting node and the ending node in the dependency graph into tasks in a compiling flow;
and/or converting a single arc in the dependency graph, wherein the dependency type is data dependency, into a sequence flow in the programming flow;
and/or converting a starting node in the dependency graph into a starting event in the programming flow;
and/or converting the end node in the dependency graph into an end event in the compilation flow;
and/or, converting bifurcation arcs in the dependency graph whose dependency types are all data dependence and that have no return arc into a parallel bifurcation gateway with a single input sequence flow and multiple output sequence flows in the compilation process;
and/or, converting bifurcation arcs in the dependency graph whose dependency types are all control dependence and that have no return arc into an exclusive data decision gateway with a single input sequence flow and multiple output sequence flows in the compilation process;
and/or, converting convergence arcs in the dependency graph whose dependency types are all data dependence and that have no return arc into a parallel convergence gateway with multiple input sequence flows and a single output sequence flow in the compilation process;
and/or, converting convergence arcs in the dependency graph whose dependency types are all control dependence and that have no return arc into an exclusive data merging gateway with multiple input sequence flows and a single output sequence flow in the compilation process;
and/or, converting bifurcation arcs in the dependency graph whose dependency types differ, that have no return arc, and whose number of control-dependence arcs is 2, into a directly nested connection of a parallel bifurcation gateway and an exclusive data decision gateway, wherein the parallel bifurcation gateway has 1 input sequence flow and 2 output sequence flows, and the exclusive data decision gateway has 1 input sequence flow and 2 output sequence flows;
and/or, converting convergence arcs in the dependency graph whose dependency types differ and whose number of control-dependence arcs is 2 into a directly nested connection of an exclusive data merging gateway and a parallel convergence gateway in the compilation process, wherein the exclusive data merging gateway has 2 input sequence flows and 1 output sequence flow, and the parallel convergence gateway has 2 input sequence flows and 1 output sequence flow.
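The choice of gateway in the bifurcation rules reduces to inspecting the dependence labels on one node's outgoing arcs. A minimal sketch, assuming a simple "data"/"control" label encoding that is not part of the patent:

```python
def fork_gateway(arc_labels):
    """Pick the BPMN gateway produced for one node's outgoing (non-return) arcs."""
    labels = set(arc_labels)
    if len(arc_labels) < 2:
        return None                                # a single arc stays a plain sequence flow
    if labels == {"data"}:
        return "parallel_fork"                     # all data dependence: parallel bifurcation gateway
    if labels == {"control"}:
        return "exclusive_decision"                # all control dependence: exclusive data decision gateway
    return "parallel_fork+exclusive_decision"      # mixed types: directly nested gateways
```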
Based on the above, the present invention provides a system for mining the parallelism of BPMN compiling process, which comprises:
the relation extractor is used for extracting the relation among all tasks from the compiling flow according to the structural characteristics and constructing a task relation matrix;
the dependency relationship analyzer is used for analyzing the dependency relationship among the tasks and constructing a dependency relationship matrix;
the dependency graph constructor is used for constructing a dependency graph according to the dependency relationship matrix;
and the converter is used for converting the dependency graph according to a conversion rule to obtain a compilation process.
Still further, the system further comprises:
the preprocessor is used for converting compiling flows with different structures but the same semantics into compiling flows with unified structures according to preprocessing rules;
and the dependency graph simplifying device is used for simplifying the dependency graph according to the simplifying rule.
The invention has the beneficial effects that:
the main differences between the present invention and the prior art (e.g., the documents Jin T, Wang J, Yang Y, et al. Refactor Business Processes Models with maximum simulated parallel [ J ]. IEEE Transactions on Services Computing,2016,9(3):456 + 468 and Li T. an Approach to modeling Software Evolution Processes [ M ], Tsinghua University Press,2008) are:
in the prior art, regarding a Business Process based on Petri network description, from the perspective of data reading and writing, a task pair having a sequential relationship but not having a read-write dependent operation in an original Business Process is changed into a concurrent relationship in terms of the sequential relationship between tasks, and an alpha algorithm is used to reconstruct the original Business Process.
Specifically, the present invention is distinguished from the following aspects: 1) the study subjects were different: the object of the document ([ Jin T, Wang J, Yang Y, et al. Refactor Business Process Models with maximally dispersed parallelisms [ J ]. IEEE Transactions on Services Computing,2016]) is the Petri net, while the object of the present invention is the BPMN compilation Process. Compared with a Petri network, modeling elements in the BPMN compiling process are richer, and the method provided by the document cannot be directly applied to the BPMN compiling process so as to improve the efficiency of the compiling process. The method for reconstructing the business process is different: the literature ([ Jin T, Wang J, Yang Y, et al.Refactor Business Process Models with maximum simulated parallelisms [ J ]. IEEE Transactions on Services Computing,2016]) reconstructs the Petri network by using an alpha algorithm, and the invention reconstructs the BPMN Business Process by using a dependency graph and a conversion rule.
The prior art document (Li T. An Approach to Modelling Software Evolution Processes [M]. Tsinghua University Press, 2008), for a software evolution process defined on a Petri net, changes task pairs that have a sequence relationship but no correlation in the original software evolution process into a concurrency relationship from the correlation perspective, and reconstructs the original software evolution process using a correlation diagram and conversion rules. The present invention is distinguished in the following aspects:
1) The research objects are different: the research object of the document is a Petri net, while the research object of the present invention is the BPMN compilation process. Compared with a Petri net, the modeling elements of a BPMN compilation process are richer, and the method proposed in the document cannot be directly applied to a BPMN compilation process to improve its efficiency.
2) The conversion rules are different: the document proposes three conversion rules for converting a correlation graph into a Petri net, while the present application proposes ten conversion rules for converting a dependency graph into a BPMN compilation process.
3) The ways of analyzing the relationships between tasks are different: the document analyzes the correlations among tasks in the original software evolution process manually, while the present invention automatically extracts the dependencies among tasks in the original BPMN compilation process through a dependency analyzer using structural features.
In addition, the technology of the invention has wide application background in the aspects of improving the quality and the efficiency of the business process model.
Drawings
FIG. 1 is a schematic flow chart of a method in one embodiment of the present invention;
FIG. 2 is a schematic diagram of a system architecture in an embodiment of the invention;
fig. 3 is a schematic diagram of the system architecture in a preferred embodiment of the invention.
FIGS. 4(a)-4(d) are schematic diagrams of the preprocessing rules in the present invention;
FIGS. 5(a)-5(d) are schematic diagrams of the structural features of the sequence relationships and selection relationships in the present invention;
FIGS. 6(a)-6(j) are schematic diagrams of the conversion rules in the present invention.
Detailed Description
The principles of the present disclosure will now be described with reference to a few exemplary embodiments. It is understood that these examples are described solely for the purpose of illustration and to assist those of ordinary skill in the art in understanding and working the disclosure, and are not intended to suggest any limitation as to the scope of the disclosure. The disclosure described herein may be implemented in various ways other than those described below.
As used herein, the term "include" and its variants are to be understood as open-ended terms meaning "including, but not limited to". The term "based on" may be understood as "based at least in part on". The term "one embodiment" may be understood as "at least one embodiment". The term "another embodiment" may be understood as "at least one other embodiment".
Fig. 1 is a schematic method flow diagram in an embodiment of the present invention, where a method for mining parallelism of BPMN compiling flows in the embodiment includes the following steps: step S100, extracting the basic relationship between tasks from the compiling flow according to the structural characteristics, and constructing a basic relationship matrix; step S101, analyzing the dependency relationship among tasks and constructing a dependency relationship matrix; step S102, constructing a dependency graph according to the dependency relationship matrix; and step S103, converting the dependency graph according to a conversion rule to obtain a compilation process.
In step S103, the present embodiment proposes ten conversion rules for converting the dependency graph into the BPMN compiling flow.
In step S100, the embodiment uses the structural features to automatically extract the dependency relationship between tasks in the original BPMN compilation process through the dependency relationship analyzer.
Different from the prior art that the Petri network is reconstructed by using an alpha algorithm, in the embodiment, a BPMN business process is reconstructed by using a dependency graph and a conversion rule.
As a preference in the present embodiment, the method further includes preprocessing the compilation process: converting compilation processes with different structures but the same semantics into compilation processes with a unified structure according to preprocessing rules. The preprocessing rules include at least: for a start event, if the start event has a plurality of output streams, connecting the start event with the plurality of output streams through a parallel forking gateway, so as to convert it into a start event with one output stream;
and/or, for an end event, if the end event has a plurality of input streams, connecting the end event with the plurality of input streams through an exclusive merging gateway, so as to convert it into an end event with one input stream; and/or, for a task, if the task has a plurality of input streams, connecting the task with the plurality of input streams through an exclusive data merging gateway, so as to convert it into a task with one input stream;
if the task has a plurality of output streams, connecting the task with the plurality of output streams through a parallel forking gateway, so as to convert it into a task with one output stream;
and/or, for nested gateways, if a parallel gateway with a plurality of input streams and a parallel gateway with a plurality of output streams are directly nested and connected, converting the two parallel gateways into one parallel gateway with a plurality of input streams and a plurality of output streams;
if an exclusive data gateway with a plurality of input streams and an exclusive data gateway with a plurality of output streams are directly nested and connected, converting the two exclusive data gateways into one exclusive data gateway with a plurality of input streams and a plurality of output streams.
The preprocessing process includes, but is not limited to, the following event processing:
1) preprocessing start events
2) Preprocessing end events
3) Preprocessing tasks
4) Preprocessing gateway nesting
Specific preprocessing rules are shown in figs. 4(a)-4(d).
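As an illustration, the task-output preprocessing rule can be sketched as follows. The graph encoding (succ maps a node to its outgoing sequence flows, kind names the node type) and all names are assumptions for illustration, not the patent's representation:

```python
import itertools

_ids = itertools.count()

def normalize_task_outputs(succ, kind):
    """Give every task exactly one output flow by inserting parallel forking gateways."""
    for node in list(succ):
        if kind[node] == "task" and len(succ[node]) > 1:
            gw = "pf%d" % next(_ids)      # freshly inserted parallel forking gateway
            kind[gw] = "parallel_fork"
            succ[gw] = succ[node]         # the gateway takes over the old flows
            succ[node] = [gw]             # the task now has exactly one output
    return succ, kind

succ = {"t1": ["t2", "t3"], "t2": [], "t3": []}
kind = {"t1": "task", "t2": "task", "t3": "task"}
normalize_task_outputs(succ, kind)
```

The other rules (start events, end events, gateway nesting) follow the same rewrite pattern on the incoming or outgoing flow lists.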
As a preference in this embodiment, the basic relationships between tasks extracted from the compilation process include: the sequence relationship, the selection relationship, and the concurrency relationship. The dependency relationship matrix records four kinds of dependence: positive dependence, anti-dependence, output dependence, and control dependence. If a sequence relationship exists between two tasks in the compilation process, but no positive dependence, anti-dependence or output dependence exists between them, the two tasks are considered not to truly have a sequence relationship, and they can be reconstructed into a parallel structure in the subsequent steps.
The above process is as follows: 1) Extract the sequence relationships between tasks. In the compilation process, there are two cases in which two tasks t1 and t2 have a sequence relationship. In the first case, the two tasks are directly connected, as shown in fig. 5(a); the structural feature is: t1• = {t2} ∧ •t2 = {t1} ∧ (t1, t2) ∈ SF. In the second case, the two tasks are connected through a parallel gateway, as shown in fig. 5(b); the structural feature is: (t1, g), (g, t2) ∈ SF ∧ t1• = {g} ∧ •t2 = {g} ∧ g ∈ GPF ∪ GPJ. 2) Extract the selection relationships between tasks. There are two cases in which two tasks t1 and t2 have a selection relationship. In the first case, the two tasks are connected through an exclusive data gateway, as shown in fig. 5(c); the structural feature is: (t1, g), (g, t2) ∈ SF ∧ t1• = {g} ∧ •t2 = {g} ∧ g ∈ GXD ∪ GDM. In the second case, the two tasks are connected through a parallel gateway and an exclusive data gateway, as shown in fig. 5(d); the structural feature is: (t1, g1), (g1, g2), (g2, t2) ∈ SF ∧ t1• = {g1} ∧ •t2 = {g2} ∧ g1 ∈ GPF ∪ GPJ ∧ g2 ∈ GXD ∪ GDM. Because only three relationships exist between tasks in the compilation process, if the relationship between two tasks is neither a sequence relationship nor a selection relationship, it is necessarily a concurrency relationship.
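The structural-feature checks can be sketched as a simple case analysis. The encoding below (post-set/pre-set maps and the gateway classes GPF∪GPJ and GXD∪GDM as plain Python sets) is an assumption for illustration:

```python
def basic_relation(t1, t2, post, pre, parallel_gws, exclusive_gws):
    """Classify the basic relationship between two tasks from the flow structure."""
    if post[t1] == {t2} and pre[t2] == {t1}:
        return "sequence"                          # fig. 5(a): tasks directly connected
    g1 = next(iter(post[t1])) if len(post[t1]) == 1 else None
    g2 = next(iter(pre[t2])) if len(pre[t2]) == 1 else None
    if g1 is not None and g1 == g2:
        if g1 in parallel_gws:
            return "sequence"                      # fig. 5(b): via one parallel gateway
        if g1 in exclusive_gws:
            return "selection"                     # fig. 5(c): via one exclusive data gateway
    if g1 in parallel_gws and g2 in exclusive_gws and g2 in post.get(g1, set()):
        return "selection"                         # fig. 5(d): parallel then exclusive gateway
    return "concurrency"                           # only three relationships are possible
```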
As a preference in this embodiment, analyzing the dependency relationship among tasks specifically includes the following steps:
5-1) analyzing positive dependence among tasks,
5-2) analyzing the inverse dependence among tasks,
5-3) analyzing output dependence among tasks,
5-4) analyzing control dependence among tasks.
Step 3 is implemented by the dependency relationship analyzer. Given tasks t1 and t2 and a data gateway g, and writing R(t) and W(t) for the sets of data items read and written by a task t, "→" for the sequence relationship and "→*" for the transitive sequence relationship, the process is as follows:
1) Analyze the positive dependence between tasks: if t1 →* t2 ∧ W(t1) ∩ R(t2) ≠ ∅, then tasks t1 and t2 satisfy positive dependence.
2) Analyze the anti-dependence between tasks: if t1 →* t2 ∧ R(t1) ∩ W(t2) ≠ ∅, then tasks t1 and t2 satisfy anti-dependence.
3) Analyze the output dependence between tasks: if t1 →* t2 ∧ W(t1) ∩ W(t2) ≠ ∅, then tasks t1 and t2 satisfy output dependence.
4) Analyze the control dependence between tasks: if t1• ∩ •t2 = {g} ∧ g ∈ GXD ∪ GDM, then tasks t1 and t2 satisfy control dependence.
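Under a read/write-set reading of these conditions (R(t), W(t), and the pre/post-sets encoded as plain Python sets — an assumption for illustration, not the patent's notation), the classification of an ordered task pair can be sketched as:

```python
def dependence_kinds(t1, t2, reads, writes, post, pre, xor_gateways):
    """Which of the four dependence kinds hold for an ordered pair (t1, t2)?
    The caller is assumed to pass pairs already in (transitive) sequence order."""
    kinds = set()
    if writes[t1] & reads[t2]:
        kinds.add("positive")            # t1 writes a data item that t2 reads
    if reads[t1] & writes[t2]:
        kinds.add("anti")                # t1 reads a data item that t2 overwrites
    if writes[t1] & writes[t2]:
        kinds.add("output")              # both tasks write the same data item
    shared = post[t1] & pre[t2]          # t1's post-set intersected with t2's pre-set
    if len(shared) == 1 and shared <= xor_gateways:
        kinds.add("control")             # linked through one exclusive data gateway
    return kinds

kinds = dependence_kinds("t1", "t2",
                         {"t1": {"x"}, "t2": {"y"}},   # reads
                         {"t1": {"y"}, "t2": {"x"}},   # writes
                         {"t1": {"g"}}, {"t2": {"g"}}, # post/pre-sets
                         {"g"})                        # exclusive data gateways
```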
Further, the step of constructing the dependency graph according to the dependency relationship matrix specifically includes:
6-1) constructing nodes. Constructing each task in the compiling flow into a node;
6-2) constructing an arc, and if a positive dependency or an inverse dependency or an output dependency or a control dependency is met between two tasks, adding the arc between nodes corresponding to the two tasks;
6-3) identifying the dependency type on the arc, and if the positive dependency or the inverse dependency or the output dependency or the control dependency is satisfied between the two tasks, adding a corresponding mark on the arc between the nodes corresponding to the two tasks.
The process is as follows: 1) Construct the nodes: construct each task in the compilation process into a node. 2) Construct the arcs: if tasks t1 and t2 satisfy positive dependence, anti-dependence, output dependence or control dependence, add an arc between the nodes corresponding to the two tasks. 3) Identify the dependency type on each arc: if tasks t1 and t2 satisfy positive dependence, anti-dependence, output dependence or control dependence, add the corresponding mark on the arc between the nodes corresponding to the two tasks: δ, δ̄, δo and δc, respectively.
Further, preprocessing the dependency graph, and simplifying the dependency graph according to a simplification rule:
7-1) Simplify the semantics of the dependencies. There are three types of data dependence in the dependency graph: positive dependence, anti-dependence and output dependence. When the dependency graph is converted into a compilation process, two tasks cannot be executed concurrently as long as any one of these data dependences exists between them. Therefore, for the purposes of the conversion, the semantics of the positive, anti- and output dependence types in the dependency graph do not differ, and all of them can be unified into the data dependence δd. The processing of this step is: change the dependency type of each arc in the dependency graph into the data dependence δd.
7-2) Eliminate redundant transitive data dependences. In the dependency graph, a transitive data dependence can already be described by other dependency relationships and is therefore redundant. The processing of this step is: delete the arcs in the dependency graph that correspond to transitive data dependences.
7-3) Add a start node with corresponding arcs and an end node with corresponding arcs. The added start node will later be converted into the start event of the compilation process, and the added end node will later be converted into its end event. The processing of this step is: in the dependency graph, add a start node s and a corresponding arc before every node with in-degree 0, setting the dependency type of the arc to data dependence; and add an end node e and a corresponding arc after every node with out-degree 0, setting the dependency type of the arc to data dependence.
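Steps 7-2) and 7-3) can be sketched compactly on a toy arc set (pairs of node names; the δ labels are dropped because 7-1) has already unified them to data dependence, and the encoding is an assumption for illustration):

```python
def simplify(nodes, arcs):
    """Transitive reduction of the arc set, then attach start/end nodes."""
    # 7-2) an arc (a, c) is redundant if some other path a -> ... -> c implies it
    def reachable(a, c, skip):
        stack, seen = [a], set()
        while stack:
            n = stack.pop()
            for m in (d for (s, d) in arcs if s == n and (s, d) != skip):
                if m == c:
                    return True
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        return False

    arcs = {e for e in arcs if not reachable(e[0], e[1], skip=e)}
    # 7-3) start node s before in-degree-0 nodes, end node e after out-degree-0 nodes
    targets = {d for (_, d) in arcs}
    sources = {s for (s, _) in arcs}
    arcs |= {("s", n) for n in nodes if n not in targets}
    arcs |= {(n, "e") for n in nodes if n not in sources}
    return arcs
```

For an acyclic dependency graph, removing every arc that has an alternative path in the original graph yields exactly the unique transitive reduction, which is why the check against the unmodified arc set suffices.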
As a preferred embodiment in the present embodiment, converting the dependency graph according to the conversion rule to obtain the compilation process specifically includes:
as shown in fig. 6(a), converting the nodes in the dependency graph other than the start node and the end node into tasks in the orchestration flow;
and/or, as shown in fig. 6(b), converting a single arc in the dependency graph whose dependency type is data dependency into a sequence flow in the orchestration flow;
and/or, as shown in fig. 6(c), converting the start node in the dependency graph into a start event in the orchestration flow;
and/or, as shown in fig. 6(d), converting the end node in the dependency graph into an end event in the orchestration flow;
and/or, as shown in fig. 6(e), converting forked arcs in the dependency graph whose dependency types are all data dependencies and which have no back arc into a parallel fork gateway with a single input sequence flow and multiple output sequence flows in the orchestration flow;
and/or, as shown in fig. 6(f), converting forked arcs in the dependency graph whose dependency types are all control dependencies and which have no back arc into an exclusive data decision gateway with a single input sequence flow and multiple output sequence flows in the orchestration flow;
and/or, as shown in fig. 6(g), converting converging arcs in the dependency graph whose dependency types are all data dependencies and which have no back arc into a parallel join gateway with multiple input sequence flows and a single output sequence flow in the orchestration flow;
and/or, as shown in fig. 6(h), converting converging arcs in the dependency graph whose dependency types are all control dependencies and which have no back arc into an exclusive data merge gateway with multiple input sequence flows and a single output sequence flow in the orchestration flow;
and/or, as shown in fig. 6(i), converting forked arcs in the dependency graph whose dependency types differ, which have no back arc, and whose number of control-dependency arcs is at least 2, into a parallel fork gateway and an exclusive data decision gateway that are directly nested and connected, wherein the parallel fork gateway carries 1 input sequence flow and 2 output sequence flows, and the exclusive data decision gateway carries 1 input sequence flow and 2 output sequence flows;
and/or, as shown in fig. 6(j), converting converging arcs in the dependency graph whose dependency types differ and whose number of control-dependency arcs is at least 2 into an exclusive data merge gateway and a parallel join gateway that are directly nested and connected in the orchestration flow, wherein the exclusive data merge gateway carries 2 input sequence flows and 1 output sequence flow, and the parallel join gateway carries 2 input sequence flows and 1 output sequence flow.
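The fork-side conversion rules above reduce to a small dispatch on the dependency types of a node's outgoing arcs. The rule labels in the comments refer to fig. 6(e), 6(f) and 6(i); the type strings and gateway names are illustrative:

```python
def fork_gateway(out_arc_types):
    """Pick the gateway for a fork, given the dependency types on its outgoing arcs."""
    if all(t == "delta_d" for t in out_arc_types):
        return "parallel_fork"                         # fig. 6(e): all data dependencies
    if all(t == "delta_c" for t in out_arc_types):
        return "exclusive_decision"                    # fig. 6(f): all control dependencies
    # fig. 6(i): mixed types -> nested parallel fork + exclusive decision pair
    return ("parallel_fork", "exclusive_decision")
```

The converging-arc rules of fig. 6(g), 6(h) and 6(j) mirror this dispatch with the join-side gateways.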
Fig. 2 is a schematic structural diagram of a system in an embodiment of the present invention. The system for mining the parallelism of BPMN orchestration flows includes:
The relation extractor 1 is used for extracting the basic relations among tasks from the orchestration flow according to structural features and constructing a basic relation matrix. The relation extractor 1 proceeds as follows. 1) Extract the sequential relations among tasks. In the orchestration flow, there are two cases in which two tasks t1 and t2 stand in a sequential relation. In the first, the two tasks are directly connected, with the structural feature: t1· = {t2} ∧ ·t2 = {t1} ∧ (t1, t2) ∈ SF. In the second, the two tasks are connected through a parallel gateway, with the structural feature: (t1, g), (g, t2) ∈ SF ∧ t1· = {g} ∧ ·t2 = {g} ∧ g ∈ G_PF ∪ G_PJ. 2) Extract the selective relations among tasks. There are likewise two cases in which two tasks t1 and t2 stand in a selective relation. In the first, the two tasks are connected through an exclusive data gateway, with the structural feature: (t1, g), (g, t2) ∈ SF ∧ t1· = {g} ∧ ·t2 = {g} ∧ g ∈ G_XD ∪ G_DM. In the second, the two tasks are connected through a parallel gateway followed by an exclusive gateway, with the structural feature: (t1, g1), (g1, g2), (g2, t2) ∈ SF ∧ t1· = {g1} ∧ ·t2 = {g2} ∧ g1 ∈ G_PF ∪ G_PJ ∧ g2 ∈ G_XD ∪ G_DM. 3) Since only three relations exist among tasks in an orchestration flow, if the relation between two tasks is neither sequential nor selective, it must be concurrent.
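The structural checks of the relation extractor can be sketched directly over the set of sequence flows. Here SF is a set of (source, target) pairs and gateway_kind maps a gateway node to "parallel" or "exclusive"; all identifiers are illustrative, not from the patent:

```python
def basic_relation(t1, t2, SF, gateway_kind):
    """Classify the basic relation between two tasks from the flow structure."""
    succ = {u: [v for (a, v) in SF if a == u] for u, _ in SF}
    # sequential, case 1: the two tasks are directly connected
    if (t1, t2) in SF:
        return "sequential"
    for g in succ.get(t1, []):
        # sequential, case 2: connected through a single parallel gateway
        if gateway_kind.get(g) == "parallel" and (g, t2) in SF:
            return "sequential"
        # selective, case 1: connected through an exclusive data gateway
        if gateway_kind.get(g) == "exclusive" and (g, t2) in SF:
            return "selective"
        # selective, case 2: parallel gateway followed by an exclusive gateway
        for g2 in succ.get(g, []):
            if (gateway_kind.get(g) == "parallel"
                    and gateway_kind.get(g2) == "exclusive"
                    and (g2, t2) in SF):
                return "selective"
    # only three relations exist, so anything else is concurrent
    return "concurrent"
```

A full extractor would also check the pre-set and post-set cardinality conditions (t1· = {g}, ·t2 = {g}); the sketch keeps only the path shape.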
The dependency relationship analyzer 2 is used for analyzing the dependency relationship among the tasks and constructing a dependency relationship matrix;
Implemented by the dependency relationship analyzer 2. Given tasks t1 and t2 and a data gateway g, the analysis proceeds as follows. 1) Analyze the positive dependence between tasks: if the positive-dependence condition holds (given as a formula drawing in the original patent), then tasks t1 and t2 satisfy a positive dependence. 2) Analyze the anti-dependence between tasks: if the anti-dependence condition holds (given as a formula drawing in the original patent), then tasks t1 and t2 satisfy an anti-dependence. 3) Analyze the output dependence between tasks: if the output-dependence condition holds (given as a formula drawing in the original patent), then tasks t1 and t2 satisfy an output dependence. 4) Analyze the control dependence between tasks: if ·t1 ∩ t2· = {g} and g is a data gateway, then tasks t1 and t2 satisfy a control dependence. Here "→" denotes the sequential relation and "→+" the transitive sequential relation.
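The positive, anti- and output dependence conditions survive only as drawings in this text, so the sketch below substitutes the standard read/write-set formulation as an assumption rather than the patent's exact formulas. seq stands for the transitive sequential relation t1 →+ t2, and R/W map a task to the data it reads/writes:

```python
def dependence(t1, t2, seq, R, W, shared_data_gateway=False):
    """Classify the dependences between two tasks (read/write sets are an assumption)."""
    deps = set()
    if seq(t1, t2):
        if W[t1] & R[t2]:
            deps.add("positive")   # t1 writes data that t2 reads
        if R[t1] & W[t2]:
            deps.add("anti")       # t1 reads data that t2 overwrites
        if W[t1] & W[t2]:
            deps.add("output")     # both tasks write the same data
    if shared_data_gateway:        # ·t1 ∩ t2· = {g}, g a data gateway
        deps.add("control")
    return deps
```

The returned set feeds the dependency relationship matrix: one row per ordered task pair, one flag per dependence kind.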
The dependency graph constructor 3 is used for constructing a dependency graph according to the dependency relationship matrix. The dependency graph constructor 3 proceeds as follows. 1) Construct the nodes: each task in the orchestration flow becomes one node. 2) Construct the arcs: if tasks t1 and t2 satisfy a positive, anti-, output or control dependence, an arc is added between the nodes corresponding to the two tasks. 3) Mark the dependency type on each arc: if tasks t1 and t2 satisfy a positive, anti-, output or control dependence, the corresponding mark is added on the arc between the nodes corresponding to the two tasks: δ for positive dependence, the anti-dependence mark (rendered as a drawing in the original patent), δo for output dependence, and δc for control dependence.
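The constructor's three steps can be sketched as a direct translation of the dependency matrix into a marked graph. The mark strings are illustrative stand-ins for δ, the anti-dependence mark, δo and δc:

```python
MARKS = {"positive": "delta", "anti": "delta_a",
         "output": "delta_o", "control": "delta_c"}

def build_dependency_graph(tasks, dep_matrix):
    """dep_matrix: dict (t1, t2) -> set of dependence kinds between the pair."""
    nodes = list(tasks)          # 1) one node per task in the orchestration flow
    arcs = {}                    # 2)-3) one marked arc per dependent task pair
    for (t1, t2), kinds in dep_matrix.items():
        if kinds:                # any of the four dependences warrants an arc
            arcs[(t1, t2)] = {MARKS[k] for k in kinds}
    return nodes, arcs
```

A pair carrying several dependences gets one arc bearing all of its marks, matching steps 2) and 3) above.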
The converter 4 is configured to convert the dependency graph according to conversion rules to obtain the orchestration flow, applying at least the following 10 rules.
1) Rule one converts the nodes in the dependency graph other than the start node and the end node into tasks in the orchestration flow.
2) Rule two converts a single arc in the dependency graph whose dependency type is δd into a sequence flow in the orchestration flow.
3) Rule three converts the start node in the dependency graph into a start event in the orchestration flow.
4) Rule four converts the end node in the dependency graph into an end event in the orchestration flow.
5) Rule five converts forked arcs in the dependency graph whose dependency types are all δd and which have no back arc into a parallel fork gateway with a single input sequence flow and multiple output sequence flows in the orchestration flow.
6) Rule six converts forked arcs in the dependency graph whose dependency types are all δc and which have no back arc into an exclusive data decision gateway with a single input sequence flow and multiple output sequence flows in the orchestration flow.
7) Rule seven converts converging arcs in the dependency graph whose dependency types are all δd and which have no back arc into a parallel join gateway with multiple input sequence flows and a single output sequence flow in the orchestration flow.
8) Rule eight converts converging arcs in the dependency graph whose dependency types are all δc and which have no back arc into an exclusive data merge gateway with multiple input sequence flows and a single output sequence flow in the orchestration flow.
9) Rule nine converts forked arcs in the dependency graph whose dependency types differ, which have no back arc, and whose number of control-dependency arcs is at least 2, into a parallel fork gateway and an exclusive data decision gateway that are directly nested and connected, wherein the parallel fork gateway carries 1 input sequence flow and 2 output sequence flows, and the exclusive data decision gateway carries 1 input sequence flow and 2 output sequence flows.
10) Rule ten converts converging arcs in the dependency graph whose dependency types differ and whose number of control-dependency arcs is 2 into an exclusive data merge gateway and a parallel join gateway that are directly nested and connected in the orchestration flow, wherein the exclusive data merge gateway carries 2 input sequence flows and 1 output sequence flow, and the parallel join gateway carries 2 input sequence flows and 1 output sequence flow.
Fig. 3 is a schematic structural diagram of a system in a preferred embodiment of the present invention, and as a preference in this embodiment, the system further includes:
The preprocessor 5 is used for converting orchestration flows with different structures but the same semantics into orchestration flows with a unified structure according to preprocessing rules. The preprocessor 5 proceeds as follows. 1) Preprocess the start event: if the start event has a plurality of output streams, connect it to them through a parallel fork gateway, converting it into a start event with one output stream. 2) Preprocess the end event: if the end event has a plurality of input streams, connect it to them through an exclusive data (event) merge gateway, converting it into an end event with one input stream. 3) Preprocess the tasks: if a task has a plurality of input streams, connect it to them through an exclusive data merge gateway, converting it into a task with one input stream; if a task has a plurality of output streams, connect it to them through a parallel fork gateway, converting it into a task with one output stream. 4) Preprocess gateway nesting: if a parallel gateway with a plurality of input streams and a parallel gateway with a plurality of output streams are directly nested and connected, convert the two into a single parallel gateway with a plurality of input streams and a plurality of output streams; if an exclusive data gateway with a plurality of input streams and an exclusive data gateway with a plurality of output streams are directly nested and connected, convert the two into a single exclusive data gateway with a plurality of input streams and a plurality of output streams.
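Preprocessing rule 1) can be sketched as a rewrite over the list of sequence flows: a start event with several outgoing flows hands its fan-out to a freshly inserted parallel fork gateway. The gateway id scheme is illustrative:

```python
def normalize_start(start, SF):
    """Give a start event a single outgoing flow by inserting a parallel fork gateway."""
    outs = [(u, v) for (u, v) in SF if u == start]
    if len(outs) <= 1:
        return SF                      # already in the unified form
    g = ("pf", start)                  # fresh parallel fork gateway id (illustrative)
    SF = [e for e in SF if e not in outs]
    SF.append((start, g))              # start event keeps exactly one output stream
    SF.extend((g, v) for (_, v) in outs)  # the gateway carries the original fan-out
    return SF
```

Rules 2) and 3) are the mirror images of this rewrite for end events and tasks, using the merge-side gateways.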
The dependency graph simplifier 6 is used for simplifying the dependency graph according to simplification rules. The graph simplifier 6 proceeds as follows. 1) Simplify the semantics of the dependencies. There are three types of data dependencies in the dependency graph: positive, anti- and output dependencies. When a dependency graph is converted into an orchestration flow, two tasks cannot be executed concurrently as long as any data dependency exists between them; therefore, for the purpose of the conversion, the semantics of the positive, anti- and output dependency types are indistinguishable, and all three can be unified into the data dependency δd. This step changes the dependency type of each arc in the dependency graph to the data dependency δd. 2) Eliminate redundant transitive data dependencies. In a dependency graph, a transitive data dependency is already implied by a chain of other dependencies and is therefore redundant. This step deletes the arcs corresponding to the transitive data dependencies in the dependency graph. 3) Add a start node with its arcs and an end node with its arcs. The added start node will later be converted into a start event in the orchestration flow, and the added end node into an end event.
This step adds, in the dependency graph, a start node s and a corresponding arc before every node with in-degree 0, with the dependency type of the arc set to data dependency, and an end node e and a corresponding arc after every node with out-degree 0, likewise with the dependency type set to data dependency.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
In general, the various embodiments of the disclosure may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device. While various aspects of the disclosure may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that the blocks, apparatus, systems, techniques or methods described herein may be implemented in, without limitation, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
Further, while operations are described in a particular order, this should not be understood as requiring that such operations be performed in the order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking or parallel processing may be advantageous. Similarly, while details of several specific implementations are included in the above discussion, these should not be construed as any limitation on the scope of the disclosure, but rather the description of features is directed to specific embodiments only. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.

Claims (7)

1. A method for mining the parallelism of BPMN compiling processes is characterized by comprising the following steps:
extracting the basic relationship between tasks from the compiling flow according to the structural characteristics, and constructing a basic relationship matrix;
analyzing the dependency relationship among tasks and constructing a dependency relationship matrix;
constructing a dependency graph according to the dependency relationship matrix;
converting the dependency graph according to a conversion rule to obtain a compilation process, which comprises the following steps:
converting nodes except the starting node and the ending node in the dependency graph into tasks in a compiling flow;
converting a single arc in the dependency graph, wherein the dependency type is data dependency, into a sequence flow in the compiling flow;
converting a starting node in the dependency graph into a starting event in the compilation flow;
converting the end node in the dependency graph into an end event in the compilation flow;
converting forked arcs in the dependency graph whose dependency types are all data dependencies and which have no back arc into a parallel fork gateway with a single input sequence flow and multiple output sequence flows in the compilation flow;
converting forked arcs in the dependency graph whose dependency types are all control dependencies and which have no back arc into an exclusive data decision gateway with a single input sequence flow and multiple output sequence flows in the compilation flow;
converting converging arcs in the dependency graph whose dependency types are all data dependencies and which have no back arc into a parallel join gateway with multiple input sequence flows and a single output sequence flow in the compilation flow;
converting converging arcs in the dependency graph whose dependency types are all control dependencies and which have no back arc into an exclusive data merge gateway with multiple input sequence flows and a single output sequence flow in the compilation flow;
converting forked arcs in the dependency graph whose dependency types differ, which have no back arc, and whose number n of control-dependency arcs satisfies n = 2, into a parallel fork gateway and an exclusive data decision gateway that are directly nested and connected, wherein the parallel fork gateway carries 1 input sequence flow and 2 output sequence flows, and the exclusive data decision gateway carries 1 input sequence flow and 2 output sequence flows;
converting converging arcs in the dependency graph whose dependency types differ and whose number n of control-dependency arcs satisfies n = 2 into an exclusive data merge gateway and a parallel join gateway that are directly nested and connected in the compilation flow, wherein the exclusive data merge gateway carries 2 input sequence flows and 1 output sequence flow, each input sequence flow representing one converging arc, and the parallel join gateway carries 2 input sequence flows and 1 output sequence flow.
2. The method of claim 1, further comprising preprocessing the compilation flow: converting compilation flows with different structures but the same semantics into compilation flows with a unified structure according to preprocessing rules.
3. The method according to claim 2, characterized in that the pre-processing rules comprise at least:
for the start event, if the start event has a plurality of output streams, connecting the start event with the plurality of output streams through the parallel forking gateways to convert the start event into the start event with one output stream;
and/or, for an end event, if the end event has a plurality of input streams, connecting the end event with the plurality of input streams through the exclusive event merging gateway to convert the end event into an end event with one input stream; and/or, for a task, if the task has a plurality of input streams, connecting the task with the plurality of input streams through the exclusive data merging gateway to convert the task into the task with one input stream;
if the task has a plurality of output streams, connecting the task with the plurality of output streams through the parallel bifurcation gateways so as to convert the task into the task with one output stream;
and/or, for nested gateways: if a parallel gateway with a plurality of input streams and a parallel gateway with a plurality of output streams are directly nested and connected, converting the two parallel gateways into one parallel gateway with a plurality of input streams and a plurality of output streams;
if an exclusive data gateway with a plurality of input streams and an exclusive data gateway with a plurality of output streams are directly nested and connected, converting the two exclusive data gateways into one exclusive data gateway with a plurality of input streams and a plurality of output streams.
4. The method of claim 1, wherein the relations extracted among the tasks in the compilation flow comprise: sequential relations, selective relations, and concurrent relations between two tasks.
5. The method according to claim 1, wherein analyzing dependencies among tasks specifically comprises the steps of:
5-1) analyzing positive dependence among tasks,
5-2) analyzing the inverse dependence among tasks,
5-3) analyzing output dependence among tasks,
5-4) analyzing control dependence among tasks.
6. The method according to claim 1, wherein the step of constructing the dependency graph from the dependency matrix specifically comprises:
6-1) constructing nodes, and constructing each task in the compiling flow into one node;
6-2) constructing the arcs: if a positive, inverse, output or control dependency is satisfied between two tasks, adding an arc between the nodes corresponding to the two tasks;
6-3) identifying the dependency type on each arc: if a positive, inverse, output or control dependency is satisfied between two tasks, adding the corresponding mark on the arc between the nodes corresponding to the two tasks.
7. The method according to claim 1 or 6, characterized in that the dependency graph is preprocessed and the dependency graph is simplified according to a simplification rule:
7-1) simplifying the semantics of the dependencies,
7-2) eliminating redundant delivery data dependencies,
7-3) adding a start node and a corresponding arc and an end node and a corresponding arc.
CN201710067985.3A 2017-02-07 2017-02-07 Method and system for mining BPMN (business process modeling notation) compilation flow parallelism Expired - Fee Related CN106920034B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710067985.3A CN106920034B (en) 2017-02-07 2017-02-07 Method and system for mining BPMN (business process modeling notation) compilation flow parallelism

Publications (2)

Publication Number Publication Date
CN106920034A CN106920034A (en) 2017-07-04
CN106920034B true CN106920034B (en) 2021-04-30

Family

ID=59453558

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710067985.3A Expired - Fee Related CN106920034B (en) 2017-02-07 2017-02-07 Method and system for mining BPMN (business process modeling notation) compilation flow parallelism

Country Status (1)

Country Link
CN (1) CN106920034B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108256814A (en) * 2017-08-18 2018-07-06 平安科技(深圳)有限公司 Item information processing method, device, server and storage medium
CN107545156B (en) * 2017-09-19 2020-03-10 广东工业大学 Software protection technology application sequence construction method based on Petri network
CN109213587B (en) * 2018-09-12 2021-11-09 中国人民解放军战略支援部队信息工程大学 Multi-Stream parallel DAG graph task mapping strategy under GPU platform
CN112839109B (en) * 2021-03-04 2022-07-01 广州市品高软件股份有限公司 Cloud resource arranging method based on cloud function and BPMN (Business Process management) specification

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102650953A (en) * 2011-02-28 2012-08-29 北京航空航天大学 Concurrently-optimized BPMN (Business Process Modeling Notation) combined service execution engine and method
CN106203851A (en) * 2016-07-15 2016-12-07 云南大学 A kind of control stream consistency detecting method towards BPMN2.0 formatting model and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8689060B2 (en) * 2011-11-15 2014-04-01 Sap Ag Process model error correction

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102650953A (en) * 2011-02-28 2012-08-29 北京航空航天大学 Concurrently-optimized BPMN (Business Process Modeling Notation) combined service execution engine and method
CN106203851A (en) * 2016-07-15 2016-12-07 云南大学 A kind of control stream consistency detecting method towards BPMN2.0 formatting model and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Business Process Model and Notation; OBJECT MANAGEMENT GROUP; http://www.omg.org/spec/BPMN/2.0; 20140131; pp. 42-44, 287-300 *
Retrieval and Reconstruction of Business Process Models (业务过程模型检索与重构); Jin Tao; China Doctoral Dissertations Full-text Database, Information Science and Technology; 20140715 (No. 07); pp. I138-24 *
Parallelization of Serial Programs on Multi-core Platforms (多核平台下串行程序的并行化改造); Zhang Peng; China Master's Theses Full-text Database, Information Science and Technology; 20160315 (No. 03); pp. I138-311 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 20201202
Address after: 650000 No. 300 Bailong temple, Yunnan, Kunming
Applicant after: SOUTHWEST FORESTRY University
Address before: 650000 Chenggong County, Kunming City, Yunnan Province, Yunnan University, Chenggong Campus
Applicant before: YUNNAN University
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20210430