CN112364220B - Business process guiding self-learning optimization method - Google Patents

Business process guiding self-learning optimization method

Info

Publication number
CN112364220B
CN112364220B (application CN202011321941.7A)
Authority
CN
China
Prior art keywords
network
hierarchical network
node
nodes
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011321941.7A
Other languages
Chinese (zh)
Other versions
CN112364220A (en
Inventor
暴利花
杨理想
王银瑞
苏洪全
刘海龙
吕宁
黄宁宁
冯小猛
周祥军
宋丽娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 15 Research Institute
Original Assignee
CETC 15 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 15 Research Institute filed Critical CETC 15 Research Institute
Priority to CN202011321941.7A priority Critical patent/CN112364220B/en
Publication of CN112364220A publication Critical patent/CN112364220A/en
Application granted granted Critical
Publication of CN112364220B publication Critical patent/CN112364220B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — Physics
    • G06 — Computing; calculating or counting
    • G06F — Electric digital data processing
    • G06F16/00 — Information retrieval; database structures therefor; file system structures therefor
    • G06F16/90 — Details of database functions independent of the retrieved data types
    • G06F16/906 — Clustering; classification
    • G06F16/901 — Indexing; data structures therefor; storage structures
    • G06F16/9024 — Graphs; linked lists
    • G06N — Computing arrangements based on specific computational models
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/04 — Architecture, e.g. interconnection topology


Abstract

The invention provides a business process guiding self-learning optimization method, comprising the following steps. First, business processes are organized by a mining-pattern-based adaptive algorithm: processes are grouped by a comprehensive-knowledge-driven similar-process classification algorithm and per-class business operation modes are mined by joint learning, completing the adaptation of each business process on the basis of the mined patterns. Second, a global business process intelligent guidance algorithm based on an operation graph and data aggregation is applied, comprising an operation-graph and data-aggregation algorithm built on a co-coupled neural network, a business process intelligent guidance algorithm for shortest-path optimization over a complex graph structure, and a centrality algorithm for the hierarchical network constructed from the graph network. Finally, process self-learning optimization under full-link multi-dimensional data composite guidance is completed by building a guidance environment from intelligent-edge and back-end data-center full-link multi-dimensional data and combining it with deep reinforcement learning. The algorithm can shield irrelevant business domains and realize point-to-point, task-oriented business guidance.

Description

Business process guiding self-learning optimization method
Technical Field
The invention belongs to the technical field of business process understanding, process division, business operation mode mining and guidance, and adaptive organization of business processes, and particularly relates to a business process guiding self-learning optimization algorithm.
Background
In a large-scale business network there exist a large number of similar business processes and function classifications, which require processing such as process understanding and process classification. In addition, in order to control the length of business links, shield irrelevant business domains, and realize shortest-path-optimized, point-to-point, task-oriented business process guidance, the following problems must be addressed: (1) mining business processes is complicated and the mining approach is monolithic, consuming large amounts of manpower and material and financial resources; (2) because business operation is decoupled from data and knowledge, business processes are time- and labor-consuming, have extremely steep learning curves, and require users to intervene in the business loop too many times.
Disclosure of Invention
In order to solve the above technical problems, the invention constructs a virtual guidance environment suitable for deep reinforcement learning, exploiting the rich, multi-dimensional edge-computing and back-end data-center full-link data, and, on the basis of data, knowledge and services, studies a process-optimizing self-learning intelligent model algorithm that realizes rapid iterative trial and error and finds the optimal business process guidance path. The technical solution realizing the purpose of the invention is as follows: a business process guiding self-learning optimization algorithm comprising the following specific steps:
first, organize business processes with a mining-pattern-based adaptive algorithm: classify processes with a comprehensive-knowledge-driven similar-process classification algorithm and mine per-class business operation modes by joint learning, completing the adaptation of each business process on the basis of the mined patterns;
second, apply a global business process intelligent guidance algorithm based on the operation graph and data aggregation, comprising an operation-graph and data-aggregation algorithm based on a co-coupled neural network, a business process intelligent guidance algorithm for shortest-path optimization over a complex graph structure, and a centrality algorithm for the hierarchical network constructed from the graph network;
finally, complete process self-learning optimization under full-link multi-dimensional data composite guidance by building a guidance environment from intelligent-edge and back-end data-center full-link multi-dimensional data and combining it with deep reinforcement learning.
As an improvement, the comprehensive-knowledge-driven similar-process classification algorithm proceeds as follows:
(1) Establish a process model using a Petri net;
(2) Combining the relative positions and logical relations of all elements in a process, iteratively adjust the mapping strategy for the elements of the process model (transitions and places) so as to find the association relations between elements of different processes;
(3) After a stable optimal mapping function is obtained, calculate the similarity coefficient between whole processes;
(4) Define the business process similarity coefficient from the structural similarity coefficient of element pairs and the global structural similarity coefficient, i.e. introduce the element-pair structural similarity coefficient ESS and the global structural similarity coefficient SSM; extrinsic differences of tasks, including but not limited to text labels, are ignored, and specific quantifiable values are given for logical structures of the processes such as concurrency, asynchrony and selection, describing the degree of structural similarity between elements and between processes;
(5) For the business process similarity measure, adopt a coarse-grained calculation method: through automatic study of the internal-structure-based business process similarity measure, quantitatively analyze and calculate the overall degree of similarity between all mapped elements on the basis of the optimal mapping function obtained by adjustment, finally determining a process similarity coefficient in [0,1];
(6) Compare and evaluate the different process versions generated in different periods, locate the position and type of changes, and establish a management knowledge base for improving the evolution efficiency of processes or promoting adaptation when the environment changes.
As an improvement, the joint-learning mining of per-class business operation modes is realized by representing each event sequence as a time graph, mining sequential patterns with a time-interval-aware sequential pattern mining algorithm, and finally performing clustering division.
As an improvement, the operation-graph and data-aggregation algorithm based on the co-coupled neural network adopts a Bayesian network to describe the operating mechanism of business operation, and realizes aggregation and fused representation learning of the operation graph, knowledge and data on the basis of a parameter-sharing cross-training mechanism.
The nodes of the network correspond to the conditions and constraints in the operating-mechanism ontology description of the process business network, and to concepts such as plan fragments, actions and the entities involved in plans; the edges of the network correspond to the logical relations between the concepts represented by the nodes, including the dependency of plans and their sub-concepts on other actions, plan fragments, conditions and constraints in the operating-mechanism ontology description of the process business network. The plan fragments in the operating-mechanism ontology description contain the relations between entities and actions, as well as the timing relations within a plan.
as an improvement, the business process intelligent guiding algorithm of the shortest path optimization of the complex graph structure is realized by constructing a hierarchical network through a graph network and adopting an iterative calculation method of the hierarchical network;
when the graph network constructs a hierarchical network, for each level of hierarchical network from low to high, all nodes in the network are used as common nodes and the following iteration is carried out:
(1) selecting a common node with the largest degree as a central node, and aggregating common neighbors of the common node except the super nodes to form a super node;
(2) reconstructing the connecting edges among the nodes, and directing the connecting edges pointing to the members in the super nodes to the super nodes, wherein a plurality of connecting edges among the same node pairs are combined into one connecting edge;
(3) continuing the aggregation process until all nodes in the network are aggregated into super nodes, and ending the aggregation iteration process of the current hierarchical network; at this time, all the super nodes obtained by aggregation are used as common nodes in the next-level hierarchical network, so that the scale of the next-level hierarchical network is reduced;
(4) if the number of nodes in the current hierarchical network is lower than a certain threshold, stopping the iterative aggregation process and constructing the hierarchical network, wherein the obtained network is the highest hierarchical network.
As an improvement, the iterative calculation method of the hierarchical network continuously aggregates the center nodes of the network and their neighbors by constructing the hierarchical network, until the original network is converted into a small-scale highest-level hierarchical network; on the basis of the hierarchical network, the shortest path between any two points in the original network is calculated iteratively using the approximate distances from nodes to their center nodes in each level of the hierarchy.
definition of the definition
Figure SMS_1
For the approximate distance of nodes s to t in the i-th hierarchical network, the shortest path distance d= (s, t) of nodes s to t in the original network may be defined by the approximate distance +.>
Figure SMS_2
Obtained, distance->
Figure SMS_3
Can be defined by the approximate distance +.>
Figure SMS_4
Iterative calculation (wherein i is equal to or greater than 0) to obtain approximate distance +.>
Figure SMS_5
Is an iterative calculation method of (a):
Figure SMS_6
c in the above formula s And C t Is the central node of nodes s and t,
Figure SMS_7
representing the approximate distance of node s from its center node c.
As an improvement, the approximate distance d̃_i(s, C_s) is obtained as follows. In the process of constructing the hierarchical network, the algorithm selects the ordinary node of largest degree as a center node and aggregates its ordinary neighbors (the nodes other than super nodes) into a super node, which then serves as an ordinary node in the next-level hierarchical network.
According to the approximate distance d̃_i(s, c) from node s to its center node c in the i-th level hierarchical network, define the radius r_{i+1} of a node in the (i+1)-th level hierarchical network. A node in the level-0 hierarchical network has radius 0 and is at distance 1 from its center node; a node in the level-1 hierarchical network has radius 1 and is at distance 3 from its center node; and so on.
The center nodes C_s and C_t of the i-th level hierarchical network correspond to the ordinary nodes s_{i+1} and t_{i+1} in the (i+1)-th level hierarchical network, and if the radius of a node in the i-th level hierarchical network is r_i, then the radius of a node in the (i+1)-th level network is r_{i+1} = 2 r_i + 1. The radius of a node in the i-th level hierarchical network is therefore approximately
r_i = 2^i − 1
where k denotes the number of networks of different scales in the hierarchy, and the approximate distance from node s to its center node c in the i-th level hierarchical network is calculated as
d̃_i(s, c) = 2 r_i + 1 = 2^{i+1} − 1
Combining the two formulas, the iterative calculation formula for the approximate distance is
d̃_i(s, t) = d̃_{i+1}(s_{i+1}, t_{i+1}) + 2 (2^{i+1} − 1)
where d_{k−1}(s, t), the approximate distance between nodes s and t in the highest-level hierarchical network, is taken equal to the actual distance computed by Dijkstra's algorithm. The deviation contributed by nodes s and t in the highest-level hierarchical network is approximately 2 r_{k−1} + 1 = 2^k − 1.
As an improvement, the centrality algorithms for the hierarchical network constructed from the graph network comprise a closeness centrality algorithm and a betweenness calculation method. The closeness centrality algorithm uses the hierarchical network and the iterative calculation method, on the basis of the shortest-path approximation algorithm, to calculate for every node the sum of its shortest-path distances to all other nodes in the network, obtaining each node's closeness value; sorting all nodes in the network by this value yields the closeness centrality of the nodes in the network.
The betweenness calculation method first iteratively aggregates the original network by constructing the hierarchical network until the highest-level hierarchical network is obtained; it then calculates the shortest paths between nodes in the highest-level hierarchical network with Dijkstra's algorithm and, from these paths, counts the number of shortest paths passing through each node.
As an improvement, a guiding environment is constructed based on intelligent edge and background large center full-link multidimensional data, and the specific method is as follows:
according to the definition of the value function, the value function of the defined strategy pi is as follows:
Figure SMS_17
wherein R(s) represents an unknown return function, which is generally a function of a state, and because the return function is unknown, the return function is subjected to parameter approximation by a function approximation method, and the approximation form can be phi(s), which is a basic function, a polynomial substrate or a Fourier substrate. The inverse reinforcement learning is the coefficient w in the return function.
The feature expectation is defined as:
μ(π) = E[ Σ_{t=0}^∞ γ^t φ(s_t) | π ]
so that V(π) = w^T μ(π).
given m expert trajectories, the expert strategy is characterized by the following expectations:
Figure SMS_19
The goal is to find a policy whose performance is close to that of the expert policy. Using feature expectations to represent the quality of a policy, a found policy π̃ behaves similarly to the expert policy when the following inequality holds:
|| μ(π̃) − μ_E ||_2 ≤ ε
When the inequality holds, then for any weight vector with ||w||_2 ≤ 1 the value functions satisfy:
| w^T μ(π̃) − w^T μ_E | ≤ ||w||_2 · || μ(π̃) − μ_E ||_2 ≤ ε
The normalized form of the objective function is:
max_{t,w} t
s.t. w^T μ_E ≥ w^T μ^(j) + t, j = 0, …, i−1
||w||_2 ≤ 1
The expert policy forms one class and all other policies form the other class; solving for the parameters amounts to finding a hyperplane that separates the expert policy from the other policies while maximizing the margin between the two classes.
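The feature-expectation machinery above can be sketched minimally as follows. This is an illustrative, hypothetical fragment, not the patent's implementation: the function names and the toy feature map φ are invented; it estimates the discounted feature expectation μ from sampled state trajectories and evaluates V(π) = w^T μ(π) for a given weight vector w.

```python
def feature_expectation(trajectories, phi, gamma=0.9):
    """Empirical discounted feature expectation
    mu = (1/m) * sum_i sum_t gamma^t * phi(s_t^(i)),
    as in the expert-policy formula above."""
    m = len(trajectories)
    dim = len(phi(trajectories[0][0]))
    mu = [0.0] * dim
    for traj in trajectories:
        for t, state in enumerate(traj):
            f = phi(state)
            for j in range(dim):
                mu[j] += (gamma ** t) * f[j] / m
    return mu

def policy_value(w, mu):
    """With the linear reward R(s) = w . phi(s), the policy value
    is V(pi) = w . mu(pi)."""
    return sum(wi * mi for wi, mi in zip(w, mu))
```

For example, with φ(s) = [1, s], a single trajectory of states [0, 1] and γ = 0.5, the feature expectation is μ = [1.5, 0.5], and the max-margin constraints above compare such μ vectors against μ_E.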
Beneficial effects: the invention provides a business process guiding self-learning optimization algorithm which uses comprehensive knowledge to divide the business processes of a large-scale business network into categories at both coarse and fine granularity, and adopts a simplified representation for collections of similar business operations. At the same time, combining the business requirements of a specific scenario, prior knowledge and scenario-specific data are fused and distributed into the business process, satisfying the basic operation mode; the specific business process is guided from a global perspective, realizing the optimal operation path at minimal cost. Combining business operations bound with knowledge and data, an optimal path is searched in the large-scale graph structure, controlling the length of business links, shielding irrelevant business domains, and realizing point-to-point, task-oriented business guidance.
Detailed Description
The invention is further described below with reference to examples.
A business process guiding self-learning optimization algorithm comprises the following specific steps:
first, organize business processes with a mining-pattern-based adaptive algorithm: classify processes with a comprehensive-knowledge-driven similar-process classification algorithm and mine per-class business operation modes by joint learning, completing the adaptation of each business process on the basis of the mined patterns;
second, apply a global business process intelligent guidance algorithm based on the operation graph and data aggregation, comprising an operation-graph and data-aggregation algorithm based on a co-coupled neural network, a business process intelligent guidance algorithm for shortest-path optimization over a complex graph structure, and a centrality algorithm for the hierarchical network constructed from the graph network;
finally, complete process self-learning optimization under full-link multi-dimensional data composite guidance by building a guidance environment from intelligent-edge and back-end data-center full-link multi-dimensional data and combining it with deep reinforcement learning.
(I) Business process organization and adaptive technology implementation based on mining patterns
(1) Similarity flow class division algorithm implementation based on comprehensive knowledge driving
1) Establish a process model using a Petri net;
2) Combining the relative positions and logical relations of all elements in a process, iteratively adjust the mapping strategy for the elements of the process model (transitions and places) so as to find the association relations between elements of different processes;
3) After an optimal mapping function that tends to be stable is obtained, calculate the similarity coefficient between whole processes;
4) Define the business process similarity coefficient, i.e. introduce the concepts of the element-pair structural similarity coefficient ESS and the global structural similarity coefficient SSM. Extrinsic differences of tasks (such as text labels) are ignored, and specific quantifiable values are given for the concurrent, asynchronous, selection and other logical structures of a process, describing the structural similarity between elements and between processes.
Element structural similarity coefficient (ESS, Element Structure Similarity): given two Petri-net process models M_1 = (P_1, T_1, A_1) and M_2 = (P_2, T_2, A_2), consider an element p of M_1 (p ∈ P_1 ∪ T_1) and an element q of M_2 (q ∈ P_2 ∪ T_2). The similarity coefficient of p and q is expressed as:
S_pq = α · σ_pq + β · τ_pq
S_pq represents the similarity of element p to element q; L_p and L_q denote the sizes of the left-neighbor sets of p and q, and R_p and R_q the sizes of their right-neighbor sets. Analyzing the similarity of p and q requires simultaneously considering the common and differing parts of the business processes M_1 and M_2. S_pq is determined by the process structures immediately to the left and right of p and q, through the intersection counts and the structural similarity ratio established on the mapping function. The similarity of the left-side structure is denoted σ_pq and its difference γ_pq; the similarity of the right-side structure is denoted τ_pq and its difference δ_pq. α and β denote the weights of the left and right structures in the similarity coefficient; equal weights α = 0.5, β = 0.5 are used in the analysis here.
Global structural similarity coefficient (SSM, Structure Similarity Modulus): the similarity coefficient between processes M_1 and M_2 lies in [0,1] and is expressed as:
S(M_1, M_2) = η · (1/m) Σ_{(p,q) ∈ f_P} S_pq + θ · (1/n) Σ_{(t,u) ∈ f_T} S_tu
The larger the value of S, the higher the similarity of the pair of processes. The optimal place mapping set is denoted f_P and the optimal transition mapping set f_T. η and θ denote the weights of places and transitions respectively: if the transitions are considered more decisive for similarity in the processes, a larger value should be assigned to θ, otherwise a larger value should be assigned to η. For ease of presentation, equal weights η = 0.5, θ = 0.5 are used here. m and n are the lengths of f_P and f_T respectively.
5) Through automatic study of the internal-structure-based business process similarity measure, on the basis of the obtained optimal mapping function, quantitatively analyze and calculate the overall degree of similarity between all mapped elements, finally determining a process similarity coefficient in [0,1]. Compare and evaluate the different process versions generated in different periods, locate the position and type of changes, and establish a management knowledge base, which can improve the evolution efficiency of processes or promote adaptation to environmental changes when new changes occur in the future.
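As a rough illustration of the coefficients above: the exact intersection-based ratios are not fully specified here, so in this hypothetical sketch the Jaccard overlap of the left (preset) and right (postset) neighbor sets stands in for the left/right structural similarities σ and τ, with the equal weights α = β = 0.5 and η = θ = 0.5 used in the text. All function names are illustrative.

```python
def jaccard(a, b):
    """Overlap of two neighbor-label sets; 1.0 when both are empty."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def element_similarity(pre_p, post_p, pre_q, post_q, alpha=0.5, beta=0.5):
    """ESS stand-in: weighted similarity of the left (preset) and right
    (postset) structures of elements p and q (alpha + beta = 1)."""
    return alpha * jaccard(pre_p, pre_q) + beta * jaccard(post_p, post_q)

def global_similarity(place_pair_scores, transition_pair_scores,
                      eta=0.5, theta=0.5):
    """SSM stand-in: average element score over the mapped place pairs
    (f_P) and transition pairs (f_T), weighted by eta and theta."""
    def avg(scores):
        return sum(scores) / len(scores) if scores else 0.0
    return eta * avg(place_pair_scores) + theta * avg(transition_pair_scores)
```

With these definitions the result stays in [0,1], matching the range the text requires for the process similarity coefficient.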
(2) Class-based business operation mode mining implementation based on joint learning
1) Each event sequence is represented by a time graph;
2) Mine sequential patterns with a time-interval-aware sequential pattern mining algorithm;
3) And (5) clustering and dividing.
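A toy sketch of steps 1)–3), under two illustrative assumptions not stated in the text: a time-interval-annotated pattern is taken to be a pair of consecutive events plus a discretized gap, and sequences with identical pattern sets are placed in the same operation-mode class.

```python
def interval_patterns(seq, bucket=5):
    """Mine time-annotated patterns from one event sequence.
    seq is a list of (event, timestamp); each pattern is a pair of
    consecutive events plus their discretized time gap."""
    pats = set()
    for (e1, t1), (e2, t2) in zip(seq, seq[1:]):
        pats.add((e1, e2, (t2 - t1) // bucket))
    return pats

def cluster_by_patterns(sequences, bucket=5):
    """Toy clustering step: sequences with identical pattern sets fall
    into the same operation-mode class."""
    clusters = {}
    for sid, seq in sequences.items():
        key = frozenset(interval_patterns(seq, bucket))
        clusters.setdefault(key, []).append(sid)
    return list(clusters.values())
```

A real implementation would use a proper time-interval sequential pattern miner and a similarity-based clustering, but the shape of the pipeline is the same.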
(3) Service flow self-adaptive organization algorithm implementation based on mining mode
(II) implementation of global business process intelligent guide technology based on operation diagram and data aggregation
(1) Operation map and data convergence algorithm implementation based on co-coupled neural network
1) Bayesian-network description of the operating mechanism of business operation
1.1) The nodes of the network correspond to the conditions and constraints in the operating-mechanism ontology description of the process business network, and to concepts such as plan fragments and the actions of entities involved in plans. Each of these concepts carries a prior probability obtained from experience, statistics or subjective judgment, where C represents a concept or class node in the network.
1.2) The edges of the network correspond to the logical relations between the concepts represented by the nodes, including the dependency of plans and their sub-concepts on other actions, plan fragments, conditions and constraints in the operating-mechanism ontology description; the plan fragments in the ontology description contain the relations between entities and actions, the relations between actions and sub-actions, and the timing relations within a plan. The existence of these relations provides the conditional probabilities P(C|SupC), where SupC denotes the parent node of C.
2) Cross-training mechanism based on parameter sharing to realize aggregation of the operation graph, knowledge and data, and fused representation learning:
2.1) Replace U in the basic model with the fused representation vector matrices U_w and U_s, establishing the co-coupled neural network;
2.2) Train the left and right representation learning models alternately, with the two models sharing U, i.e. passing U to each other during training;
2.3) Iterate repeatedly to obtain node vector representations that fuse both kinds of information, and obtain the corresponding maximized objective function.
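A minimal sketch of the Bayesian-network reading of the operating mechanism in 1.1)–1.2): each node carries either a prior (root node) or a conditional probability P(C|SupC) given its parent, and the joint probability of a complete assignment is the product over nodes. The two-node "plan fragment → action" example and its probabilities are invented for illustration.

```python
def joint_probability(nodes, assignment):
    """nodes maps name -> (parent, table). For a root node, parent is
    None and table[(None, v)] is the prior P(v); otherwise table[(pv, v)]
    is the conditional P(v | parent = pv), i.e. P(C|SupC).
    Returns the joint probability of one complete assignment."""
    p = 1.0
    for name, (parent, table) in nodes.items():
        pv = assignment[parent] if parent is not None else None
        p *= table[(pv, assignment[name])]
    return p

# Illustrative two-node fragment: a plan fragment and a dependent action.
NET = {
    "plan_ok": (None, {(None, True): 0.6, (None, False): 0.4}),
    "action_fires": ("plan_ok", {(True, True): 0.9, (True, False): 0.1,
                                 (False, True): 0.2, (False, False): 0.8}),
}
```

For example, P(plan_ok, action_fires) = 0.6 · 0.9 = 0.54, and the four joint probabilities sum to 1.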
(2) Implementation of intelligent business process guidance algorithm based on shortest path optimization of complex graph structure
1) Constructing a hierarchical network based on a graph network
For each level of hierarchical network from low to high, all nodes in the network are used as common nodes and the following iteration is carried out:
1.1 Selecting a common node with the largest degree as a central node, and aggregating common neighbors (nodes except the super node) of the common node to form a super node;
1.2 Reconstructing the connecting edges among the nodes, and directing the connecting edges pointing to the members in the super node to the super node, wherein a plurality of connecting edges among the same node pairs are combined into one connecting edge;
1.3 Continuing the aggregation process until all nodes in the network are aggregated into super nodes, and ending the aggregation iteration process of the current hierarchical network; at this time, all the super nodes obtained by aggregation are used as common nodes in the next-level hierarchical network, so that the scale of the next-level hierarchical network is reduced;
1.4 If the number of nodes in the current hierarchical network is lower than a certain threshold value, stopping the iterative aggregation process and constructing the hierarchical network, wherein the obtained network is the highest hierarchical network.
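Steps 1.1)–1.4) can be sketched as a single aggregation pass over one level (an illustrative simplification: ties in degree are broken arbitrarily, and the loop over levels with the node-count threshold of 1.4) is left to the caller):

```python
from collections import defaultdict

def build_level(adj):
    """One aggregation pass: repeatedly pick the not-yet-aggregated node
    of largest degree as a center node, fold it and its un-aggregated
    neighbors into a super node, then rebuild the edges between super
    nodes (parallel edges merged, internal edges dropped). Returns the
    node -> super-node membership map and the coarsened adjacency."""
    member, next_sid = {}, 0
    for n in sorted(adj, key=lambda v: len(adj[v]), reverse=True):
        if n in member:
            continue
        member[n] = next_sid
        for nb in adj[n]:
            if nb not in member:
                member[nb] = next_sid
        next_sid += 1
    coarse = defaultdict(set)
    for n, nbs in adj.items():
        for nb in nbs:
            if member[n] != member[nb]:
                coarse[member[n]].add(member[nb])
    return member, dict(coarse)
```

Calling `build_level` repeatedly on its own output shrinks the network level by level, stopping once the node count falls below the chosen threshold; the final coarsened network is the highest-level hierarchical network.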
2) Iterative calculation method for constructing hierarchical network
The algorithm continuously aggregates the central nodes of the network and their neighbors by constructing a hierarchical network until the original network is converted to a very small-scale highest level hierarchical network. The iterative calculation method is based on the hierarchical network, and the shortest path between any two points in the original network is calculated iteratively by using the approximate distance from the node to the central node in each level of hierarchical network.
Define d̃_i(s,t) as the approximate distance from node s to node t in the i-th level hierarchical network. The shortest-path distance d(s,t) from s to t in the original network can be obtained from the approximate distance d̃_0(s,t), and the distance d̃_i(s,t) can in turn be obtained iteratively from the approximate distance d̃_{i+1}(s_{i+1}, t_{i+1}) (where i ≥ 0). Thus we get the iterative rule for the approximate distance:
d̃_i(s,t) = d̃_i(s, C_s) + d̃_{i+1}(s_{i+1}, t_{i+1}) + d̃_i(t, C_t)
where C_s and C_t are the center nodes of nodes s and t, and d̃_i(s, C_s) denotes the approximate distance from node s to its center node.
In the process of constructing the hierarchical network, the algorithm selects a common node with the largest degree as a central node, and aggregates its common neighbors (nodes except the super node) to form a super node, wherein the super node is used as a common node in the next-level hierarchical network. We can rely on the approximate distance of node s to the central node c in the level i hierarchical network
Figure SMS_34
Defining a radius r of a node in the i+1st hierarchical network i+1 . If the radius of the node in the level 0 hierarchical network is 0, the distance from the node to the central node is 1; the radius of a node in a level 1 hierarchical network is 1 and the distance from the node to the center node is 3. And so on.
Center node of i-th level hierarchical network
Figure SMS_35
And->
Figure SMS_36
Respectively correspond to the common nodes s in the i+1st level hierarchical network i+1 And t i+1 The radius of the node in the i-th hierarchical network is r i Radius r of node in i+1st level hierarchical network i+1 =2r i +1. We can then approximate the mean value r of the node radius in the i-th hierarchical network i
Figure SMS_37
Where k represents the number of different scale networks in the hierarchical network, according to the approximate distance of node s to the central node c in the ith hierarchical network
Figure SMS_38
Is calculated according to the formula:
Figure SMS_39
By combining the two formulas, we can further obtain the iterative calculation formula for the approximate distance d_i(s,t):

d_i(s,t) = d_{i+1}(s_{i+1}, t_{i+1}) + 2·(2·r_i + 1)

where d_{k-1}(s,t), the approximate distance between nodes s and t in the highest-level hierarchical network, is taken equal to the actual distance calculated by Dijkstra's algorithm. The distance between nodes s and t in the highest-level hierarchical network may be approximated as 2r_k + 1 = 4k − 5.
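Unrolling this recurrence from the highest level down gives a short sketch of the whole approximation. The per-level correction 2·(2r_i + 1) follows the reconstructed formula above (an assumption of this sketch), and `top_level_dist` stands for the Dijkstra distance between the super nodes containing s and t in the highest-level network:

```python
def approx_distance(top_level_dist, k):
    """Approximate d(s,t) in the original network from the distance computed by
    Dijkstra's algorithm in the highest-level (k-1) hierarchical network, by
    adding the correction 2*(2*r_i + 1) for each level i = 0 .. k-2."""
    correction, r = 0, 0  # r_0 = 0
    for _ in range(k - 1):
        correction += 2 * (2 * r + 1)
        r = 2 * r + 1  # r_{i+1} = 2*r_i + 1
    return top_level_dist + correction

# one level (k = 1): the hierarchy adds no correction
print(approx_distance(5, 1))  # 5
# three levels (k = 3): correction is 2*1 + 2*3 = 8
print(approx_distance(5, 3))  # 13
```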
(3) Centrality algorithm implementation for the hierarchical network constructed from the graph network
1) Closeness centrality algorithm implementation
1.1) On the basis of the shortest path approximation algorithm, use the hierarchical network and the iterative calculation method to compute the sum of shortest path distances from every node to all other nodes, thereby obtaining the closeness value of each node;
1.2) Rank all nodes in the network by this value; the result is the closeness centrality of the nodes in the network.
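As a minimal illustration of steps 1.1)–1.2), the sketch below ranks nodes by their shortest-path distance sums on a small unweighted graph; exact BFS distances stand in for the hierarchical approximation, and the example graph is illustrative:

```python
from collections import deque

def bfs_distances(adj, src):
    """Shortest-path distances from src in an unweighted graph (node -> neighbor list)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def closeness_ranking(adj):
    """Rank nodes by the sum of shortest-path distances to all other nodes
    (smaller sum = more central), as in steps 1.1) and 1.2)."""
    totals = {u: sum(bfs_distances(adj, u).values()) for u in adj}
    return sorted(adj, key=lambda u: totals[u])

# path graph a - b - c - d: the inner nodes b and c are the most central
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}
print(closeness_ranking(adj))  # ['b', 'c', 'a', 'd']
```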
2) Betweenness calculation method
2.1) Perform iterative aggregation on the original network by constructing the hierarchical network until the highest-level hierarchical network is obtained;
2.2) Use Dijkstra's algorithm to calculate the shortest paths between nodes in the highest-level hierarchical network, and from these shortest paths count the number of shortest paths passing through each node.
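The path counting in step 2.2) can be sketched with Brandes' shortest-path accumulation for unweighted graphs (Dijkstra reduces to BFS when all edge weights are 1; this is a stand-in, not the patent's exact procedure):

```python
from collections import deque

def betweenness(adj):
    """Count, for each node, the shortest paths passing through it
    (Brandes' accumulation; each unordered pair is counted twice here)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack = []
        preds = {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1   # number of shortest s-v paths
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:
            v = q.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:                     # w discovered for the first time
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:          # v lies on a shortest path to w
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                                # accumulate in reverse BFS order
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}
print(betweenness(adj))  # {'a': 0.0, 'b': 4.0, 'c': 4.0, 'd': 0.0}
```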
(III) Implementation of the process self-learning algorithm based on full-link multidimensional data composite guidance
(1) Guidance environment construction based on intelligent edge and background large-center full-link multidimensional data
A virtual process-guidance environment is constructed from full-link operation data using inverse reinforcement learning.
According to the definition of the value function, the value function of policy π is:

V(π) = E[ Σ_{t=0}^∞ γ^t·R(s_t) | π ] = w^T · E[ Σ_{t=0}^∞ γ^t·φ(s_t) | π ]

where the unknown return function is approximated as R(s) = w^T·φ(s). The feature expectation is defined as:

μ(π) = E[ Σ_{t=0}^∞ γ^t·φ(s_t) | π ]

so that V(π) = w^T·μ(π).
It should be noted that the feature expectation depends on the policy π: different policies yield different feature expectations.
Given m expert trajectories, we can estimate the feature expectation of the expert policy, by definition, as:

μ̂_E = (1/m) · Σ_{i=1}^m Σ_{t=0}^∞ γ^t·φ(s_t^{(i)})

The goal is to find a policy that behaves similarly to the expert policy. Since feature expectations express the quality of a policy, finding a policy similar to the expert policy amounts to finding one whose feature expectation is close to the expert's, i.e. one that satisfies the inequality:

‖μ(π̃) − μ_E‖_2 ≤ ε
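The expert feature expectation can be estimated directly from trajectories; a minimal sketch, assuming a hypothetical one-hot feature map over three discrete states and γ = 0.9 (both assumptions, not from the patent):

```python
GAMMA = 0.9

def phi(state):
    """Hypothetical feature map: one-hot vector over 3 discrete states."""
    v = [0.0, 0.0, 0.0]
    v[state] = 1.0
    return v

def feature_expectation(trajectories, gamma=GAMMA):
    """Monte-Carlo estimate mu_E = (1/m) * sum_i sum_t gamma^t * phi(s_t)."""
    m = len(trajectories)
    mu = [0.0, 0.0, 0.0]
    for traj in trajectories:
        for t, s in enumerate(traj):
            f = phi(s)
            for d in range(len(mu)):
                mu[d] += (gamma ** t) * f[d] / m
    return mu

expert_trajectories = [[0, 1, 2], [0, 2, 2]]  # m = 2 expert state sequences
print([round(x, 2) for x in feature_expectation(expert_trajectories)])  # [1.0, 0.45, 1.26]
```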
When the inequality holds, then for any weight vector with ‖w‖_1 ≤ 1, the value function satisfies the following inequality:

|w^T·μ(π̃) − w^T·μ_E| ≤ ε
The normalized form of the objective function is:

max_{t,w} t
s.t. w^T·μ_E ≥ w^T·μ^(j) + t, j = 0, …, i−1
‖w‖_2 ≤ 1
The expert policy forms one class and all other policies form the other; solving for the parameters amounts to finding a hyperplane that separates the expert policy from the other policies while maximizing the margin between the two classes.
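The max-margin problem above is a quadratic program. The patent does not specify a solver, so the step below is an illustrative sketch of the projection variant of apprenticeship learning (Abbeel and Ng): it projects μ_E onto the line through the running estimate μ̄ and the newest policy's feature expectation, and the residual gives the new weights w and margin t.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def projection_step(mu_E, mu_bar, mu_new):
    """One projection step: mu_bar' = mu_bar + c*(mu_new - mu_bar) with
    c = <mu_new - mu_bar, mu_E - mu_bar> / ||mu_new - mu_bar||^2;
    then w = mu_E - mu_bar' and t = ||w||."""
    d = [a - b for a, b in zip(mu_new, mu_bar)]
    g = [a - b for a, b in zip(mu_E, mu_bar)]
    c = dot(d, g) / dot(d, d)
    mu_bar_next = [b + c * x for b, x in zip(mu_bar, d)]
    w = [a - b for a, b in zip(mu_E, mu_bar_next)]
    t = math.sqrt(dot(w, w))
    return mu_bar_next, w, t

# expert at (1, 0); running estimate at (0, 0); new policy at (1, 1)
mu_bar, w, t = projection_step([1.0, 0.0], [0.0, 0.0], [1.0, 1.0])
print(mu_bar, w, round(t, 4))  # [0.5, 0.5] [0.5, -0.5] 0.7071
```

In practice this step alternates with a reinforcement-learning step that finds the best policy under the current reward weights w; the loop stops once the margin t falls below ε.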
(2) By combining deep reinforcement learning with the virtual business process environment constructed from full-link data, self-learning and self-optimization of business process guidance can be realized.
The above examples illustrate only a few embodiments of the invention, which are described in detail and are not to be construed as limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.

Claims (5)

1. A business process guiding self-learning optimization method, characterized in that the method comprises the following specific steps:
firstly, organizing service flows based on mining patterns and an adaptive algorithm, using the comprehensive-knowledge-driven similarity flow class division algorithm and joint learning to classify service operation modes, and solving the adaptation problem of each service flow on the basis of the mined patterns;
secondly, building the operation map and global service flow intelligent guidance on data convergence, using an operation map and data convergence algorithm based on a co-coupled neural network, a service flow intelligent guiding algorithm for shortest path optimization on a complex graph structure, and a centrality algorithm for the hierarchical network constructed from the graph network;
finally, completing the process self-learning optimization based on full-link multidimensional data composite guidance by combining deep reinforcement learning with the guidance environment construction based on intelligent edge and background large-center full-link multidimensional data;
the comprehensive knowledge driven similarity flow class division algorithm comprises the following specific steps:
(1) Establishing a flow model by using a Petri network;
(2) Combining the relative positions and logical relations of all elements in the flow, iteratively adjusting the mapping strategy for the elements in the flow model, including transitions and places, so as to find the association relations of elements in different flows;
(3) After a stable optimal mapping function is obtained, calculating the similarity coefficient between the whole processes;
(4) Defining business process similarity coefficients by means of the structural similarity coefficient of element pairs and the global structural similarity coefficient, namely introducing the concepts of an element-pair structural similarity coefficient ESS and a global structural similarity coefficient SSM, ignoring superficial differences in task form, including but not limited to text labels, and assigning specific quantifiable values to the concurrent, asynchronous and selective logical structures of the flows, so as to describe the degree of structural similarity between elements and between flows;
(5) In business process similarity measurement: adopting a coarse-granularity calculation method, quantitatively analyzing and calculating the overall similarity among all mapping elements on the basis of the optimal mapping function obtained through adjustment, via automated business process similarity measurement based on internal structure, and finally determining a process similarity coefficient in the interval [0,1];
(6) Comparing and evaluating different process versions generated in different periods, positioning the mutation position and mutation type, and establishing a management knowledge base for improving the evolution efficiency of the process or promoting different changes to adapt to the change of the environment when facing new changes;
the operation map and data convergence algorithm based on the co-coupled neural network adopts a Bayesian network to describe the operation mechanism of service operation, and realizes convergence and fused representation learning of the operation map, knowledge and data on the basis of a parameter-sharing cross-training mechanism;
the nodes of the network correspond to the conditions and constraints, the actions of plan fragments, and the entities involved in plans in the operation mechanism ontology description of the flow service network; the edges of the network correspond to the logical relations between the concepts represented by the nodes, including the dependency relations of plans and their sub-concepts on other actions, plan fragments, conditions and constraints in the operation mechanism ontology description of the flow service network; the plan fragments in the operation mechanism ontology description contain the relations between entities and actions, and the timing relations within plans;
the business process intelligent guiding algorithm for optimizing the shortest path of the complex graph structure is realized by constructing a hierarchical network through a graph network and adopting an iterative calculation method of the hierarchical network;
when the graph network constructs a hierarchical network, for each level of hierarchical network from low to high, all nodes in the network are used as common nodes and the following iteration is carried out:
(1) selecting a common node with the largest degree as a central node, and aggregating common neighbors of the common node except the super nodes to form a super node;
(2) reconstructing the connecting edges among the nodes, and directing the connecting edges pointing to the members in the super nodes to the super nodes, wherein a plurality of connecting edges among the same node pairs are combined into one connecting edge;
(3) continuing the aggregation process until all nodes in the network are aggregated into super nodes, and ending the aggregation iteration process of the current hierarchical network; at this time, all the super nodes obtained by aggregation are used as common nodes in the next-level hierarchical network, so that the scale of the next-level hierarchical network is reduced;
(4) if the number of nodes in the current hierarchical network is lower than a certain threshold value, stopping the iterative aggregation process and the construction of the hierarchical network, wherein the obtained network is the highest hierarchical network;
the centrality algorithm of the hierarchical network constructed from the graph network comprises a closeness centrality algorithm and a betweenness calculation method; the closeness centrality algorithm, on the basis of the shortest path approximation algorithm, uses the hierarchical network and the iterative calculation method to compute the sum of shortest path distances from every node to all other nodes, thereby obtaining the closeness value of each node; all nodes in the network are then ranked, and the result is the closeness centrality of the nodes in the network;
the betweenness calculation method first performs iterative aggregation on the original network by constructing the hierarchical network until the highest-level hierarchical network is obtained; Dijkstra's algorithm is then used to calculate the shortest paths between nodes in the highest-level hierarchical network, and from these shortest paths the number of shortest paths passing through each node is counted.
2. The business process guided self-learning optimization method of claim 1, wherein: the joint learning classification of service operation modes is realized by representing each event sequence as a time graph, mining the sequence patterns with a sequential pattern mining algorithm that accounts for time intervals, and finally performing clustering division.
3. The business process guided self-learning optimization method of claim 1, wherein: in the iterative calculation method of the hierarchical network, the central nodes of the hierarchical network and their neighbors are continuously aggregated by constructing the hierarchical network until the original network is converted into a small-scale highest-level hierarchical network; on the basis of the hierarchical network, the shortest path between any two points in the original network is calculated iteratively by using the approximate distance from nodes to central nodes in each level of the hierarchical network;
define d_i(s,t) as the approximate distance from node s to node t in the i-th level hierarchical network; the shortest path distance d(s,t) from s to t in the original network is represented by the approximate distance d_0(s,t) of the level-0 hierarchical network, and d_i(s,t) is obtained by iterative calculation from d_{i+1}(s,t) (wherein i ≥ 0); the iterative calculation method for the approximate distance is:

d_i(s,t) = d_{i+1}(s_{i+1}, t_{i+1}) + d_i(s, c_s) + d_i(t, c_t)

in the above formula, c_s and c_t are the central nodes of nodes s and t, and d_i(s, c_s) represents the approximate distance of node s from its central node c_s.
4. The business process guided self-learning optimization method of claim 3, wherein the iterative calculation of the above approximate distance d_i(s,t) comprises the following specific process:
in the process of constructing the hierarchical network, the algorithm selects the common node with the largest degree as a central node and aggregates its common neighbors (nodes other than super nodes) to form a super node, wherein the super node serves as a common node in the next-level hierarchical network;
the approximate distance d_i(s,c) from node s to the central node c in the i-th hierarchical network is used to define the radius r_{i+1} of a node in the (i+1)-th level hierarchical network; when the radius of a node in the level-0 hierarchical network is 0, the distance from the node to the central node is 1; the radius of a node in the level-1 hierarchical network is 1, and the distance from the node to the central node is 3; and so on,
center node of i-th level hierarchical network
Figure QLYQS_10
And->
Figure QLYQS_11
Respectively correspond to the common nodes s in the i+1st level hierarchical network i+1 And t i+1 The radius of the node in the i-th hierarchical network is r i Radius r of node in i+1st level hierarchical network i+1 =2r i +1, thus approximating the mean value r of the node radius in the ith hierarchical network i
Figure QLYQS_12
Where k represents the number of different scale networks in the hierarchical network, the approximate distance of node s to the central node c in the ith hierarchical network
Figure QLYQS_13
Is calculated according to the formula:
Figure QLYQS_14
combining the two formulas to obtain the approximate distance
Figure QLYQS_15
Is defined by the iterative calculation formula:
Figure QLYQS_16
wherein d k-1 (s, t) is shown inThe approximate distance between the nodes s and t in the highest-level hierarchical network is equal to the actual distance calculated by the Dijkstra algorithm; the margin between nodes s and t in the highest level hierarchical network is approximately 2r k +1=4*k-5。
5. The business process guided self-learning optimization method of claim 1, wherein: the method for constructing the guiding environment based on the intelligent edge and background large-center full-link multidimensional data comprises the following steps:
according to the definition of the value function, the value function of policy π is defined as:

V(π) = E[ Σ_{t=0}^∞ γ^t·R(s_t) | π ]

wherein R(s) represents an unknown return function; a function approximation method is used to approximate it parametrically as R(s) = w^T·φ(s), where φ(s) is a basis function, such as a polynomial basis or a Fourier basis; inverse reinforcement learning solves for the coefficients w of the return function;
the defining characteristics are expected to be:
Figure QLYQS_18
given m expert trajectories, the expert strategy is characterized by the following expectations:
Figure QLYQS_19
a policy is sought whose performance is similar to that of the expert policy; using feature expectations to represent the quality of a policy, a policy behaves similarly to the expert policy when the following inequality holds:

‖μ(π̃) − μ_E‖_2 ≤ ε
when the inequality holds, then for any weight vector with ‖w‖_1 ≤ 1, the value function satisfies the following inequality:

|w^T·μ(π̃) − w^T·μ_E| ≤ ε
the normalized form of the objective function is:

max_{t,w} t
s.t. w^T·μ_E ≥ w^T·μ^(j) + t, j = 0, …, i−1
‖w‖_2 ≤ 1
the expert policy forms one class and all other policies form the other; solving for the parameters amounts to finding a hyperplane that separates the expert policy from the other policies while maximizing the margin between the two classes.
CN202011321941.7A 2020-11-23 2020-11-23 Business process guiding self-learning optimization method Active CN112364220B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011321941.7A CN112364220B (en) 2020-11-23 2020-11-23 Business process guiding self-learning optimization method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011321941.7A CN112364220B (en) 2020-11-23 2020-11-23 Business process guiding self-learning optimization method

Publications (2)

Publication Number Publication Date
CN112364220A CN112364220A (en) 2021-02-12
CN112364220B true CN112364220B (en) 2023-07-11

Family

ID=74534194

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011321941.7A Active CN112364220B (en) 2020-11-23 2020-11-23 Business process guiding self-learning optimization method

Country Status (1)

Country Link
CN (1) CN112364220B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114357029B (en) * 2022-01-04 2022-09-02 工银瑞信基金管理有限公司 Method, device, equipment and medium for processing service data

Citations (2)

Publication number Priority date Publication date Assignee Title
CN109688056A (en) * 2018-12-07 2019-04-26 南京理工大学 Intelligent Network Control System and method
CN109684471A (en) * 2018-12-29 2019-04-26 上海晏鼠计算机技术股份有限公司 A kind of application method of innovative AI intelligent text processing system in new retail domain

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8175991B2 (en) * 2008-01-31 2012-05-08 Ca, Inc. Business optimization engine that extracts process life cycle information in real time by inserting stubs into business applications

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN109688056A (en) * 2018-12-07 2019-04-26 南京理工大学 Intelligent Network Control System and method
CN109684471A (en) * 2018-12-29 2019-04-26 上海晏鼠计算机技术股份有限公司 A kind of application method of innovative AI intelligent text processing system in new retail domain

Non-Patent Citations (1)

Title
Network Representation Learning Based on Neighbor Node and Relation Model Optimization; Ye Zhonglin; Zhao Haixing; Zhang Ke; Zhu Yu; Xiao Yuzhi; Journal of Computer Research and Development (12); full text *

Also Published As

Publication number Publication date
CN112364220A (en) 2021-02-12

Similar Documents

Publication Publication Date Title
Alves et al. A review of interactive methods for multiobjective integer and mixed-integer programming
Nayak et al. 25 years of particle swarm optimization: Flourishing voyage of two decades
Zhao et al. A surrogate-assisted multi-objective evolutionary algorithm with dimension-reduction for production optimization
CN104951425B (en) A kind of cloud service performance self-adapting type of action system of selection based on deep learning
Zhao et al. A classification-based surrogate-assisted multiobjective evolutionary algorithm for production optimization under geological uncertainty
Li et al. A survey on firefly algorithms
Ahmadianfar et al. Extracting optimal policies of hydropower multi-reservoir systems utilizing enhanced differential evolution algorithm
Tang et al. Clustering big IoT data by metaheuristic optimized mini-batch and parallel partition-based DGC in Hadoop
CN109726228A (en) A kind of Cutting data integrated application method under big data background
CN102945283B (en) A kind of semantic Web service combination method
Zhao et al. Decomposition-based evolutionary algorithm with automatic estimation to handle many-objective optimization problem
Zeng et al. Whale swarm algorithm with the mechanism of identifying and escaping from extreme points for multimodal function optimization
Wang et al. A multi-objective binary harmony search algorithm
CN112364220B (en) Business process guiding self-learning optimization method
Basto-Fernandes et al. A survey of diversity oriented optimization: Problems, indicators, and algorithms
Liang et al. Surrogate-assisted Phasmatodea population evolution algorithm applied to wireless sensor networks
Kalifullah et al. Retracted: Graph‐based content matching for web of things through heuristic boost algorithm
Pelikan et al. Getting the best of both worlds: Discrete and continuous genetic and evolutionary algorithms in concert
Korejo et al. Multi-population methods with adaptive mutation for multi-modal optimization problems
Michelakos et al. A hybrid classification algorithm evaluated on medical data
Yu et al. Community detection in the textile-related trade network using a biased estimation of distribution algorithm
Kannimuthu et al. Discovery of interesting itemsets for web service composition using hybrid genetic algorithm
Gao et al. An efficient evolutionary algorithm based on deep reinforcement learning for large-scale sparse multiobjective optimization
Hodashinsky Methods for improving the efficiency of swarm optimization algorithms. A survey
CN108829846A (en) A kind of business recommended platform data cluster optimization system and method based on user characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210310

Address after: 210000 rooms 1201 and 1209, building C, Xingzhi Science Park, Qixia Economic and Technological Development Zone, Nanjing, Jiangsu Province

Applicant after: Nanjing Xingyao Intelligent Technology Co.,Ltd.

Address before: Room 1211, building C, Xingzhi Science Park, 6 Xingzhi Road, Nanjing Economic and Technological Development Zone, Jiangsu Province, 210000

Applicant before: Nanjing Shixing Intelligent Technology Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20210420

Address after: 100000 No. 211 middle Fourth Ring Road, Haidian District, Beijing

Applicant after: NO.15 INSTITUTE OF CHINA ELECTRONICS TECHNOLOGY Group Corp.

Address before: 210000 rooms 1201 and 1209, building C, Xingzhi Science Park, Qixia Economic and Technological Development Zone, Nanjing, Jiangsu Province

Applicant before: Nanjing Xingyao Intelligent Technology Co.,Ltd.

GR01 Patent grant