CN111142867B - Service visual arrangement system and method under micro service architecture - Google Patents


Info

Publication number
CN111142867B
CN111142867B (application number CN201911416158.6A)
Authority
CN
China
Prior art keywords
flow
api
service
node
execution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911416158.6A
Other languages
Chinese (zh)
Other versions
CN111142867A (en)
Inventor
陆才慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guyun Technology Guangzhou Co ltd
Original Assignee
Guyun Technology Guangzhou Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guyun Technology Guangzhou Co ltd filed Critical Guyun Technology Guangzhou Co ltd
Priority to CN201911416158.6A priority Critical patent/CN111142867B/en
Publication of CN111142867A publication Critical patent/CN111142867A/en
Application granted granted Critical
Publication of CN111142867B publication Critical patent/CN111142867B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00: Arrangements for software engineering
    • G06F 8/30: Creation or generation of source code
    • G06F 8/34: Graphical or visual programming
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/448: Execution paradigms, e.g. implementations of programming paradigms
    • G06F 9/4482: Procedural

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention relates to the field of micro-service applications, and in particular to a service visual orchestration system and method under a micro-service architecture. The system comprises an orchestration flow monitoring module for visually displaying the running state of flows; a visual flow orchestration module for orchestrating API combinations into new flows; an API service management module for republishing an orchestrated flow as a new API and managing it; a rule management module for unified creation and management of the rules for the business logic processing required during flow orchestration; a scheduling frequency configuration module for setting a run frequency for an orchestrated flow and scheduling it accordingly; and a flow execution engine module for constructing nodes in memory according to the orchestrated node flow, pushing execution forward, and finally outputting the flow execution result to the calling end. The invention enables visual orchestration and monitoring of the large number of API interfaces of micro-services, and constructs the orchestrated nodes directly in memory for execution and scheduling.

Description

Service visual arrangement system and method under micro service architecture
Technical Field
The invention relates to the field of micro-service application programs, in particular to a service visualization arrangement system and method under a micro-service architecture.
Background
A micro-service architecture is a technique for deploying applications and services in the cloud. Each micro-service runs in its own process and communicates through lightweight mechanisms such as an HTTP API. The key point is that every service runs in its own process; this distinguishes a micro-service architecture from merely exposing services as an API within an existing system, where many services are confined to one internal process and adding a function to any one service means modifying and redeploying that whole process. Under a micro-service architecture, a required function only needs to be added to the specific service concerned, without affecting the overall architecture.
An API (Application Programming Interface) is a set of predefined functions or conventions that connect the different components of a software system. Its purpose is to give applications and developers the ability to access a set of routines provided by a piece of software or hardware without having to access its source code or understand the details of its internal working mechanisms.
Micro-service orchestration refers to visually arranging already developed micro-service API interfaces (Restful, WebService, Dubbo, gRPC, etc.) according to certain business logic and flows; the micro-service orchestration platform contains a flow scheduling engine that either schedules the orchestrated flow automatically or republishes it as a new micro-service API.
Through micro-service orchestration, developed API services can be recombined and reconstructed without writing any code, which improves the reuse efficiency of API services and enables agile delivery of front-end business or business-system integration. The orchestration platform decouples business systems, data and business logic: orchestration of the business logic is completed by a dedicated micro-service orchestration platform, while each API service only needs to concentrate on its own internal logic.
With the spread of the micro-service architecture, business systems based on it are increasing, and the architecture produces a large number of API service interfaces and interactions between them. At present, developers mainly realize the combination and mutual calling of multiple APIs by writing code, which suffers from low coding efficiency, difficulty in monitoring the calls between interfaces, and the inability to actively recover when a failure occurs.
There are also tools and systems that orchestrate API service interfaces in a workflow manner. The existing tools rely heavily on persistent SQL database technology: state changes and advancement of API nodes are realized on top of a database table structure, so the orchestrated nodes must persist their state data to the database in real time while running. Orchestration systems built this way suffer concurrency problems, because the performance of nodes executed in parallel depends largely on the read/write performance of the database. As the micro-service architecture is promoted and adopted, a complex flow may orchestrate a large number of APIs at the same time, and distributed systems generally demand high performance and high concurrency.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a service visual orchestration system and method under a micro-service architecture. In application they enable visual orchestration and monitoring of the large number of API interfaces of micro-services, construct the orchestrated nodes directly in memory for execution and scheduling, and leave the orchestrated API independent of any database at runtime.
The technical scheme adopted by the invention is as follows:
the service visual arrangement system under the micro service architecture comprises an arrangement flow monitoring module for monitoring the flow in scheduling and running and visually displaying and replaying the running state of the flow; the visual process arranging module is used for arranging a new process by adopting a visual dragging, pulling and dragging mode on the API; the API service management module is used for reissuing the well-arranged flow as a new API and managing and testing the newly issued API; the rule management module is used for carrying out unified rule creation and management on business logic processing required in the process of flow arrangement; the scheduling frequency configuration module is used for setting flow operation frequency for the scheduled flow and scheduling the scheduled flow according to the flow operation frequency; and the flow execution engine module is used for carrying out node construction and pushing execution in the memory according to the arranged node flow, and finally outputting a flow execution result to the calling end.
As an optimization of the above technical solution, the flow information monitored by the orchestration flow monitoring module includes flow running statistics, failed-flow information, normally ended flow information and flow information awaiting compensation. The visual flow orchestration module combines and orchestrates flows including the flow name, flow number, node name, node type, API URL address, API calling method, API result assertion and API input parameters. The test management content of the API service management module covers the API name, the URL address at which the API is published, the API permissions, the API request mode and the API Doc document. The management content of the rule management module covers the rule name, rule number, rule visible scope and rule logic code. The scheduling of the scheduling frequency configuration module covers the schedule name, the schedule time expression and the schedule availability status. The execution content of the flow execution engine module covers flow start, flow suspension, flow waiting, flow recovery, flow compensation and node driving.
A service visualization arrangement method under a micro service architecture comprises the following steps:
s1, carrying out graphical process arrangement on an API service;
s2, constructing a flow instance object corresponding to the arranged flow in the memory;
s3, reading a JSON data model of the arranging process and loading driving logic of various API nodes into a memory;
s4, the front and rear associated nodes of each node of the flow instance are calculated according to the routing lines and the conditions described in the JSON data model;
s5, executing the APIs appointed in the nodes of the flow instance one by one in the memory, asserting the returned result, and selecting the route and the node to be executed in the next step according to the asserting result;
s6, using a queue in a memory to store the current running state of each node of the flow instance and running result data;
S7, when the flow ends, summarizing the call result data of the APIs in each node of the flow instance in memory and outputting it to the calling end, and persisting the instance data generated in memory by the flow instance to a MongoDB database using an asynchronous thread.
As a preferable aspect of the above-mentioned technical solution, in step S1, the specific steps for performing the process arrangement include:
s11, creating a new flow;
s12, drawing a flow chart of the API node in the newly created flow;
s13, dragging the existing API nodes into a flow chart, and linking all the nodes according to the execution sequence by using a routing line;
s14, binding API URLs and setting input and output parameters for each dragged API node;
s15, issuing the arranged flow chart file as a new API, and binding a scheduling strategy.
As a preferred embodiment of the foregoing technical solution, in step S2, the step after the process instance object is built in the memory further includes:
s21, judging the concurrency number of the flow instance, if the concurrency number is limited, exiting execution, otherwise, creating a new ProcessEngine to save global flow variable data;
s22, adding the flow instance object into a global executable queue and monitoring the flow instance object when the flow instance object is executed;
s23, constructing a global flow transaction id to uniformly identify a flow instance and a subsequently operated flow node instance.
As a preferable mode of the above technical solution, the specific steps of step S3 include:
s31, reading a JSON data model of the arranging process, analyzing the JSON data model into Document objects, and then preprocessing data;
s32, analyzing all API types and routing lines in the JSON data model, and loading driving logic of each API node into a memory.
As a preferable mode of the above technical solution, the specific steps of step S4 include:
s41, loading calculation condition logic in each route line and carrying out grammar pre-verification, and if errors occur, exiting the execution of the flow;
s42, compiling and calculating the loaded calculation conditions, and eliminating the nodes without logic according to the calculation result;
s43, carrying out upstream and downstream association of the nodes according to the routing relationship, and recalculating the relationship between the API nodes and the routing lines.
As a preferable mode of the above technical solution, the specific steps of step S5 include:
s51, executing an API-driven unified entry execution method according to the type of the API;
s52, acquiring a returned character string result of the API driving execution method, and storing the returned character string result in a global variable of a process instance;
s53, calling an assertion logic configured in the API node and returning corresponding data according to the assertion logic, returning true if the assertion is successful, and returning false if the assertion fails;
s54, calculating a subsequent route according to the assertion result and acquiring a target node of the route for pushing execution.
As a preferable mode of the above-described aspect, the step of outputting the call result data of the API in step S7 includes:
s71, data screening is carried out according to the result data requirements in the API node;
s72, combining result data to be output into a JSON data packet;
s73, marking the execution success and failure of the whole flow instance according to the execution success and failure of each node, and outputting the running result data to the calling end no matter whether the execution of the flow instance is successful or not.
The beneficial effects of the invention are as follows:
the invention can greatly improve the API execution and scheduling efficiency of the micro-service orchestration system, and simultaneously improve the efficiency in management and monitoring, if the business capability of an enterprise is totally opened to the outside in the form of the API, the service orchestration platform has the function of realizing the rapid recombination of the business capability of the enterprise, namely, a new business innovation point can be realized through dragging, pulling and dragging, and then the service is provided to the outside, the orchestration platform is positioned in an advanced capability recombination center above all business systems, and meanwhile, the communication between the enterprise privately-owned business system and the cloud SaaS system and the data center can be realized through the orchestration platform, so that the development cost of the mutual calling of the API can be greatly reduced, and simultaneously, the simultaneous execution and scheduling of the large-concurrency API can be supported.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic block diagram of a system architecture of the present invention;
FIG. 2 is a schematic flow chart of the method of the present invention.
Detailed Description
The invention is further described with reference to the drawings and specific examples. It should be noted that the description of these examples is for aiding in understanding the present invention, but is not intended to limit the present invention. Specific structural and functional details disclosed herein are merely representative of example embodiments of the invention. This invention may, however, be embodied in many alternate forms and should not be construed as limited to the embodiments set forth herein.
It should be appreciated that the terms first, second, etc. are used merely to distinguish between descriptions and should not be construed as indicating or implying relative importance. Although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments of the present invention.
It should be understood that the term "and/or" merely describes an association relationship between the associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist at the same time, or B exists alone. The term "/and" herein describes another association relationship, indicating that two relationships may exist; for example, A/and B may mean: A exists alone, or A and B exist at the same time. The character "/" herein generally indicates that the associated objects are in an "or" relationship.
It should be understood that in the description of the present invention, the terms "upper", "vertical", "inner", "outer", etc. indicate an orientation or positional relationship in which the inventive product is conventionally placed in use, or an orientation or positional relationship conventionally understood by those skilled in the art. They are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the apparatus or elements referred to must have a specific orientation or be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention.
It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, there are no intervening elements present. Other words used to describe relationships between elements (e.g., "between" versus "directly between", "adjacent" versus "directly adjacent", etc.) should be interpreted in a similar manner.
In the description of the present invention, it should also be noted that, unless explicitly specified and limited otherwise, the terms "disposed," "mounted," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the example embodiments of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises", "comprising", "includes" and/or "including", when used herein, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may in fact be executed substantially concurrently or the figures may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
In the following description, specific details are provided to provide a thorough understanding of example embodiments. However, it will be understood by those of ordinary skill in the art that the example embodiments may be practiced without these specific details. For example, a system may be shown in block diagrams in order to avoid obscuring the examples with unnecessary detail. In other embodiments, well-known processes, structures, and techniques may not be shown in unnecessary detail in order to avoid obscuring the example embodiments.
Example 1:
the present embodiment provides a service visualization orchestration system under a micro-service architecture, as shown in fig. 1, including:
the arrangement flow monitoring module: the flow monitoring module is mainly used for monitoring and analyzing the flow running in the internal memory or the flow which is finished and lasting, counting the times of success and failure and the average execution time of each flow, and is mainly used for rapidly positioning the execution condition of each flow, wherein the monitoring data comprises the times of successful execution of the flow, the times of failure, the times to be compensated, the start time of the flow, the end time of the flow, the server IP where the execution flow is located, log data of the actual execution of the flow and the like.
The visual flow orchestration module: this module graphically drags the relevant APIs into the drawing area and links them with routing lines into logical relationships such as serial execution, parallel execution and asynchronous parallel execution. Each node must specify the name of the API to be called, the Method of the API, the URL address of the API, the input parameters of the API, the Header information of the API, the assertion logic for the API call result, and so on. When drawing of the flow chart is complete, it is saved as a string of JSON data and associated with the flow execution engine module.
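Assuming field names drawn from the attributes listed above (flow name, node type, API URL, calling method, assertion, input parameters), the JSON saved for a drawn flow chart might look like the following sketch; the exact schema is not given in the text:

```json
{
  "flowName": "orderQueryFlow",
  "flowNo": "F-001",
  "nodes": [
    {
      "nodeName": "queryOrder",
      "nodeType": "restful",
      "apiUrl": "http://example.com/api/order",
      "apiMethod": "GET",
      "assertion": "$.code == 200",
      "inputParams": { "orderId": "${flow.orderId}" }
    }
  ],
  "routes": [
    { "from": "start", "to": "queryOrder", "condition": "true" },
    { "from": "queryOrder", "to": "end", "condition": "true" }
  ]
}
```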
API service management module: this module publishes an orchestrated flow as a new API interface. When a user calls the API service, the flow execution engine immediately runs the flow once in memory and outputs the running result to the calling end. One flow is allowed to publish several API interfaces with different URLs at the same time. The published API interfaces are managed uniformly in the API service management module, which supports publishing, deleting and modifying APIs, configuring input parameters, testing APIs, managing API permissions, and so on.
Rule management module: data conversion logic and business code logic exist between the nodes of a business orchestration. Unified management of the business logic code is realized through the rule management module, and the rule code is written in pure Java syntax. Rule management covers the rule name, rule number, rule visible scope and rule logic code, and the rules created here are selected and associated from within the visual flow orchestration module.
Scheduling frequency configuration module: scheduling frequency configuration mainly sets an execution frequency and interval for an orchestrated flow, according to which the flow scheduling engine automatically starts and executes flow instances in memory. The scheduling module covers the schedule name, the schedule time expression, the schedule availability status, the creator, the creation time, and so on.
The flow execution engine module: this module is mainly responsible for loading the flow model data. It first loads all node data of a flow model into memory according to the flow's unique ID, stores them in Maps classified by API node type, and then pushes the flow forward in memory according to the following steps:
1. start the orchestration execution engine;
2. search memory for the flow; if the flow model exists, immediately create the main instance object of the flow;
3. immediately search for the start node of the flow, and prompt an error if the flow has no start node;
4. immediately search for the routing lines following the start node and obtain the configuration of all subsequent routes of the node;
5. immediately evaluate the calculation logic bound in each route; if the logic holds, find the target node and load the target node's driver;
6. execute the business driving logic of the target node and save the node's state data into the in-memory Map;
7. after the node executes successfully, repeat steps 4-6 to push the whole flow forward in memory; the run of the whole flow is not finished until the end node is reached;
8. start an asynchronous thread to persist all running state data and result data in memory into the MongoDB database.
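The in-memory push loop of steps 2-7 can be sketched as follows; the hard-coded route table, node names and the single-successor simplification are illustrative assumptions, not the engine's actual data structures:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PushEngine {
    // Stand-in for the loaded flow model: each node's downstream targets
    static final Map<String, List<String>> ROUTES = Map.of(
        "start", List.of("callOrderApi"),
        "callOrderApi", List.of("end"));

    static List<String> run() {
        Map<String, String> state = new LinkedHashMap<>(); // step 6: in-memory Map
        List<String> trace = new ArrayList<>();
        String node = "start";                              // step 3: start node
        while (node != null) {
            trace.add(node);
            state.put(node, "SUCCESS");                     // step 6: node state
            List<String> next = ROUTES.get(node);           // steps 4-5: routes
            node = (next == null) ? null : next.get(0);     // step 7: until end node
        }
        return trace;                                       // step 8 would persist state
    }

    public static void main(String[] args) {
        System.out.println(run()); // [start, callOrderApi, end]
    }
}
```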
When the method is implemented, a user can enter the system's UI through a browser to carry out visual orchestration of flows, and the system finally persists all running state data and result data in memory into the MongoDB database.
The invention rapidly orchestrates and aggregates large-scale API services in a graphical manner, forming a JSON description of the flow model; when the orchestration engine runs, the state data of the flow nodes is stored entirely in memory.
Example 2:
The present embodiment provides a service visualization orchestration method under a micro-service architecture, as shown in fig. 2, including the following steps:
s1, carrying out graphical process arrangement on an API service;
s2, constructing a flow instance object corresponding to the arranged flow in the memory;
s3, reading a JSON data model of the arranging process and loading driving logic of various API nodes into a memory;
s4, the front and rear associated nodes of each node of the flow instance are calculated according to the routing lines and the conditions described in the JSON data model;
s5, executing the APIs appointed in the nodes of the flow instance one by one in the memory, asserting the returned result, and selecting the route and the node to be executed in the next step according to the asserting result;
s6, using a queue in a memory to store the current running state of each node of the flow instance and running result data;
S7, when the flow ends, summarizing the call result data of the APIs in each node of the flow instance in memory and outputting it to the calling end, and persisting the instance data generated in memory by the flow instance to a MongoDB database using an asynchronous thread.
In step S1, the specific steps for performing the process arrangement include:
s11, creating a new flow;
s12, drawing a flow chart of the API node in the newly created flow;
s13, dragging the existing API nodes into a flow chart, and linking all the nodes according to the execution sequence by using a routing line;
s14, binding API URLs and setting input and output parameters for each dragged API node;
s15, issuing the arranged flow chart file as a new API, and binding a scheduling strategy.
In step S2, the steps after the process instance object is built in the memory further include:
s21, judging the concurrency number of the flow instance, if the concurrency number is limited, exiting execution, otherwise, creating a new ProcessEngine to save global flow variable data;
s22, adding the flow instance object into a global executable queue and monitoring the flow instance object when the flow instance object is executed;
s23, constructing a global flow transaction id to uniformly identify a flow instance and a subsequently operated flow node instance.
The specific steps of the step S3 include:
s31, reading a JSON data model of the arranging process, analyzing the JSON data model into Document objects, and then preprocessing data;
s32, analyzing all API types and routing lines in the JSON data model, and loading driving logic of each API node into a memory.
The specific steps of the step S4 include:
s41, loading calculation condition logic in each route line and carrying out grammar pre-verification, and if errors occur, exiting the execution of the flow;
s42, compiling and calculating the loaded calculation conditions, and eliminating the nodes without logic according to the calculation result;
s43, carrying out upstream and downstream association of the nodes according to the routing relationship, and recalculating the relationship between the API nodes and the routing lines.
The specific steps of the step S5 include:
s51, executing an API-driven unified entry execution method according to the type of the API;
s52, acquiring a returned character string result of the API driving execution method, and storing the returned character string result in a global variable of a process instance;
s53, calling an assertion logic configured in the API node and returning corresponding data according to the assertion logic, returning true if the assertion is successful, and returning false if the assertion fails;
s54, calculating a subsequent route according to the assertion result and acquiring a target node of the route for pushing execution.
The step of outputting call result data of the API in step S7 includes:
s71, data screening is carried out according to the result data requirements in the API node;
s72, combining result data to be output into a JSON data packet;
s73, marking the execution success and failure of the whole flow instance according to the execution success and failure of each node, and outputting the running result data to the calling end no matter whether the execution of the flow instance is successful or not.
By adopting this system, node data can be recovered well, and failed nodes can be restored and re-run.
The invention follows the BPMN 2.0 specification in the design of its flow charts, which helps personnel already familiar with workflows to orchestrate and draw API services rapidly. For monitoring of orchestrated flows it realizes complete visual flow playback, visual data expediting and real-time API call monitoring, and can count the average performance, run count and failure count of an orchestrated flow.
Example 3:
As an optimization of the above embodiment, the flow engine interface logic and the drive interface of the orchestration node are defined in code. [The code listings are reproduced as figures in the original publication.]
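Since the interface listings are given only as figures, the following is merely a guess at the general shape of an orchestration-node drive interface consistent with steps S51-S52; every name below is hypothetical, not the patent's actual code.

```python
from abc import ABC, abstractmethod

class ApiNodeDriver(ABC):
    """Hypothetical drive interface of an orchestration node: each API
    type registers one driver exposing a unified entry execution
    method that returns the raw API result as a string (cf. S51-S52)."""

    @abstractmethod
    def execute(self, node_config: dict, flow_vars: dict) -> str:
        """Call the API the node is bound to and return the result."""

class HttpApiDriver(ApiNodeDriver):
    # A trivial driver used only to show how the interface is filled in;
    # a real driver would issue the HTTP call described in node_config.
    def execute(self, node_config: dict, flow_vars: dict) -> str:
        return '{"status": 200}'
```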
The invention is not limited to the alternative embodiments described above; any person may derive various other forms of products in light of the present invention. The above detailed description should not be construed as limiting the scope of the invention, which is defined by the claims; the description may be used to interpret the claims.

Claims (6)

1. A service visual orchestration method under a micro-service architecture, characterized by comprising the following steps:
S1, performing graphical flow orchestration on API services;
S2, constructing in memory a flow instance object corresponding to the orchestrated flow;
S3, reading the JSON data model of the orchestration flow and loading the driving logic of each type of API node into memory, which specifically comprises:
S31, reading the JSON data model of the orchestration flow, parsing it into a Document object, and then preprocessing the data;
S32, parsing all API types and routing lines in the JSON data model, and loading the driving logic of each API node into memory;
S4, calculating the upstream and downstream associated nodes of each node of the flow instance according to the routing lines and conditions described in the JSON data model;
S5, executing in memory, one by one, the APIs specified in the nodes of the flow instance, asserting the returned results, and selecting the route and the next node to execute according to the assertion results;
S6, using an in-memory queue to store the current running state and running result data of each node of the flow instance;
and S7, when the flow ends, summarizing in memory the API call result data of each node of the flow instance and outputting it to the calling end, and persisting the instance data generated in memory by the flow instance to a MongoDB database in an asynchronous thread.
2. The service visual orchestration method under a micro-service architecture according to claim 1, wherein in step S1 the specific steps of flow orchestration comprise:
S11, creating a new flow;
S12, drawing a flow chart of API nodes in the newly created flow;
S13, dragging existing API nodes into the flow chart, and linking the nodes in execution order with routing lines;
S14, binding an API URL and setting input and output parameters for each dragged-in API node;
S15, publishing the orchestrated flow chart file as a new API, and binding a scheduling strategy.
3. The service visual orchestration method under a micro-service architecture according to claim 1, wherein in step S2 the steps after the flow instance object is built in memory further comprise:
S21, checking the concurrency count of flow instances: if the concurrency limit is reached, exiting execution; otherwise, creating a new ProcessEngine to hold the global flow variable data;
S22, adding the flow instance object to a global executable queue and monitoring it while it executes;
S23, constructing a global flow transaction id to uniformly identify the flow instance and the flow node instances subsequently run.
4. The service visual orchestration method under a micro-service architecture according to claim 1, wherein the specific steps of step S4 comprise:
S41, loading the condition logic in each routing line and performing syntax pre-verification, and exiting the execution of the flow if an error occurs;
S42, compiling and evaluating the loaded conditions, and eliminating the logically unreachable nodes according to the evaluation results;
S43, associating the nodes upstream and downstream according to the routing relationships, and recalculating the relationships between the API nodes and the routing lines.
5. The service visual orchestration method under a micro-service architecture according to claim 1, wherein the specific steps of step S5 comprise:
S51, invoking the unified entry execution method of the API driver according to the API type;
S52, obtaining the string result returned by the API driver's execution method and storing it in a global variable of the flow instance;
S53, invoking the assertion logic configured in the API node and returning the corresponding data according to it: true if the assertion succeeds, false if it fails;
S54, calculating the follow-up route according to the assertion result and obtaining the route's target node to push execution forward.
6. The service visual orchestration method under a micro-service architecture according to claim 1, wherein the step of outputting the API call result data in step S7 comprises:
S71, screening the data according to the result data requirements in the API nodes;
S72, combining the result data to be output into a JSON data packet;
S73, marking the whole flow instance as succeeded or failed according to the success or failure of each node, and outputting the running result data to the calling end regardless of whether the flow instance executed successfully.
CN201911416158.6A 2019-12-31 2019-12-31 Service visual arrangement system and method under micro service architecture Active CN111142867B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911416158.6A CN111142867B (en) 2019-12-31 2019-12-31 Service visual arrangement system and method under micro service architecture

Publications (2)

Publication Number Publication Date
CN111142867A CN111142867A (en) 2020-05-12
CN111142867B (en) 2024-04-02

Family

ID=70522740

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911416158.6A Active CN111142867B (en) 2019-12-31 2019-12-31 Service visual arrangement system and method under micro service architecture

Country Status (1)

Country Link
CN (1) CN111142867B (en)

Families Citing this family (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111694888A (en) * 2020-06-12 2020-09-22 谷云科技(广州)有限责任公司 Distributed ETL data exchange system and method based on micro-service architecture
CN111722929A (en) * 2020-06-18 2020-09-29 南京龙猫商业智能科技股份有限公司 Micro-service orchestration engine management method based on PaaS
CN113923250B (en) * 2020-07-07 2023-04-07 华为技术有限公司 Method, device and system for assisting network service arrangement
CN112015372B (en) * 2020-07-24 2022-12-23 北京百分点科技集团股份有限公司 Heterogeneous service arranging method, processing method and device and electronic equipment
CN112083912B (en) * 2020-08-17 2024-03-12 山东中创软件商用中间件股份有限公司 Service orchestration intermediate result processing method, device, equipment and storage medium
CN112085591B (en) * 2020-09-03 2023-11-07 广州嘉为科技有限公司 Visual arrangement method for running batch at bank based on graph theory
CN112068825A (en) * 2020-09-23 2020-12-11 山东泽鹿安全技术有限公司 Visual linkage arrangement method capable of realizing isomerization nodes
CN112256258A (en) * 2020-10-22 2021-01-22 北京神州数字科技有限公司 Micro-service arrangement automatic code generation method and system
CN112462629A (en) * 2020-11-06 2021-03-09 蘑菇物联技术(深圳)有限公司 Interpretation method of controller control algorithm
CN112380199A (en) * 2020-11-10 2021-02-19 珠海市新德汇信息技术有限公司 Big data end-to-end data reconciliation arrangement method
CN112418784B (en) * 2020-11-11 2021-11-30 北京京航计算通讯研究所 Service arranging and executing system and method
CN112379884B (en) * 2020-11-13 2024-01-12 李斌 Method and system for realizing flow engine based on Spark and parallel memory calculation
CN112506498A (en) * 2020-11-30 2021-03-16 广东电网有限责任公司 Intelligent visual API arrangement method, storage medium and electronic equipment
CN112486073B (en) * 2020-12-03 2022-04-19 用友网络科技股份有限公司 Robot control method, control system and readable storage medium
CN112558934B (en) * 2020-12-10 2024-01-05 中盈优创资讯科技有限公司 Control subtask engine device based on arranging control flow business opening
CN112698878A (en) * 2020-12-18 2021-04-23 浙江中控技术股份有限公司 Calculation method and system based on algorithm microservice
CN112685011B (en) * 2020-12-21 2022-06-07 福建新大陆软件工程有限公司 AI application visualization arrangement method based on Vue
CN112685004B (en) * 2020-12-21 2022-08-05 福建新大陆软件工程有限公司 Online component arrangement calculation method and system based on real-time stream calculation
CN112966202A (en) * 2021-03-03 2021-06-15 浪潮云信息技术股份公司 Method for realizing integration of multiple government affair services
CN113254004B (en) * 2021-04-13 2023-02-21 西安热工研究院有限公司 Data statistics platform based on WF4.0 framework
CN113157268B (en) * 2021-04-26 2024-03-22 绵阳市智慧城市产业发展有限责任公司 Equipment state processing system combining flow engine and Internet of things
CN113268319A (en) * 2021-05-07 2021-08-17 中国电子科技集团公司第五十四研究所 Business process customization and distributed process scheduling method based on micro-service architecture
CN113238844A (en) * 2021-05-17 2021-08-10 上海中通吉网络技术有限公司 Service arrangement execution path playback method
CN113726871B (en) * 2021-08-27 2024-02-02 猪八戒股份有限公司 Scheduling method and system for automatic code release
CN113791766B (en) * 2021-09-16 2023-05-16 易保网络技术(上海)有限公司 Method for combining data interfaces, electronic device and readable storage medium
CN113805870B (en) * 2021-09-18 2024-01-30 上海熙菱信息技术有限公司 BFF architecture-based service interface arrangement method and system
WO2023159573A1 (en) * 2022-02-28 2023-08-31 西门子股份公司 Interface mapping method and apparatus, and electronic device and computer-readable medium
CN114691233A (en) * 2022-03-16 2022-07-01 中国电子科技集团公司第五十四研究所 Remote sensing data processing plug-in distributed scheduling method based on workflow engine
CN115202641B (en) * 2022-09-13 2023-02-03 深圳联友科技有限公司 Method for mixed task arrangement engine without limit of development language
CN115509523B (en) * 2022-11-24 2023-03-03 湖南创星科技股份有限公司 API service rapid construction method and system
CN115955408A (en) * 2022-12-23 2023-04-11 上海基煜基金销售有限公司 Application arrangement service system and method based on Conductor framework
CN116860362B (en) * 2023-07-05 2024-03-19 广州市玄武无线科技股份有限公司 Plug-in transaction management method and device applied to flow programming engine
CN117806611B (en) * 2024-02-29 2024-05-14 鱼快创领智能科技(南京)有限公司 Method for creating new service interface based on visual automatic arrangement of interface discovery

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102034151A (en) * 2010-12-13 2011-04-27 东莞市高鑫机电科技服务有限公司 SOA-based enterprise collaboration management system service flow design method and system
CN105512304A (en) * 2015-12-11 2016-04-20 西安道同信息科技有限公司 Method for generating internet applications on line, system integration method and supporting platform
CN106846226A (en) * 2017-01-19 2017-06-13 湖北省基础地理信息中心(湖北省北斗卫星导航应用技术研究院) A kind of space time information assembling management system
CN108279866A (en) * 2018-01-24 2018-07-13 马上消费金融股份有限公司 A kind of the layout execution method, apparatus and medium of operation flow
CN108681451A (en) * 2018-05-14 2018-10-19 浪潮软件集团有限公司 Visual kubernets micro-service arrangement implementation method
CN109634561A (en) * 2018-10-16 2019-04-16 阿里巴巴集团控股有限公司 A kind of online visual programming method and device
CN110442481A (en) * 2019-07-10 2019-11-12 阿里巴巴集团控股有限公司 Method for processing business, Service Component container and electronic equipment
CN110532020A (en) * 2019-09-04 2019-12-03 中国工商银行股份有限公司 A kind of data processing method of micro services layout, apparatus and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1639458A4 (en) * 2003-06-12 2010-05-05 Reuters America Business process automation
US10389602B2 (en) * 2016-12-05 2019-08-20 General Electric Company Automated feature deployment for active analytics microservices

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant