CN115712375A - Service orchestration method, flow processing method based on service orchestration, and computing device

Info

Publication number: CN115712375A
Application number: CN202211235877.XA
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 方利, 张中迪
Applicant: Beijing Cheerbright Technologies Co Ltd
Legal status: Pending

Abstract

The invention discloses a service orchestration method, a flow processing method based on service orchestration, a computing device and a storage medium. The service orchestration method comprises the following steps: in response to a first operation triggering service orchestration, displaying a service orchestration interface, wherein the service orchestration interface comprises an element list area and a canvas work area, the element list area comprises one or more element lists, each element list comprises at least one element, and each element corresponds to data in a capability table; in response to a second operation of dragging each element related to the current flow from the element list area into the canvas work area, displaying in the canvas work area the nodes corresponding to the dragged elements; in response to a third operation of configuring information for each node, determining the node information of each node; in response to dragging operations between the nodes in the canvas work area, determining the connection relationships among the nodes; and generating flow metadata of the current flow according to the node information and connection relationships of the nodes, so as to complete the corresponding service orchestration.

Description

Service orchestration method, flow processing method based on service orchestration, and computing device
Technical Field
The present invention relates to the field of internet technologies, and in particular, to a service orchestration method, a service orchestration-based flow processing method, a computing device, and a storage medium.
Background
In a traditional foreground-background architecture, all projects are independent of one another and functions are often duplicated, which makes the projects increasingly cumbersome and lowers development efficiency. It is therefore necessary to introduce an intermediate layer, the middle platform, to provide common resources for all projects.
The foreground usually includes not only the interface that interacts directly with users, but also the various server-side business logics that respond to user requests in real time; the background consists of the management systems, configuration systems and the like used by internal operators; and the middle platform consolidates certain general-purpose capabilities and provides them to the foreground, typically including the business middle platform, technical middle platform, data middle platform, algorithm middle platform and so on.
The business middle platform consolidates the services common to all projects into a general service platform. When applied to e-commerce, the middle-platform architecture can be built with standardized SOPs (Standard Operating Procedures) plus reserved extension points. This approach suits scenarios with many business lines and enables efficient reuse of resources, but it places high professional demands on business architects, and because an SOP is a fixed standard process it lacks flexibility and supports new business poorly.
To better realize the middle-platform model, the capabilities common to the multiple business lines of the e-commerce middle platform can be abstracted through domain-driven modeling, extension points can be reserved, and the capabilities can then be combined through service orchestration. At present there are mainly three common service orchestration approaches. The first is to hard-code the orchestration between services; however, this violates the open-closed principle of software development, is unfriendly to testers and product managers, is only suitable for simple scenarios, and scales poorly.
The second is to separate the implementation of tasks from the collaboration relationships between tasks by using a business process engine such as jBPM (Java Business Process Management) or the workflow engine Activiti, and the flexibility of the system can be improved through such orchestration. However, most tasks scheduled by jBPM and Activiti are manual approval tasks, which means slow task flow and low throughput, whereas most micro-services in the e-commerce domain are programmed automatic tasks that flow efficiently and require high system throughput. In addition, jBPM and Activiti have low reliability: they use a single-point architecture and a synchronous request-response calling mode and depend heavily on the database, which is not enough to support complex e-commerce scenarios and makes them difficult to apply to service orchestration for an e-commerce middle platform.
The third is to adopt Netflix Conductor (an open-source micro-service orchestration framework) to implement workflows and distributed scheduling, with excellent performance. However, its core code is tightly coupled with the user's DSL (Domain Specific Language), its extensibility is insufficient, there are few internal application scenarios, it has no Chinese interface and does not support visual orchestration, so there is still a gap if it is to be used as an enterprise-level service orchestration platform.
Therefore, a new service orchestration scheme is needed to optimize the above process.
Disclosure of Invention
To this end, the present invention provides a service orchestration scheme in an attempt to solve, or at least alleviate, the above-presented problems.
According to an aspect of the present invention, there is provided a service orchestration method, comprising the following steps: first, in response to a first operation triggering service orchestration, displaying a service orchestration interface, wherein the service orchestration interface comprises an element list area and a canvas work area, the element list area comprises one or more element lists, each element list comprises at least one element, and each element corresponds to data in a capability table; in response to a second operation of dragging each element related to the current flow from the element list area into the canvas work area, displaying in the canvas work area the nodes corresponding to the dragged elements; in response to a third operation of configuring information for each node, determining the node information of each node; in response to dragging operations between the nodes in the canvas work area, determining the connection relationships among the nodes; and generating flow metadata of the current flow according to the node information and connection relationships of the nodes, so as to complete the corresponding service orchestration.
Optionally, in the service orchestration method according to the present invention, the service orchestration interface further includes a basic configuration area, and the step of determining the node information of each node in response to a third operation of performing information configuration on each node includes: for any node, responding to the selection operation of the node, and displaying an information configuration page corresponding to the node in a basic configuration area; and determining the node information of the node in response to the configuration operation of the node information in the information configuration page.
Optionally, in the service orchestration method according to the present invention, the step of determining a connection relationship between nodes in response to a dragging operation between nodes in the canvas workspace, includes: responding to the dragging operation from any node to another node, and displaying a directed connecting line from the node to the another node in the canvas working area according to the dragging direction when the preset condition is met; and determining the connection relation among the nodes based on the directed connection lines among the nodes.
Optionally, in the service orchestration method according to the present invention, the service orchestration interface further includes a toolbar area, the toolbar area includes a flow saving control, and the method further includes: and responding to the clicking operation of the flow saving control, and storing the flow metadata of the current flow into the flow table so as to save the current flow.
Optionally, in the service orchestration method according to the present invention, further comprising: and binding and associating the current process with the corresponding service identity, and storing the associated information into a process and service identity relation table.
Optionally, in the service orchestration method according to the present invention, further comprising: and responding to the issuing operation of the current flow, writing the flow information of the current flow into a preset service middleware, and storing according to a storage path corresponding to the flow code of the current flow, wherein the flow information comprises basic information, associated information and flow metadata.
According to another aspect of the present invention, there is provided a service orchestration-based flow processing method, comprising the following steps: first, a business system is started, and the classes labeled with target annotations are scanned by the business system so as to extract the relevant fields and store them in a capability table; a connection with a preset service middleware is established based on a flow execution engine embedded in the business system; each child node under the corresponding storage path is read from the service middleware, and for each child node a listener of the corresponding flow is created for flow monitoring; and the flow information of the flow corresponding to each child node is parsed and written into the virtual machine memory, wherein the flow information comprises basic information, association information and flow metadata.
Optionally, in the service orchestration-based flow processing method according to the present invention, the target annotations include a capability annotation and an extension annotation, and the step of scanning the classes labeled with the target annotations through the business system to extract the relevant fields and store them in the capability table comprises: scanning the classes labeled with the capability annotation through the business system, parsing the capability fields in the capability annotation, and storing them in the capability table; and scanning the classes labeled with the extension annotation through the business system, parsing the extension fields in the extension annotation, and storing them in the capability table.
Optionally, in the service orchestration-based flow processing method according to the present invention, the method further includes: creating the capability annotation in advance, wherein the capability fields in the capability annotation comprise a domain code, a capability name and a capability description; and, in the code of the business system, adding the capability annotation to the capability classes formed by abstraction and completing the attribute information of the capability annotation.
Optionally, in the flow processing method based on service orchestration according to the present invention, the method further includes: if the monitor monitors that the process information of the corresponding process is changed, the monitor acquires the changed process information from the service middleware; and analyzing the changed flow information and writing the flow information into the memory of the virtual machine.
Optionally, in the flow processing method based on service orchestration according to the present invention, the method further includes: and reading the process metadata from the memory of the virtual machine, and writing the process metadata into a context container of the process execution engine.
Optionally, in the flow processing method based on service orchestration according to the present invention, the method further includes: determining a service identity; and calling the process execution engine to search the process corresponding to the service identity from the process and service identity relation table through the process execution engine, and operating the searched process.
According to yet another aspect of the invention, there is provided a computing device comprising: at least one processor; and a memory storing program instructions, wherein the program instructions are configured to be adapted to be executed by the at least one processor, the program instructions comprising instructions for performing the service orchestration method or the service orchestration based flow processing method as described above.
According to still another aspect of the present invention, there is provided a readable storage medium storing program instructions, which, when read and executed by a computing device, cause the computing device to execute the service orchestration method or the service orchestration-based flow processing method as described above.
According to the service orchestration scheme of the invention, when performing service orchestration, elements can be dragged and nodes connected through the service orchestration interface, which provides a visual service orchestration workbench. Middle-platform capabilities can be efficiently reused through orchestration and recombination, the extensibility of the system is improved, orchestration efficiency is greatly increased, the workload of orchestration at the code and configuration-file level is reduced, errors are less likely to occur, and the scheme is friendly to beginners. When a flow is processed based on service orchestration, a listener is created via the service middleware to monitor flow changes, which solves the data consistency and reliability problems between the database and the flow execution engine; the mechanism of writing flow information into the virtual machine memory effectively reduces database query overhead, guarantees the efficiency of reading flow information, and improves flow execution performance.
According to the technical scheme of the invention, a service orchestration workbench is provided on the one hand, and a flow execution technique for orchestrated services is provided on the other hand. Using service orchestration, the flows of the middle platform can be decoupled from the business logic, truly achieving capability reuse and flexibility, helping the middle platform reduce costs and improve efficiency, and ultimately enhancing the industry competitiveness of the enterprise.
Drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings, which are indicative of various ways in which the principles disclosed herein may be practiced, and all aspects and equivalents thereof are intended to be within the scope of the claimed subject matter. The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description read in conjunction with the accompanying drawings. Throughout this disclosure, like reference numerals generally refer to like parts or elements.
FIG. 1 shows a block diagram of a computing device 100, according to an embodiment of the invention;
FIG. 2 illustrates a flow diagram of a method 200 for service orchestration based flow processing according to one embodiment of the invention;
FIG. 3 illustrates a flow diagram of a service orchestration method 300 according to one embodiment of the invention; and
FIG. 4 shows a schematic diagram of a service orchestration interface according to one embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
FIG. 1 shows a block diagram of a computing device 100, according to one embodiment of the invention.
As shown in FIG. 1, in a basic configuration 102, a computing device 100 typically includes a system memory 106 and one or more processors 104. A memory bus 108 may be used for communication between the processor 104 and the system memory 106.
Depending on the desired configuration, the processor 104 may be any type of processing unit, including but not limited to: a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. The processor 104 may include one or more levels of cache, such as a level one cache 110 and a level two cache 112, a processor core 114, and registers 116. Example processor cores 114 may include arithmetic logic units (ALUs), floating point units (FPUs), digital signal processing cores (DSP cores), or any combination thereof. An example memory controller 118 may be used with the processor 104, or in some implementations the memory controller 118 may be an internal part of the processor 104.
Depending on the desired configuration, system memory 106 may be any type of memory, including but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. System memory 106 may include an operating system 120, one or more applications 122, and program data 124. In some implementations, the application 122 can be arranged to execute instructions on an operating system with the program data 124 by the one or more processors 104.
Computing device 100 also includes a storage device 132, storage device 132 including removable storage 136 and non-removable storage 138.
Computing device 100 may also include a storage interface bus 134. The storage interface bus 134 enables communication from the storage devices 132 (e.g., removable storage 136 and non-removable storage 138) to the basic configuration 102 via the bus/interface controller 130. Operating system 120, applications 122, and at least a portion of program data 124 may be stored on removable storage 136 and/or non-removable storage 138, and loaded into system memory 106 via storage interface bus 134 and executed by one or more processors 104 when computing device 100 is powered on or applications 122 are to be executed.
Computing device 100 may also include an interface bus 140 that facilitates communication from various interface devices (e.g., output devices 142, peripheral interfaces 144, and communication devices 146) to the basic configuration 102 via the bus/interface controller 130. The example output device 142 includes a graphics processing unit 148 and an audio processing unit 150. They may be configured to facilitate communication with various external devices, such as a display or speakers, via one or more a/V ports 152. Example peripheral interfaces 144 may include a serial interface controller 154 and a parallel interface controller 156, which may be configured to facilitate communication with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device) or other peripherals (e.g., printer, scanner, etc.) via one or more I/O ports 158. The example communication device 146 may include a network controller 160, which may be arranged to facilitate communications with one or more other computing devices 162 over a network communication link via one or more communication ports 164.
The network communication link may be one example of a communication medium. Communication media may typically be embodied by computer readable instructions, data structures, or program modules in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media. A "modulated data signal" may be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired network or a dedicated wired connection, and various wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR) or other wireless media. The term computer readable media as used herein may include both storage media and communication media.
Computing device 100 may be implemented as a personal computer including both desktop and notebook computer configurations. Of course, computing device 100 may also be implemented as part of a small-form factor portable (or mobile) electronic device such as a cellular telephone, a digital camera, a Personal Digital Assistant (PDA), a personal media player device, a wireless web-watch device, a personal headset, an application specific device, or a hybrid device that include any of the above functions. And may even be implemented as a server, such as a file server, a database server, an application server, a WEB server, and so forth. The embodiments of the present invention are not limited thereto.
In an embodiment in accordance with the invention, computing device 100 is configured to execute a service orchestration based flow process method 200 or a service orchestration method 300 in accordance with the invention. Wherein the application 122 disposed on the operating system comprises a plurality of program instructions for executing the service orchestration based flow processing method 200 according to the present invention, or the service orchestration method 300, which can instruct the processor 104 to execute the service orchestration based flow processing method 200 according to the present invention, or the service orchestration method 300 according to the present invention, so that the computing device 100 processes a flow by executing the service orchestration based flow processing method 200 according to the present invention, or orchestrates a service by executing the service orchestration method 300 according to the present invention.
FIG. 2 shows a flow diagram of a method 200 for service orchestration based flow processing according to one embodiment of the invention. The flow process method 200 based on service orchestration may be performed in a computing device (e.g., computing device 100 described above).
As shown in fig. 2, the method 200 begins at step S210. In step S210, a business system is started, and the classes labeled with the target annotations are scanned by the business system to extract the relevant fields and store them in a capability table. The target annotations include a capability annotation and an extension annotation.
According to an embodiment of the present invention, before the business system is started, the method 200 further includes creating the capability annotation in advance, adding the capability annotation to the abstracted capability classes in the code of the business system, and completing the attribute information of the capability annotation. In this embodiment, the capability fields in the capability annotation include a domain code, a capability name, and a capability description.
For example, the Java annotation @BaseAbility is created as the capability annotation to describe a capability of the middle platform in an e-commerce business, and includes the capability fields domainCode, code, name and desc. Here domainCode represents the domain code, e.g. the domain code of the order domain = order and the domain code of the promotion domain = movement; code, name and desc respectively represent the capability code, the capability name and the capability description. For the scenario of sending a short message after successful payment, code = order_pay_success_sms, name = payment success SMS, and desc = handle the short message sent to the user after payment succeeds.
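A minimal sketch of what such a capability annotation definition might look like is given below; the retention policy, target and default value are assumptions added for illustration, since the description only names the annotation and its four fields:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Capability annotation carrying the four fields described above.
// RUNTIME retention is assumed so that the business system can read it by reflection at startup.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface BaseAbility {
    String domainCode();          // domain code, e.g. "order"
    String code();                // capability code, e.g. "order_pay_success_sms"
    String name();                // capability name
    String desc() default "";     // capability description
}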
After the capability annotation has been created, the @BaseAbility annotation is added to the abstracted capability classes of the e-commerce middle platform, where a capability class is a class that handles a business capability and, by convention, has a class name ending in Ability. For example, in the scenario of sending a short message after successful payment, the class name is PaySuccessSmsAbility and the class carries the @BaseAbility annotation; the pseudo code is as follows:
@BaseAbility(domainCode = order, code = order_pay_success_sms, name = payment success SMS, desc = handle the short message sent to the user after payment succeeds)
public class PaySuccessSmsAbility {
}
There are many similar classes, such as the inventory-deduction capability class DecreaseStockAbility, and so on. The attribute information of the capability annotation @BaseAbility is simply the information of the capability fields domainCode, code, name and desc; in other words, filling in this field information is what is meant by completing the attribute information.
The extension annotation differs from the capability annotation in that it is a component of the COLA (Clean Object-oriented and Layered Architecture) framework, on which the business system depends; therefore, the extension annotation can be referenced directly without additional creation. The extension annotation is typically @Extension and includes the extension fields useCase, bizId and scenario.
Here useCase identifies the scenario, i.e. which foreground mall an order in the e-commerce business comes from; for example, useCase = 10 represents an order from the car mall and useCase = 11 represents an order from the points mall. bizId identifies the business line, i.e. which business line an order in the e-commerce middle platform belongs to; for example, bizId = 10 represents a manufacturer's flagship-store order and bizId = 11 represents a vehicle-business order. Finally, scenario identifies the kind of order generated by the commodity, e.g. whether it is a physical logistics order or a coupon code for a virtual membership card. This distinction is made because the businesses overlap to some extent; for example, a car under bizId = 11 may be sold in both the points mall and the car mall, but the presentation form and the subsequent order flow of the two differ.
The @Extension annotation is generally attached to an implementation class of an extension point. In the scenario of sending a short message after successful payment, the capability class PaySuccessSmsAbility depends on the extension point class GetSmsContentExt, which is an interface with several implementation classes, for example GetPayedSmsContentExtension for the content of the payment-success short message; its pseudo code is as follows:
@Extension(useCase = xx, bizId = xx, scenario = xx)
public class GetPayedSmsContentExtension implements GetSmsContentExt {
}
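For illustration, the extension point referenced above can be pictured as an ordinary Java interface; the method signature below is an assumption, since the description does not spell it out:

// Assumed shape of the extension point interface: each business identity
// (useCase / bizId / scenario) supplies its own short-message content.
public interface GetSmsContentExt {
    String getSmsContent(String orderNo);
}

The capability class PaySuccessSmsAbility would depend on this interface, and the COLA extension mechanism would select the implementation matching the current business identity at runtime.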
according to an embodiment of the present invention, the classes labeled with the target annotations may be scanned through the business system in the following manner, so as to extract the relevant fields and store them in the capability table. In this embodiment, the business system scans the classes labeled with the capability annotation, parses the capability fields in the capability annotation and stores them in the capability table; the business system also scans the classes labeled with the extension annotation, parses the extension fields in the extension annotation and stores them in the capability table.
It can be seen that the data in the capability table is the data extracted by parsing the capability fields and extension fields. The capability table is usually stored in a database, and each record in it includes the domain to which the capability belongs, the code, name and description of the capability, the class name and method name corresponding to the capability, the creation time, the update time, and so on. The following is exemplary code for creating a capability table:
CREATE TABLE `trade_domain_service` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'auto-increment id',
  `domain_code` varchar(50) NOT NULL COMMENT 'domain code',
  `service_code` varchar(50) DEFAULT NULL COMMENT 'service code',
  `service_name` varchar(100) DEFAULT NULL COMMENT 'domain service / domain capability',
  ……
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_service_code` (`service_code`),
  KEY `idx_route_type` (`route_type`)
) ENGINE=InnoDB AUTO_INCREMENT=42 DEFAULT CHARSET=utf8mb4 COMMENT='domain information detail table';
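As one possible illustration of the scanning step (not a mandated implementation), a Spring-based business system could locate the classes labeled with @BaseAbility at startup and write one row per capability into the table above; the Spring dependency and the data-access wiring are assumptions:

import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.context.annotation.ClassPathScanningCandidateComponentProvider;
import org.springframework.core.type.filter.AnnotationTypeFilter;
import org.springframework.jdbc.core.JdbcTemplate;

public class AbilityScanner {
    private final JdbcTemplate jdbcTemplate;  // assumed data-access helper

    public AbilityScanner(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Scan the given base package for classes annotated with @BaseAbility and
    // persist the parsed fields into the trade_domain_service capability table.
    public void scanAndStore(String basePackage) throws ClassNotFoundException {
        ClassPathScanningCandidateComponentProvider scanner =
                new ClassPathScanningCandidateComponentProvider(false);
        scanner.addIncludeFilter(new AnnotationTypeFilter(BaseAbility.class));
        for (BeanDefinition bd : scanner.findCandidateComponents(basePackage)) {
            Class<?> clazz = Class.forName(bd.getBeanClassName());
            BaseAbility ability = clazz.getAnnotation(BaseAbility.class);
            jdbcTemplate.update(
                "INSERT INTO trade_domain_service (domain_code, service_code, service_name) VALUES (?, ?, ?)",
                ability.domainCode(), ability.code(), ability.name());
        }
    }
}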
according to an embodiment of the present invention, when the business system is started, a flow execution engine embedded in the business system as a JAR (Java Archive) package is also started. The flow execution engine can be implemented with Apache Camel (a rule-based routing and mediation engine); the event-driven Apache Camel has good performance and throughput, currently provides more than three hundred extension components, and its extension mechanism is extremely convenient and flexible.
In step S220, a connection with a preset service middleware is established based on the flow execution engine embedded in the business system. According to an embodiment of the present invention, if ZooKeeper (a distributed application coordination service) is selected as the service middleware, the flow execution engine initializes a client connection to ZooKeeper. In addition, a listener for the corresponding storage path in ZooKeeper can be registered for path monitoring, so that when a flow is added or deleted, the change event on the path can be detected and handled in time.
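A minimal sketch of this initialization, written against the Apache Curator client for ZooKeeper (the client library is an assumption; the description only names ZooKeeper), might look as follows:

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.cache.PathChildrenCache;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class FlowEngineBootstrap {
    private static final String FLOW_ROOT = "/flow";  // root path assumed from the /flow/{flow code} layout described below

    public PathChildrenCache connect(String zkAddress) throws Exception {
        // Establish the client connection between the embedded flow execution engine and ZooKeeper.
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                zkAddress, new ExponentialBackoffRetry(1000, 3));
        client.start();

        // Register a listener on the flow root path so that newly added or deleted
        // flows are noticed and can be (re)loaded in time.
        PathChildrenCache cache = new PathChildrenCache(client, FLOW_ROOT, true);
        cache.getListenable().addListener((c, event) -> {
            switch (event.getType()) {
                case CHILD_ADDED:
                case CHILD_UPDATED:
                    // parse the flow information and refresh the in-memory cache
                    break;
                case CHILD_REMOVED:
                    // evict the flow from the in-memory cache
                    break;
                default:
                    break;
            }
        });
        cache.start();
        return cache;
    }
}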
Subsequently, step S230 is performed: each child node under the corresponding storage path is read from the service middleware, and for each child node a listener of the corresponding flow is created for flow monitoring. According to one embodiment of the invention, each child node corresponds to one flow, and by creating a listener for the flow, the flow information can be updated when it changes.
Since ZooKeeper stores data in a tree structure, in this embodiment it is assumed that the flow code of flow P1 is flow_cancel_order, so the storage path of flow P1 inside ZooKeeper is /flow/flow_cancel_order. The child node under this path is then read from ZooKeeper, the flow corresponding to the child node is flow P1, and a corresponding listener is created for the child node to perform real-time monitoring.
Finally, in step S240, the flow information of the flow corresponding to each child node is parsed and written into the virtual machine memory, where the flow information includes basic information, association information and flow metadata. The basic information includes the flow name, the flow code (the unique identifier of the flow), the flow description, the version number, the event that triggers the flow, the flow domain, the publication state, the orchestration file information (used to describe the execution process of the flow), the creation time, the creator, the modification time, and so on.
The association information comprises the set of business identities found in the flow / business-identity relation table according to the flow code. The flow metadata is generally in JSON (JavaScript Object Notation) format and may also be called the execution information; it includes the flow code, the flow identifier, the node information and the connection relationships, which are generally generated during service orchestration. This part will be presented in the description of the service orchestration method 300.
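Purely for illustration, the three parts of the flow information can be pictured as a simple value object; the field names below are assumptions based on the description above:

import java.util.List;
import java.util.Map;

// Assumed in-memory representation of one flow's information.
public class FlowInfo {
    // Basic information: flow name, flow code (unique identifier), description, version, trigger event, etc.
    private Map<String, String> basicInfo;
    // Association information: the set of business identities bound to this flow code.
    private List<String> businessIdentities;
    // Flow metadata (execution information) in JSON: flow code, flow identifier,
    // node information and the connection relationships between nodes.
    private String flowMetadataJson;

    // getters and setters omitted for brevity
}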
The virtual machine memory may be JVM (Java Virtual Machine) memory, and the flow information is stored in the form of key-value pairs, i.e. the flow information is written into the JVM memory. According to an embodiment of the invention, the method 200 further comprises: reading the flow metadata from the virtual machine memory and writing it into the context container of the flow execution engine.
In this embodiment, the flow metadata is read from the JVM memory, and the information of each node of each flow is parsed, by concatenating strings, into routing information for Apache Camel, which serves as the flow execution engine. The routing information is then assembled into an XML (eXtensible Markup Language) document and written into the CamelContext of Apache Camel (the runtime container built by Apache Camel, in which routing rules are executed), thereby completing the loading of the flow.
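As a sketch of the loading step: the description assembles XML route definitions, while the fragment below expresses the same idea with Camel's Java DSL to stay version-neutral; the endpoint names are assumptions:

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class FlowLoader {

    // Load one orchestrated flow into the CamelContext. Each node parsed from the
    // flow metadata becomes a step of the route; the bean names are assumed to be
    // bound in the Camel registry.
    public CamelContext load(String flowCode) throws Exception {
        CamelContext camelContext = new DefaultCamelContext();
        camelContext.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("direct:" + flowCode)
                        .to("bean:paySuccessSmsAbility")   // node 1 (assumed capability bean)
                        .to("bean:decreaseStockAbility");  // node 2 (assumed capability bean)
            }
        });
        camelContext.start();
        return camelContext;
    }
}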
The flow processing procedure relies on the result of service orchestration. For example, after the business system has scanned the classes labeled with the target annotations, extracted the relevant fields into the capability table, and started the flow execution engine to establish the connection with the service middleware, service orchestration can be started in the management background to generate the corresponding flow; the flow can then be published and its flow information written into the service middleware, so that the flow execution engine can read the required flow information from the service middleware.
Since the flow execution engine is embedded in the business system, the flow execution engine and the business system are deployed on the same computing device, the management background used for service orchestration is typically deployed on another computing device, and the database and the service middleware each run on further independent computing devices.
FIG. 3 shows a flow diagram of a service orchestration method 300 according to one embodiment of the invention. The service orchestration method 300 may be performed in a computing device (e.g., the computing device 100 described above); it should be noted that the service orchestration method 300 and the service orchestration-based flow processing method 200 are not necessarily performed in the same computing device.
As shown in fig. 3, the method 300 begins at step S310. In step S310, in response to a first operation of triggering service orchestration, a service orchestration interface is displayed, the service orchestration interface including an element list area and a canvas work area, the element list area including one or more element lists, the element lists including at least one element, and the element corresponding to data in the capability table.
According to an embodiment of the present invention, the first operation is generally a click on a service orchestration button. For example, multiple flows are displayed in a flow management list and each flow has its own service orchestration button; after selecting the flow to be orchestrated, clicking that flow's service orchestration button triggers service orchestration and displays the corresponding service orchestration interface. Of course, each flow in the flow management list needs to be created in advance, for example by creating a new flow in the management background, filling in its basic information and saving it into the flow table, where the flow table is the table at the database layer in which flows are stored.
In this embodiment, an element list is the page-level presentation of the capability table, in one-to-one correspondence with it, i.e. each element corresponds to one piece of data in the capability table. For example, an element list may be base domain - base capability, which includes 3 elements: http service, groovy service and bean service.
Subsequently, step S320 is entered, and in response to a second operation of dragging each element related to the current flow in the element list area to the canvas work area, nodes corresponding to the dragged elements are displayed in the canvas work area. For example, for each element in the element list area related to the current flow, click the element and drag the element to the canvas work area, at which time the node corresponding to the element is displayed in the canvas work area.
In step S330, node information of each node is determined in response to a third operation of information configuration for each node.
According to an embodiment of the present invention, the service orchestration interface further includes a basic configuration area, and the node information of each node can be determined in response to a third operation of configuring information for each node as follows. In this embodiment, for any node, in response to a selection operation on the node, an information configuration page corresponding to the node is displayed in the basic configuration area, and in response to a configuration operation on the node information in the information configuration page, the node information of the node is determined.
The information configuration page generally includes non-editable information such as the node type and the node coordinates, and editable information such as the node name. A node can be updated by configuring its editable information such as the node name, and the information on the information configuration page is then taken as the node information of the node.
In step S340, in response to the dragging operation between the nodes in the canvas work area, the connection relationship between the nodes is determined.
According to one embodiment of the invention, the connection relationship between the nodes can be determined in response to the dragging operation between the nodes in the canvas workspace as follows. In this embodiment, for any node, in response to a drag operation from the node to another node, when a preset condition is met, a directed connection line from the node to another node is displayed in the canvas work area according to a drag direction, and a connection relationship between the nodes is determined based on the directed connection line between the nodes.
For example, the canvas workspace has 3 nodes, which are node 1, node 2 and node 3, for node 1, in response to a drag operation from node 1 to node 2, the directional connecting lines of node 1 to node 2 are displayed in the drag direction, and in response to a drag operation from node 1 to node 3, the directional connecting lines of node 1 to node 3 are displayed in the drag direction. Finally, the connection relation between the 3 nodes is determined based on the directed connection lines between the nodes 1 and 2 and between the nodes 1 and 3.
Finally, step S350 is executed to generate flow metadata of the current flow according to the node information and the connection relationship of each node, so as to complete corresponding service arrangement. The process metadata comprises a process code, a process identifier, node information and a connection relation.
The flow code uniquely identifies a flow, while the flow identifier exists at the database level: since a flow needs to be persisted to the database after the operations on the page are finished, an identifier is generated at the database level by the auto-increment of the database table; it is unique and is recorded as the flow identifier. With the flow identifier, the corresponding flow can be opened and edited again.
According to an embodiment of the present invention, the service orchestration interface further includes a toolbar area, the toolbar area includes a flow saving control, and the method 300 further includes: and responding to the clicking operation of the flow saving control, and storing the flow metadata of the current flow into the flow table so as to save the current flow.
FIG. 4 shows a schematic diagram of a service orchestration interface according to one embodiment of the invention. As shown in FIG. 4, the service orchestration interface includes an element list region, a canvas work region, a base configuration region, and a toolbar region. The toolbar area is located at the top of the service layout interface, and the component list area, the canvas work area and the basic configuration area are sequentially arranged below the toolbar area from left to right.
In order to show the layout of the service orchestration interface while keeping the figure as simple and clear as possible, the individual elements in the element list area, the nodes and connection relationships in the canvas work area, the information configuration page of the basic configuration area, the flow saving control of the toolbar area and so on are not shown in FIG. 4.
When performing service orchestration through the service orchestration interface, the elements related to the current flow are dragged from the element list area to the canvas work area so that the nodes corresponding to the dragged elements are displayed in the canvas work area, and each node in the canvas work area is configured on the information configuration page of the basic configuration area to determine its node information. Dragging between the nodes in the canvas work area establishes the connection relationships among the nodes, and clicking the flow saving control in the toolbar area saves the current flow.
According to an embodiment of the invention, the method 300 further comprises: and binding and associating the current process with the corresponding service identity, and storing the association information into a process and service identity relation table.
In this embodiment, a series of operations between a flow and a business identity, including binding and unbinding, can be performed on a business identity binding interface. If binding is selected, the corresponding business identity is associated with the flow code of the current flow, and the association information is generated and stored in the flow / business-identity relation table. If a bound flow and business identity are to be unbound, unbinding is selected and the association between the corresponding business identity and the flow code is deleted from the flow / business-identity relation table.
According to an embodiment of the invention, the method 300 further comprises: and responding to the issuing operation of the current flow, writing the flow information of the current flow into a preset service middleware, and storing according to a storage path corresponding to the flow code of the current flow, wherein the flow information comprises basic information, associated information and flow metadata.
In this embodiment, ZooKeeper is adopted as the service middleware; the current flow is selected in the flow management list and the corresponding flow publish button is clicked, and the flow information is written into a storage path in ZooKeeper in the /flow/{flow code} format. That is, when the flow code is flow_cancel_order, the flow information of the corresponding flow is stored into ZooKeeper under the /flow/flow_cancel_order path.
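Sketched again with the Apache Curator client (an assumption, as above), publishing the flow amounts to writing its serialized flow information to that path:

import java.nio.charset.StandardCharsets;
import org.apache.curator.framework.CuratorFramework;

public class FlowPublisher {

    // Write the flow information (basic info + association info + flow metadata,
    // serialized as JSON here by assumption) to the /flow/{flow code} path in ZooKeeper.
    public void publish(CuratorFramework client, String flowCode, String flowInfoJson) throws Exception {
        String path = "/flow/" + flowCode;                        // e.g. /flow/flow_cancel_order
        byte[] data = flowInfoJson.getBytes(StandardCharsets.UTF_8);
        if (client.checkExists().forPath(path) == null) {
            client.create().creatingParentsIfNeeded().forPath(path, data);
        } else {
            client.setData().forPath(path, data);                 // an update triggers the engine's listener
        }
    }
}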
After the management background has finished the service orchestration of the current flow and published it, the ZooKeeper listener previously created by the flow execution engine receives the change notification. Based on this, according to an embodiment of the present invention, the method 200 further comprises: if the listener detects that the flow information of the corresponding flow has changed, acquiring the changed flow information from the service middleware, parsing it and writing it into the virtual machine memory. Of course, the flow metadata in the changed flow information is subsequently read from the virtual machine memory and written into the context container of the flow execution engine, so that the consistency of the flow information is maintained at all times.
After the process loading is completed, the business system can call the process. According to yet another embodiment of the invention, the method 200 further comprises: and determining the service identity, calling a process execution engine to search a process corresponding to the service identity from the process and service identity relation table through the process execution engine, and operating the searched process.
In this embodiment, the database may be queried according to the current order number to determine the business identity of the corresponding order, the execution interface of the flow execution engine is invoked, and the domain event and the business identity are passed in. The flow execution engine looks up the flow / business-identity relation table according to the domain event and the business identity to obtain the corresponding flow and runs it; if no flow is found, an error is reported directly.
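As an illustrative sketch only (the relation-table lookup abstraction and the endpoint naming are assumptions), the call from the business system into the flow execution engine could look like this:

import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;

public class FlowInvoker {

    // Look up the flow bound to the given domain event and business identity, then run it.
    // FlowRepository stands in for the query against the flow / business-identity relation table.
    public Object invoke(CamelContext camelContext, FlowRepository flowRepository,
                         String domainEvent, String businessIdentity, Object order) {
        String flowCode = flowRepository.findFlowCode(domainEvent, businessIdentity);
        if (flowCode == null) {
            throw new IllegalStateException("no flow found for " + domainEvent + "/" + businessIdentity);
        }
        ProducerTemplate template = camelContext.createProducerTemplate();
        // The endpoint URI mirrors the route defined when the flow was loaded (assumed "direct:" prefix).
        return template.requestBody("direct:" + flowCode, order);
    }

    // Assumed abstraction over the flow / business-identity relation table.
    public interface FlowRepository {
        String findFlowCode(String domainEvent, String businessIdentity);
    }
}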
According to the service orchestration scheme provided by the embodiments of the invention, when performing service orchestration, elements can be dragged and nodes connected through the service orchestration interface, which provides a visual service orchestration workbench. Middle-platform capabilities can be efficiently reused through orchestration and recombination, the extensibility of the system is increased, orchestration efficiency is greatly improved, the workload of orchestration at the code and configuration-file level is reduced, errors are less likely to occur, and the scheme is friendly to newcomers. When a flow is processed based on service orchestration, a listener is created via the service middleware to monitor flow changes, which solves the data consistency and reliability problems between the database and the flow execution engine; the mechanism of writing flow information into the virtual machine memory effectively reduces database query overhead, guarantees the efficiency of reading flow information, and improves flow execution performance.
A5, the method of any one of A1-A4, further comprising:
and binding and associating the current process with the corresponding service identity, and storing the associated information into a process and service identity relation table.
A6, the method of any one of A1-A5, further comprising:
and responding to the issuing operation of the current flow, writing the flow information of the current flow into a preset service middleware, and storing the flow information according to a storage path corresponding to the flow code of the current flow, wherein the flow information comprises basic information, associated information and flow metadata.
B11, the method as in any one of B7-B10, further comprising:
and reading the process metadata from the memory of the virtual machine, and writing the process metadata into a context container of the process execution engine.
B12, the method as in any one of B7-B11, further comprising:
determining the service identity;
and calling the process execution engine to search the process corresponding to the service identity from the process and service identity relation table through the process execution engine, and running the searched process.
According to the technical scheme of the invention, a service orchestration workbench is provided on the one hand, and a flow execution technique for orchestrated services is provided on the other hand; the flows of the middle platform are decoupled from the business logic, business isolation is achieved, and capability reuse and flexibility are truly realized. In practical applications, a visual management background is used for service orchestration and Apache Camel serves as the flow execution engine to process the orchestrated flows, which improves the shared reuse capability of the business middle platform and the iteration efficiency of development and testing, ensures the normal operation of the transaction business, responds quickly to business requirements, helps the middle platform reduce costs and improve efficiency, and ultimately enhances the industry competitiveness of the enterprise.
The various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as removable hard drives, USB flash drives, floppy disks, CD-ROMs, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Wherein the memory is configured to store program code; the processor is configured to execute the service orchestration method or the service orchestration-based flow processing method of the present invention according to instructions in the program code stored in the memory.
By way of example, and not limitation, readable media may comprise readable storage media and communication media. Readable storage media store information such as computer readable instructions, data structures, program modules or other data. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. Combinations of any of the above are also included within the scope of readable media.
In the description provided herein, algorithms and displays are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with examples of this invention. The required structure for constructing such a system is apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment, or alternatively may be located in one or more devices different from the device in this example. The modules in the foregoing examples may be combined into one module or may be further divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Moreover, those skilled in the art will appreciate that although some embodiments described herein include some features included in other embodiments, not others, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Furthermore, some of the described embodiments are described herein as a method or combination of method elements that can be performed by a processor of a computer system or by other means of performing the described functions. A processor having the necessary instructions for carrying out the method or method elements thus forms a means for carrying out the method or method elements. Further, the elements of the apparatus embodiments described herein are examples of the following apparatus: the apparatus is used to implement the functions performed by the elements for the purpose of carrying out the invention.
As used herein, unless otherwise specified, the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as disclosed herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The present disclosure is intended to be illustrative, not restrictive, of the scope of the invention, which is defined by the appended claims.

Claims (10)

1. A service orchestration method, comprising:
in response to a first operation of triggering service orchestration, displaying a service orchestration interface, the service orchestration interface comprising an element list area and a canvas work area, the element list area comprising one or more element lists, each element list comprising at least one element, and each element corresponding to data in a capability table;
in response to a second operation of dragging each element related to a current flow from the element list area to the canvas work area, displaying, in the canvas work area, nodes corresponding to the dragged elements;
in response to a third operation of configuring information for each node, determining node information of each node;
in response to a dragging operation between nodes in the canvas work area, determining connection relationships between the nodes;
and generating flow metadata of the current flow according to the node information of each node and the connection relationships, so as to complete the corresponding service orchestration.
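(For illustration only: the flow metadata referred to in claim 1 can be pictured as a small graph model in which each dragged element becomes a node carrying its configured node information and each directed connecting line becomes an edge. The Java sketch below is an assumption about one possible shape of that metadata, not the data model actually used by the patent; every class and field name is hypothetical.)

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch of the flow metadata: nodes carry the configured node
    // information, edges record the connection relationships between nodes.
    public class FlowMetadata {
        public static class Node {
            String nodeId;          // identifier of the node on the canvas
            String capabilityCode;  // code of the capability-table entry the element maps to
            String config;          // node information entered on the configuration page (e.g. JSON)
        }

        public static class Edge {
            String fromNodeId;      // source node of the directed connecting line
            String toNodeId;        // target node of the directed connecting line
        }

        String flowId;              // identifier of the current flow
        List<Node> nodes = new ArrayList<>();
        List<Edge> edges = new ArrayList<>();
    }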
2. The method of claim 1, wherein the service orchestration interface further comprises a basic configuration area, and the step of determining node information of each node in response to a third operation of configuring information for each node comprises:
for any node, in response to an operation of selecting the node, displaying, in the basic configuration area, an information configuration page corresponding to the node;
and in response to a configuration operation on the node information in the information configuration page, determining the node information of the node.
3. The method of claim 1 or 2, wherein the step of determining the connection relationships between the nodes in response to the dragging operation between nodes in the canvas work area comprises:
in response to a dragging operation from any node to another node, when the dragging operation meets a preset condition, displaying, in the canvas work area, a directed connecting line from the node to the other node according to the dragging direction;
and determining the connection relationships between the nodes based on the directed connecting lines between the nodes.
4. The method of any of claims 1-3, wherein the service orchestration interface further comprises a toolbar area, the toolbar area comprising a flow save control, and the method further comprises:
in response to a click operation on the flow save control, storing the flow metadata of the current flow into a flow table so as to save the current flow.
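(For illustration only: claim 4 ties the flow save control to persisting the flow metadata into a flow table. The minimal server-side sketch below shows one way that persistence step could look, with a hypothetical DAO interface; the patent does not specify any particular database, serialization format, or class names.)

    // Hypothetical persistence step behind the flow save control.
    public class FlowSaveService {

        // Placeholder for the flow table; the concrete storage is not specified by the patent.
        interface FlowTableDao {
            void upsert(String flowId, String flowMetadataJson);
        }

        private final FlowTableDao flowTable;

        public FlowSaveService(FlowTableDao flowTable) {
            this.flowTable = flowTable;
        }

        // Invoked when the flow save control is clicked: the current flow's metadata
        // (already serialized, e.g. as JSON) is written to the flow table.
        public void saveCurrentFlow(String flowId, String flowMetadataJson) {
            flowTable.upsert(flowId, flowMetadataJson);
        }
    }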
5. A flow processing method based on service orchestration, comprising:
starting a business system, and scanning, through the business system, classes annotated with a target annotation to extract relevant fields and store the relevant fields in a capability table;
establishing a connection with a preset service middleware based on a flow execution engine embedded in the business system;
reading, from the service middleware, each child node under a corresponding storage path, and creating a listener for the flow corresponding to each child node for flow monitoring;
and parsing flow information of the flow corresponding to each child node and writing the flow information into a virtual machine memory, wherein the flow information comprises basic information, associated information and flow metadata.
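(For illustration only: read as a whole, claim 5 describes a startup sequence: scan annotated classes into the capability table, connect the embedded flow execution engine to the service middleware, register a listener per flow, and cache the parsed flow information in the virtual machine memory. The sketch below is one hypothetical arrangement of those steps in Java; every type and method name is an assumption, and the middleware client is reduced to a placeholder interface rather than any real library API.)

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical outline of the startup sequence described in claim 5.
    public class FlowEngineBootstrap {

        // Placeholder for the service middleware; only the operations claim 5 needs are modeled.
        interface MiddlewareClient {
            List<String> listChildren(String path);      // child nodes under a storage path
            String readData(String path);                // raw flow information of one child node
            void watch(String path, Runnable onChange);  // listener registration for flow monitoring
        }

        private final MiddlewareClient middleware;
        // In-memory (virtual machine) cache of flow information keyed by flow path.
        private final Map<String, String> flowCache = new HashMap<>();

        public FlowEngineBootstrap(MiddlewareClient middleware) {
            this.middleware = middleware;
        }

        public void start(String storagePath) {
            // Scanning annotated classes into the capability table (claims 6-7) is assumed
            // to have run already as part of business-system startup.
            for (String child : middleware.listChildren(storagePath)) {
                String flowPath = storagePath + "/" + child;
                loadFlow(flowPath);
                // Re-parse whenever the listener detects a change (claim 8).
                middleware.watch(flowPath, () -> loadFlow(flowPath));
            }
        }

        private void loadFlow(String flowPath) {
            // Parsing of basic information, associated information and flow metadata is
            // elided; the raw payload stands in for the parsed result here.
            flowCache.put(flowPath, middleware.readData(flowPath));
        }
    }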
6. The method of claim 5, wherein the target annotation comprises a capability annotation and an extended annotation, and the step of scanning, through the business system, classes annotated with the target annotation to extract the relevant fields and store the relevant fields in the capability table comprises:
scanning, through the business system, classes marked with the capability annotation, parsing the capability fields in the capability annotation, and storing the capability fields in the capability table;
and scanning, through the business system, classes marked with the extended annotation, parsing the extended fields in the extended annotation, and storing the extended fields in the capability table.
7. The method of claim 6, further comprising:
creating the capability annotation in advance, wherein the capability fields in the capability annotation comprise a field code, a capability name and a capability description;
and, in the code of the business system, adding the capability annotation to each abstracted capability class and completing the attribute information of the capability annotation.
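(For illustration only: claims 6 and 7 describe a capability annotation carrying a field code, a capability name and a capability description, which the business system scans from abstracted capability classes into the capability table. A plausible Java rendering of such an annotation and its use might look like the following; the annotation name, its field names, and the class name are all hypothetical.)

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // Hypothetical capability annotation with the fields named in claim 7.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface Capability {
        String code();         // field code
        String name();         // capability name
        String description();  // capability description
    }

    // Abstracted capability class marked with the annotation; at startup the business
    // system scans such classes and writes these attribute values into the capability table.
    @Capability(code = "ORDER_CREATE", name = "Create order", description = "Creates an order record")
    class CreateOrderCapability {
        // business logic omitted
    }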
8. The method of any of claims 5-7, further comprising:
if the listener detects that the flow information of the corresponding flow has changed, obtaining the changed flow information from the service middleware;
and parsing the changed flow information and writing it into the virtual machine memory.
9. A computing device, comprising:
at least one processor; and
a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor, the program instructions comprising instructions for performing the method of any of claims 1-4 or the method of any of claims 5-8.
10. A readable storage medium storing program instructions that, when read and executed by a computing device, cause the computing device to perform the method of any of claims 1-4 or the method of any of claims 5-8.
CN202211235877.XA 2022-10-10 2022-10-10 Service arrangement method, flow processing method based on service arrangement and computing equipment Pending CN115712375A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211235877.XA CN115712375A (en) 2022-10-10 2022-10-10 Service arrangement method, flow processing method based on service arrangement and computing equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211235877.XA CN115712375A (en) 2022-10-10 2022-10-10 Service arrangement method, flow processing method based on service arrangement and computing equipment

Publications (1)

Publication Number Publication Date
CN115712375A true CN115712375A (en) 2023-02-24

Family

ID=85231001

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211235877.XA Pending CN115712375A (en) 2022-10-10 2022-10-10 Service arrangement method, flow processing method based on service arrangement and computing equipment

Country Status (1)

Country Link
CN (1) CN115712375A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117806611A (en) * 2024-02-29 2024-04-02 鱼快创领智能科技(南京)有限公司 Method for creating new service interface based on visual automatic arrangement of interface discovery
CN117806611B (en) * 2024-02-29 2024-05-14 鱼快创领智能科技(南京)有限公司 Method for creating new service interface based on visual automatic arrangement of interface discovery

Similar Documents

Publication Publication Date Title
CN108920135B (en) User-defined service generation method and device, computer equipment and storage medium
CN110727431A (en) Applet generation method and apparatus
Hausmann et al. Model-based discovery of Web Services
CN109814854B (en) Project framework generation method, device, computer equipment and storage medium
CN106233252A (en) For customizing the dynamic update contruction device of software
CN111488174A (en) Method and device for generating application program interface document, computer equipment and medium
CN107679832A (en) Task management method, device and server
US20180341633A1 (en) Providing action associated with event detected within communication
CN101883084A (en) Method, adaptor and adaptor system for adapting to network service communication,
CN110580189A (en) method and device for generating front-end page, computer equipment and storage medium
CN111068328A (en) Game advertisement configuration table generation method, terminal device and medium
CN113971037A (en) Application processing method and device, electronic equipment and storage medium
CN113312033A (en) Template protocol generation and management method
CN114594927A (en) Low code development method, device, system, server and storage medium
CN115712375A (en) Service arrangement method, flow processing method based on service arrangement and computing equipment
CN106600226A (en) Method and device used for optimizing flow management system
CN116346609A (en) Method for realizing route configuration and parameter binding based on go language
CN114048514B (en) Electronic signing workflow engine generation method and update package embedding method
CN112181407B (en) Service realization processing method, device, system, electronic equipment and storage medium
CN111464429B (en) WeChat applet multi-item compatible sharing method, system, storage medium and equipment
CN110516169B (en) Data display method, device and system and computing equipment
CN113495723A (en) Method and device for calling functional component and storage medium
CN112330304A (en) Contract approval method and device
CN112053137A (en) Flow prediction method, electronic device and server cluster
CN112613792A (en) Data processing method, system, computer device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination