CN115080609A - Method and system for realizing high-performance and high-reliability business process engine - Google Patents

Method and system for realizing high-performance and high-reliability business process engine Download PDF

Info

Publication number
CN115080609A
CN115080609A (application CN202110267028.1A)
Authority
CN
China
Prior art keywords
data
cache
flow
server
library
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110267028.1A
Other languages
Chinese (zh)
Inventor
韩光
冯文化
宋乃丰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China National Software & Service Co ltd
Original Assignee
China National Software & Service Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China National Software & Service Co ltd filed Critical China National Software & Service Co ltd
Priority to CN202110267028.1A priority Critical patent/CN115080609A/en
Publication of CN115080609A publication Critical patent/CN115080609A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/21Design, administration or maintenance of databases
    • G06F16/214Database migration support
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/21Design, administration or maintenance of databases
    • G06F16/219Managing data history or versioning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23Updating
    • G06F16/2308Concurrency control
    • G06F16/2336Pessimistic concurrency control approaches, e.g. locking or multiple versions without time stamps
    • G06F16/2343Locking methods, e.g. distributed locking or locking implementation details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23Updating
    • G06F16/2365Ensuring data consistency and integrity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24553Query execution of query operations
    • G06F16/24554Unary operations; Data partitioning operations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system
    • G06F9/5088Techniques for rebalancing the load in a distributed system involving task migration

Abstract

The invention relates to a method and a system for realizing a high-performance, high-reliability business process engine. The invention realizes clustered and containerized deployment of the business process engine, making fuller use of hardware resources and enabling efficient capacity expansion. A cache service avoids the performance bottleneck caused by excessive database operations, improving system performance. Separating the runtime library from the history library preserves the runtime process data of the production environment while keeping the runtime library at a constant, small order of magnitude over time, enabling high-performance database queries during process flow. By adding redundant fields to the process data tables, the required process information can be obtained with a single-table query during process flow, avoiding join queries between tables and reducing CPU consumption. The invention can realize a high-performance, highly available business process engine.

Description

Method and system for realizing high-performance and high-reliability business process engine
Technical Field
The invention belongs to the field of computers, relates to a method and system for implementing a high-performance, high-reliability business process engine, and is an implementation scheme based on the characteristics of business process engines.
Background
A business process management system completes process definition, management and execution through software, driven by a formal representation of the process. Its main aim is to manage the ordering of the activities in a business process and the invocation of the resources those activities need, so as to automate the business process.
Traditional business processes are hard-coded into application systems; when the business processes or the organization change, the system needs significant modification or even redesign. A business process management system solves this problem: application developers design the process with a visual process modeling tool, the business process engine parses the flow of the process and the generation of tasks, and the information about process participants is made configurable. When a process changes, the application can adapt with little or no modification. The business process engine is the core of the business process management system, mainly responsible for process parsing and for the circulation and scheduling of processes and tasks.
At present, more and more enterprises build a unified application support platform, including unified user management, unified resource management, a unified business process management system and so on, with the business process management system providing a unified process service externally. In addition, many enterprises are pursuing a middle-platform strategy, turning the business process management system into a technical middle platform that provides an enterprise-level platform for reusing process capabilities. As the number of connected systems and of their users grows, the business process engine must provide high-performance, highly available process services, so constructing such an engine is imperative.
Disclosure of Invention
To address these problems, the invention provides a method and system for implementing a high-performance, high-reliability business process engine, combining the application characteristics of business processes.
A method for implementing a high-performance, high-reliability business process engine comprises the following steps:
deploying the business process engine in a server cluster, with the application program on each server deployed in a container;
storing the definition-class data among the business process data in a cache server;
separating the runtime library from the history library, and migrating closed-state flow data to the history library;
establishing redundant fields in the business process data tables, and acquiring the required data through single-table queries during process flow.
Furthermore, the business process engine is deployed in the server cluster as a stateless process service to support clustered deployment; the principle is that the process service stores no data state, and all state is kept in independent external storage. Statelessness is embodied in:
storing session information in a redis distributed cache server;
the business process engine does not allow the same process instance to be submitted and rolled back at the same time; a global distributed lock is implemented through redis or zookeeper, so that the process service does not store lock state information;
placing the application's files in a separate file storage server.
Furthermore, in the server cluster, a reverse proxy is deployed in front of the application servers, and the reverse proxy and load balancing are implemented through Nginx; a client request is first sent to the reverse proxy, which forwards it to an application server, giving the business process engine horizontal scaling capability; load balancing eliminates single points of failure and provides fault tolerance and seamless switchover.
Further, the cache server puts the process definitions and process configuration information to be cached into a distributed cache such as redis or memcached, improving system performance.
Further, process-definition-class data is first fetched from the cache server on a read: if it exists in the cache server it is returned directly, and if not it is fetched from the database; if it exists in the database it is then placed into the cache server. When process-definition-class data is modified or deleted, the corresponding entries are evicted from the cache server as the database is updated, keeping the cache and the database consistent.
Further, the cache is deleted only after the database has been updated; if the cache is keyed by a non-primary key, the original non-primary-key cache key is queried first, and then the cache eviction is performed.
Further, separating the runtime library from the history library and migrating closed-state flow data to the history library comprises: when a process is normally submitted to completion, voided, or forcibly terminated, its flow data is migrated from the runtime library to the history library; when a process is reactivated, its flow data is migrated back from the history library to the runtime library.
Further, establishing the redundant data tables for the business process comprises: the process data is divided into process definitions, process instances, activity instances and work item instances, becoming progressively more fine-grained from left to right; fields of a table on the left are duplicated in the tables to its right, so that the required process information can be obtained with a single-table query, avoiding join queries between tables.
A high-performance, high-reliability business process engine implementation system adopting the method comprises a server cluster in which the business process engine is deployed, with the application program on each server deployed in a container; a cache module, a migration module and a redundancy module are deployed in the server cluster, wherein:
the cache module stores the definition-class data among the business process data in a cache server;
the migration module separates the application's runtime library from its history library and migrates closed-state business flow data to the history library;
the redundancy module establishes redundant fields in the business process data tables and acquires the required data through single-table queries during process flow.
The invention has the following beneficial effects:
1) Compared with traditional virtual machine deployment, containerized deployment is lighter: a single physical machine can host more containers, hardware resources are used more fully, and capacity can be expanded efficiently.
2) A reverse proxy deployed in front of the application servers implements reverse proxying and load balancing: a client request is first sent to the reverse proxy, which forwards it to an application server, giving the business process engine horizontal scaling capability. Load balancing eliminates single points of failure and provides fault tolerance and seamless switchover, minimizing the impact of hardware and software faults and providing high availability. With no single point of failure in any module, the whole business process engine is highly available.
3) The cache service avoids excessive database operations, and thus the performance bottleneck caused by frequent I/O on repeated requests; the process definitions and process configuration information to be cached are placed in a distributed cache such as redis or memcached, improving system performance.
4) Separating the runtime library from the history library preserves the runtime process data of the production environment while keeping the runtime library at a constant, small order of magnitude over time, enabling high-performance database queries during process flow and high performance of the whole business application.
5) The redundancy design in the process data tables trades space for time to improve the performance of the business process engine; the required process information can be obtained through single-table queries during process flow, avoiding join queries between tables, reducing CPU consumption and achieving high performance of the business process engine.
Drawings
FIG. 1 is a flow chart of cache fetch.
FIG. 2 is a flow chart of cache modification and deletion.
FIG. 3 is a schematic diagram of the separation of the runtime library and the history library.
Fig. 4 is a schematic diagram of redundancy.
Detailed Description
To facilitate the understanding and practice of the present invention by those of ordinary skill in the art, a detailed description of specific embodiments of the method is provided below.
The invention solves the problems of high performance and high availability of the business process from the following aspects by combining the application characteristics of the business process.
Firstly, clustering deployment and containerization deployment.
In clustered deployment, each server in the cluster is called a node, and all the nodes together form the cluster. Every node in the cluster deploys the same process service.
The business process engine is made a stateless process service to support clustered deployment, with each server node deploying the engine; the principle is that the process service stores no data state, and all state is kept in independent external storage.
Statelessness is embodied in the following aspects:
1. Session information (session control state). Session information is stored in a redis distributed cache server.
2. Distributed locks. The business process engine does not allow commit and rollback operations to be performed on the same process instance at the same time, so a global distributed lock is needed; it is implemented through redis or zookeeper, so that the process service does not store lock state information.
3. Files. These are the application's files, which are placed in a separate file storage server.
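The global distributed lock just described can be sketched as follows. This is a minimal illustration of the pattern only (set-if-absent with an expiry on acquire, compare-and-delete on release), using an in-process stand-in for the shared store so it runs without a server; all names are illustrative, and a production deployment would use redis or zookeeper as stated above.

```python
import time
import uuid
import threading

class LockServer:
    """In-process stand-in for the shared store (redis/zookeeper in the text);
    it offers only the two operations the lock pattern needs."""
    def __init__(self):
        self._data = {}              # key -> (token, expiry timestamp)
        self._mu = threading.Lock()

    def set_nx_px(self, key, token, ttl_ms):
        """Set key only if absent (or expired), with a time-to-live."""
        with self._mu:
            now = time.monotonic()
            cur = self._data.get(key)
            if cur is None or cur[1] <= now:
                self._data[key] = (token, now + ttl_ms / 1000.0)
                return True
            return False

    def del_if_token(self, key, token):
        """Delete key only if it still holds our token."""
        with self._mu:
            cur = self._data.get(key)
            if cur and cur[0] == token:
                del self._data[key]
                return True
            return False

def acquire(server, proc_inst_id, ttl_ms=30_000):
    """Try to take the per-process-instance lock; returns a token or None."""
    token = uuid.uuid4().hex
    key = f"lock:procinst:{proc_inst_id}"
    return token if server.set_nx_px(key, token, ttl_ms) else None

def release(server, proc_inst_id, token):
    """Release only if we still hold the lock: the token check avoids deleting
    a lock that expired and was re-acquired by another node."""
    return server.del_if_token(f"lock:procinst:{proc_inst_id}", token)
```

With this, a node that tries to roll back a process instance while another node is committing it simply fails to acquire the lock and retries, and no lock state lives inside the process service itself.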
Containerized deployment means deploying the application program into a Docker container. Compared with traditional virtual machine deployment it is lighter: a single physical machine can host more containers, hardware resources are used more fully, and capacity can be expanded efficiently.
A reverse proxy is deployed in front of the application servers (an application server being a server in the cluster where the application program runs); the reverse proxy and load balancing are implemented through Nginx (a high-performance HTTP and reverse-proxy web server). A client request is first sent to the reverse proxy, which forwards it to an application server, giving the business process engine horizontal scaling capability. Load balancing eliminates single points of failure and provides fault tolerance and seamless switchover, minimizing the impact of hardware and software faults and providing high availability. With no single point of failure in any module, the whole business process engine is highly available.
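A minimal Nginx configuration sketching this reverse proxy and load balancing; the upstream addresses and port are placeholders, not values from the patent:

```nginx
# Placeholder addresses for the engine nodes in the cluster.
upstream flow_engine {
    least_conn;                       # send each request to the least-busy node
    server 10.0.0.11:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.12:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.13:8080 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    location / {
        proxy_pass http://flow_engine;   # forward the client request to a node
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

A node that exceeds max_fails errors is taken out of rotation for fail_timeout, which is what provides the fault tolerance and seamless switchover described above; adding a node to the upstream block is the horizontal scaling step.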
Secondly, the caching service.
Data in a business process system falls into two classes. One is definition-class data, such as the process diagram and process configuration information; the other is instance-class data, such as the process instances, activity instances and work items produced as a process flows. Process-definition-class data is a typical read-mostly, write-rarely workload and is suitable for storage in a cache server (one server in the cluster). This avoids excessive database operations and the performance bottleneck caused by frequent I/O on repeated requests; the process definitions and process configuration information to be cached are placed in a distributed cache such as redis or memcached, improving system performance.
When process-definition-class data is read, it is first fetched from the cache server: if present it is returned directly; if not, it is fetched from the database, and if found there it is placed into the cache server. When process-definition-class data is modified or deleted, the corresponding entries are evicted from the cache server as the database is updated, keeping the cache and the database consistent.
Thirdly, separating the runtime library from the history library.
Here the runtime library is the database holding the data of processes still running, and the history library is the archive database for completed processes.
A process instance is in either an open state or a closed state. A characteristic of process applications is that closed-state flow data is only needed for completed-task queries and statistical analysis, so it can be migrated to the history library at the moment of closing. This preserves the production environment's runtime process data while keeping the runtime library at a constant, small order of magnitude over time, enabling high-performance database queries during process flow.
By providing process data archiving and monitoring (i.e., monitoring the archived process data), the runtime and history libraries of process-related business data are kept separate, improving the performance of the business system and achieving high performance for the whole business application.
Fourthly, redundancy.
A redundancy design is applied to the process data tables, trading space for time to improve the performance of the business process engine. The process data is divided into process definitions -> process instances -> activity instances -> work item instances, becoming progressively more fine-grained from left to right, and fields of a table on the left are duplicated in the tables to its right; for example, the process definition id is stored in the process instance table, and the process instance id and process definition id are stored in the activity instance table. Thus single-table queries suffice during process flow: the required process information can be obtained through a single-table query, avoiding join queries between tables, reducing CPU consumption and achieving high performance of the business process engine.
In conclusion, the invention can realize the high-performance and high-availability business process engine.
In an embodiment of the present invention, a method for implementing a high-performance and high-reliability business process engine is provided, which includes the following specific steps:
First, the caching service.
The cache read path of the invention, as shown in FIG. 1, comprises the following steps:
1) the client calls the query service;
2) the query service first looks in the cache;
3) if the record is found in the cache, it is returned to the client directly;
4) if it is not in the cache, it is fetched from the database;
5) if a record is found in the database, the query result is put into the cache and returned to the client.
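The five steps above are the classic cache-aside read path. A minimal sketch, with an in-memory dict standing in for redis/memcached and sqlite standing in for the process database; the `proc_def` table and its columns are illustrative, not the patent's schema:

```python
import sqlite3

cache = {}  # stands in for the redis/memcached cache server

def get_proc_def(db, def_id):
    """Cache-aside read: try the cache, fall back to the DB, populate on miss."""
    if def_id in cache:                       # steps 2-3: cache hit
        return cache[def_id]
    row = db.execute(
        "SELECT def_id, name, config FROM proc_def WHERE def_id = ?",
        (def_id,),
    ).fetchone()                              # step 4: cache miss, query the DB
    if row is not None:                       # step 5: populate the cache
        cache[def_id] = row
    return row

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE proc_def (def_id TEXT PRIMARY KEY, name TEXT, config TEXT)")
db.execute("INSERT INTO proc_def VALUES ('d1', 'leave-request', '{}')")
```

The first call for a given definition pays the database round trip; every later call is served from the cache, which is why the read-mostly definition data benefits most.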
The cache eviction mechanism of the invention, which keeps the cache and the database transactionally consistent, as shown in FIG. 2, comprises the following steps:
1) the client calls the modify or delete logic (the modify-or-delete service in FIG. 2), i.e., the logic that decides whether a record is modified or deleted;
2) the database is modified or the record deleted;
3) the cached record is then deleted.
Two points deserve attention here:
1. Under high concurrency, if the cache were cleared first, before the database is updated, another thread could call the query service in the meantime and put the old record back into the cache after a successful query, leaving the cache and the database inconsistent. Hence the database is updated first.
2. If the cache is keyed by a non-primary key, the original non-primary-key cache key must be queried first before evicting, because the non-primary-key value sent from the client may already be the modified value.
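The write path (database first, then cache eviction) can be sketched in the same style; the table name and columns are illustrative, with a dict standing in for the cache server:

```python
import sqlite3

cache = {}  # stands in for the redis/memcached cache server

def update_proc_def(db, def_id, new_config):
    """DB first, then evict: writing the database before deleting the cache
    entry closes the race described above, where a concurrent reader could
    re-cache the stale row between an early eviction and the database write."""
    db.execute("UPDATE proc_def SET config = ? WHERE def_id = ?",
               (new_config, def_id))
    db.commit()
    cache.pop(def_id, None)   # evict only after the write has landed

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE proc_def (def_id TEXT PRIMARY KEY, name TEXT, config TEXT)")
db.execute("INSERT INTO proc_def VALUES ('d1', 'leave-request', '{}')")
cache["d1"] = ("d1", "leave-request", "{}")   # simulate a previously cached read
```

After the eviction, the next read misses the cache and repopulates it from the freshly updated database row.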
Second, separating the runtime library from the history library.
As shown in FIG. 3, process data (process instances, activity instances, work items, etc.) is migrated from the runtime library to the history library when the process is normally submitted to completion, is voided, or is forcibly terminated; when a process is reactivated, its data is migrated back from the history library to the runtime library.
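A minimal sketch of this runtime/history migration, with two sqlite tables standing in for the two libraries; the table and column names are illustrative, not the patent's schema:

```python
import sqlite3

db = sqlite3.connect(":memory:")
# run_proc_inst plays the runtime library, hist_proc_inst the history library.
for t in ("run_proc_inst", "hist_proc_inst"):
    db.execute(f"CREATE TABLE {t} (inst_id TEXT PRIMARY KEY, def_id TEXT, state TEXT)")

def close_process(db, inst_id, final_state):
    """On normal completion, voiding, or forced termination, move the row out
    of the runtime library so the runtime library stays small."""
    db.execute("UPDATE run_proc_inst SET state = ? WHERE inst_id = ?",
               (final_state, inst_id))
    db.execute("INSERT INTO hist_proc_inst "
               "SELECT * FROM run_proc_inst WHERE inst_id = ?", (inst_id,))
    db.execute("DELETE FROM run_proc_inst WHERE inst_id = ?", (inst_id,))
    db.commit()

def reactivate_process(db, inst_id):
    """Reactivation moves the data back the other way and reopens it."""
    db.execute("INSERT INTO run_proc_inst "
               "SELECT * FROM hist_proc_inst WHERE inst_id = ?", (inst_id,))
    db.execute("UPDATE run_proc_inst SET state = 'open' WHERE inst_id = ?",
               (inst_id,))
    db.execute("DELETE FROM hist_proc_inst WHERE inst_id = ?", (inst_id,))
    db.commit()

db.execute("INSERT INTO run_proc_inst VALUES ('pi-1', 'd1', 'open')")
```

Because every closed instance leaves the runtime tables at the moment of closing, queries issued during process flow scan only the small, bounded runtime data set.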
Third, redundancy.
The process data is divided into process definitions -> process instances -> activity instances -> work item instances, becoming progressively more fine-grained from left to right, and fields of a table on the left are duplicated in the tables to its right. As shown in FIG. 4, the process instance redundantly stores the modelId (model id), pkgId (package id) and pkgVersion (package version) of its process definition; the activity instance redundantly stores the process instance's information, such as proInstId (process instance id) and proDefId (process definition id); and the work item redundantly stores the information of the process instance and of the activity instance, such as actInstId (activity instance id) and actDefId (activity definition id). This guarantees that the corresponding data can be obtained with a single-table query as the business process engine moves a process along, achieving high performance of the process engine. The work item's own identifier appears as workItemId in the work item table of FIG. 4.
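The effect of the redundant fields can be sketched as follows: because the work item row duplicates identifiers from the process and activity levels, the queries issued during process flow stay single-table. The table and column names here are illustrative, not the patent's exact schema.

```python
import sqlite3

db = sqlite3.connect(":memory:")
# The work_item table duplicates identifiers from the tables "to its left",
# so no join against the activity or process instance tables is needed.
db.execute("""CREATE TABLE work_item (
    work_item_id TEXT PRIMARY KEY,
    act_inst_id  TEXT,   -- redundant: copied from the activity instance
    act_def_id   TEXT,   -- redundant: copied from the activity definition
    pro_inst_id  TEXT,   -- redundant: copied from the process instance
    pro_def_id   TEXT,   -- redundant: copied from the process definition
    assignee     TEXT,
    state        TEXT)""")
db.execute("INSERT INTO work_item VALUES ('w1','a1','approve','pi-1','d1','alice','open')")
db.execute("INSERT INTO work_item VALUES ('w2','a2','review','pi-2','d1','bob','open')")

def open_items_for(db, assignee):
    """Single-table query: the worklist, with its process and activity
    context, comes from work_item alone."""
    return db.execute(
        "SELECT work_item_id, pro_inst_id, act_def_id FROM work_item "
        "WHERE assignee = ? AND state = 'open'", (assignee,)).fetchall()
```

Without the redundant columns, the same worklist query would need joins through the activity instance and process instance tables, which is exactly the CPU cost the design avoids.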
Another embodiment of the invention provides a high-performance, high-reliability business process engine implementation system using the above method. It comprises a server cluster in which the business process engine is deployed, with the application program on each server deployed in a container; a cache module, a migration module and a redundancy module are deployed in the server cluster, wherein:
the cache module stores the definition-class data among the business process data in a cache server;
the migration module separates the application's runtime library from its history library and migrates closed-state business flow data to the history library;
the redundancy module establishes redundant fields in the business process data tables and acquires the required data through single-table queries during process flow.
The foregoing disclosure of the specific embodiments of the present invention and the accompanying drawings is directed to an understanding of the present invention and its implementation, and it will be appreciated by those skilled in the art that various alternatives, modifications, and variations may be made without departing from the spirit and scope of the invention. The present invention should not be limited to the disclosure of the embodiments and drawings in the specification, and the scope of the present invention is defined by the scope of the claims.

Claims (9)

1. A method for implementing a high-performance, high-reliability business process engine, characterized by comprising the following steps:
deploying the business process engine in a server cluster, with the application program on each server deployed in a container;
storing the definition-class data among the business process data in a cache server;
separating the runtime library from the history library, and migrating closed-state flow data to the history library;
establishing redundant fields in the business process data tables, and acquiring the required data through single-table queries during process flow.
2. The method of claim 1, wherein the deploying of the business process engine in the server cluster is performed by making the business process engine a stateless process service to support clustered deployment, wherein the principle is that the process service does not store data states, and all the states are stored in an independent external storage; the stateless state is embodied in:
storing the session information into a redis distributed cache server;
the business process engine does not allow the same process instance to be submitted and returned, global distributed locking is realized through redis or zookeeper, and the process service is prevented from storing the lock state information;
the files of the application are placed in a separate file storage server.
3. The method according to claim 1, wherein in the server cluster, a reverse proxy is deployed in front of an application server, and the reverse proxy and the load balancing are realized through Nginx; the request of the client is sent to a reverse proxy firstly, and the reverse proxy forwards the request to an application server to realize the transverse expansion capability of the business process engine; and single point of failure is eliminated through load balancing, and fault tolerance and seamless switching capability are provided.
4. The method according to claim 1, wherein the cache server puts the flow definition and the flow configuration information that need to be cached into a distributed cache redis or memcached, so as to achieve the goal of improving system performance.
5. The method of claim 1, wherein the data of the flow definition class is first obtained from a cache server during reading, and if the data exists in the cache server, the data is directly returned, and if the data does not exist, the data is obtained from a database; if the data exists in the database, the data is placed into a cache server; when the data of the flow definition class is modified and deleted, corresponding information is emptied from the cache server while the data of the database is modified, and the data consistency of the cache and the database is ensured.
6. The method of claim 5, wherein the cache is deleted after the database is operated; if the cache is the non-primary key cache, the original non-primary key cache key is inquired, and then cache clearing operation is carried out.
7. The method of claim 1, wherein separating the runtime library from the historian library and migrating the flow data in the closed state to the historian library comprises: when the flow is normally submitted to the end, the flow is invalidated or the flow is forcibly finished, the flow data is migrated from the operation library to the history library; and migrating the flow data from the history library to the running library when the flow is revived.
8. The method of claim 1, wherein the creating a redundant data table for the business process comprises: the process data is divided into process definitions, process examples, activity examples and work item examples, the process data is gradually more detailed from left to right, fields of a left database table are redundant in a right database table, required process information can be obtained through single table query, and associated query between tables is avoided.
9. A high-performance, high-reliability business process engine implementation system adopting the method of any one of claims 1 to 8, characterized by comprising a server cluster in which the business process engine is deployed, with the application program on each server deployed in a container; a cache module, a migration module, and a redundancy module are deployed in the server cluster, wherein:
the cache module stores the definition-class data among the business process data into a cache server;
the migration module separates the application program's runtime library from its history library and migrates business flow data in the closed state into the history library; and
the redundancy module creates a redundant data table for the business process, so that corresponding data is obtained through single-table queries during business process flow transitions.
CN202110267028.1A 2021-03-10 2021-03-10 Method and system for realizing high-performance and high-reliability business process engine Pending CN115080609A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110267028.1A CN115080609A (en) 2021-03-10 2021-03-10 Method and system for realizing high-performance and high-reliability business process engine


Publications (1)

Publication Number Publication Date
CN115080609A true CN115080609A (en) 2022-09-20

Family

ID=83240417


Country Status (1)

Country Link
CN (1) CN115080609A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination