CN110032571B - Business process processing method and device, storage medium and computing equipment - Google Patents


Info

Publication number
CN110032571B
Authority
CN
China
Prior art keywords
node
instance
business process
database
service
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910313907.6A
Other languages
Chinese (zh)
Other versions
CN110032571A (en)
Inventor
马海刚
邓磊
马维宁
常震华
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910313907.6A
Publication of CN110032571A
Application granted
Publication of CN110032571B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval of structured data, e.g. relational data
    • G06F 16/23: Updating
    • G06F 16/2379: Updates performed during online database operations; commit processing
    • G06F 16/24: Querying
    • G06F 16/245: Query processing
    • G06F 16/2455: Query execution
    • G06F 16/24552: Database cache management
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00: Administration; Management
    • G06Q 10/10: Office automation; Time management
    • G06Q 10/103: Workflow collaboration or project management
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

In the business process processing method, apparatus, storage medium, and computer device provided herein, a first database is used to cache the instance data of unfinished business process instances and a second database is used to store the instance data of finished business process instances, thereby achieving dynamic expansion of storage space and avoiding the situation in which all instance data are cached in the first database, occupy its storage space, and interfere with the normal operation of other business process instances. Meanwhile, when the instance data of a finished business process instance needs to be consulted, the second database is read directly, which reduces the number of accesses to the first database and improves the availability of the write service. Moreover, owing to the high-concurrency processing capability of the first database, highly concurrent access to the instance data of business process instances is achieved while data consistency is guaranteed.

Description

Business process processing method and device, storage medium and computing device
Technical Field
The present application relates to the field of communications technologies, and in particular, to a business process processing method and apparatus, a storage medium, and a computer device.
Background
A workflow engine treats the workflow as a part of an application system and provides each application system with a core solution that has decision-making capability and can determine routing, content level, and the like of information transfer according to roles, division of labor, and conditions. A workflow engine generally includes important functions such as node management, flow-direction management, and process instance management.
At present, commonly used workflow engines such as K2 and Activiti all operate in a basic database-transaction mode and are therefore limited by the concurrent processing capability of the underlying database; the concurrency of these existing engines is mediocre. For example, fig. 1 shows the amount of data the K2 workflow engine processes per second in different scenarios; as can be seen directly from the last column of the table in fig. 1, the parallel processing performance is poor and the number of transactions processed per second is very low.
Disclosure of Invention
Embodiments of the present application provide a business process processing method and apparatus, a storage medium, and a computer device, which achieve high concurrency of the instance data of business process instances while ensuring data consistency, and which realize dynamic expansion of storage space by storing instance data in different states in two databases.
In order to achieve the above purpose, the embodiments of the present application provide the following technical solutions:
a business process processing method, the method comprising:
detecting the instance state of each node in the business process instance;
under the condition that the instance state indicates that the business process instance is not finished, caching instance data of the business process instance to a first database;
and under the condition that the instance state indicates that the business process instance is ended, synchronizing the instance data of the business process instance cached in the first database to a second database, and deleting the instance data of the business process instance cached in the first database.
A business process processing apparatus, the apparatus comprising:
the detection module is used for detecting the instance state of each node in the business process instance;
the first storage module is used for caching the instance data of the business process instance to a first database under the condition that the instance state indicates that the business process instance is not finished;
and the second storage module is used for synchronizing the instance data of the business process instance cached in the first database to a second database, and deleting the instance data of the business process instance cached in the first database, in the case that the instance state indicates that the business process instance has ended.
A storage medium having stored thereon a computer program for execution by a processor to implement the steps of the business process processing method as described above.
A computer device, the computer device comprising:
a communication interface;
a memory for storing a computer program for implementing the business process processing method as described above;
and a processor, configured to load and execute the computer program stored in the memory so as to implement the steps of the business process processing method described above.
Based on the above technical solutions, in the business process processing method, apparatus, storage medium, and computer device provided in the embodiments of the present application, the first database is used to cache the instance data of unfinished business process instances, the second database is used to store the instance data of finished business process instances, and the instance data of finished processes in the first database is periodically synchronized to the second database. This achieves dynamic expansion of storage space and avoids the situation in which all instance data are cached in the first database, occupy its storage space, and interfere with the normal operation of other business process instances. Meanwhile, when the instance data of a finished business process instance needs to be consulted, the second database is read directly, which reduces the number of accesses to the first database and improves the availability of the write service. Moreover, owing to the high-concurrency processing capability of the first database, highly concurrent access to the instance data of business process instances is achieved while data consistency is guaranteed.
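As an illustrative sketch only (not the patent's actual implementation), the detect-cache-synchronize behavior described above can be modeled with plain Python dicts standing in for the first (cache) and second (persistent) databases; the names `handle_instance`, `instance_state`, and the node/instance field layout are assumptions for illustration:

```python
FINISHED = "finished"
RUNNING = "running"

first_db = {}   # stand-in for the cache database (e.g. Redis)
second_db = {}  # stand-in for the persistent database (e.g. MySQL)

def instance_state(instance):
    """Detect the instance state: finished only when every node has finished."""
    if all(node["state"] == FINISHED for node in instance["nodes"]):
        return FINISHED
    return RUNNING

def handle_instance(instance):
    key = instance["id"]
    if instance_state(instance) != FINISHED:
        # Unfinished: cache the instance data in the first database.
        first_db[key] = instance
    else:
        # Finished: synchronize to the second database, then delete the cached copy.
        second_db[key] = instance
        first_db.pop(key, None)
```

A finished instance therefore ends up only in the second database, which is what keeps the cache from filling with completed processes.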
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present application; for those skilled in the art, other drawings can be derived from the provided drawings without creative effort.
FIG. 1 is a diagram illustrating test results of a K2 workflow engine;
fig. 2 is a schematic structural diagram of a business process processing system according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of another business process processing system provided in the embodiment of the present application;
fig. 4 is a schematic flowchart of a service flow processing method according to an embodiment of the present application;
fig. 5 is a schematic flowchart of another business process processing method according to an embodiment of the present application;
FIG. 6 is a control class diagram provided in accordance with an embodiment of the present application;
fig. 7 is a service flow definition diagram provided in the embodiment of the present application;
fig. 8 is a schematic flow chart of a business process processing method in an application scenario for the business process definition diagram shown in fig. 7 according to an embodiment of the present application;
fig. 9 is a storage object relationship design diagram of a first database in a business process processing method according to an embodiment of the present application;
fig. 10 is a flowchart illustrating a business process processing method according to the business process definition diagram shown in fig. 7 according to an embodiment of the present application;
fig. 11 is a schematic flowchart of another business process processing method provided in this embodiment for the business process definition diagram shown in fig. 7;
fig. 12 is a schematic diagram of a pressure measurement result of a business process processing method provided in the embodiment of the present application;
FIG. 13 is a schematic diagram of a pressure measurement result of a conventional business process processing method;
fig. 14 is a business process diagram obtained by a business process processing method according to an embodiment of the present application;
fig. 15 is a schematic diagram of a request time-consuming test result of a business process processing method according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of a business process processing apparatus according to an embodiment of the present application;
fig. 17 is a schematic structural diagram of another business process processing apparatus according to an embodiment of the present application;
fig. 18 is a schematic hardware structure diagram of a computer device according to an embodiment of the present application.
Detailed Description
As the analysis of the background art above shows, in practical applications the existing workflow engines bear heavy database load and exhibit poor concurrency in order to guarantee data consistency, and are therefore unsuitable for multi-instance, high-concurrency interactive business scenarios. In view of this, the inventors of the present application set out to provide a new high-performance workflow engine that supports common sequential, branching, exclusive (XOR), and countersign processes, also supports dynamically adding discussion-type and countersign-type to-do items during a process, and at the same time satisfies the respective requirements of these processes.
Specifically, the present application can exploit the single-threaded atomic-operation characteristics of the Redis database and, on the basis of memory plus the Redis database, achieve high concurrency of process instance interaction data while guaranteeing data consistency. Concretely, finished process instance data is stored in a persistent database (such as MySQL or SQL Server), while unfinished process instance data is stored in the Redis database (a key-value database). The high concurrency of the Redis interfaces thus guarantees the parallel operation of unfinished process instances; at the same time, finished process instance data does not occupy the Redis database, which reduces the occupation of its memory space, avoids service unavailability caused by insufficient memory, reduces the number of accesses to the Redis database, and improves the availability of the write service.
In addition, addressing the problems that existing workflow engines are difficult to trace and locate in, and that flow transitions are unclear because flow circulation is embedded in the business processing procedure, the inventors of the present application further propose separating flow circulation from business logic processing, so that a business system implements business process instance circulation by calling the relevant Application Programming Interfaces (APIs). This removes the strong coupling between business-system data processing and business process circulation, makes business process handling clearer, and lets each business system focus only on the business logic of its own links.
Based on the above improvements, the new workflow engine implementing this business process processing method can adopt the JSON format, giving it better readability and portability. Meanwhile, a synchronous interaction mode replaces the existing asynchronous processing mode, so that exception information is returned more promptly and the abnormal node is located accurately and in time. Further, the flow can be defined graphically using SVG (Scalable Vector Graphics), making instance state tracing more intuitive.
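To make the JSON-format flow definition concrete, here is a minimal hypothetical example; the field names (`processDefId`, `nodes`, `transitions`, and so on) are assumptions for illustration and not the patent's actual schema:

```python
import json

# A tiny, hypothetical process definition for a recruitment flow.
definition = {
    "processDefId": "recruit-001",   # flow definition ID used to distinguish definitions
    "name": "recruitment",
    "nodes": [
        {"id": "start", "type": "start"},
        {"id": "hr_review", "type": "task"},
        {"id": "end", "type": "end"},
    ],
    "transitions": [
        {"from": "start", "to": "hr_review"},
        {"from": "hr_review", "to": "end"},
    ],
}

# The definition would be persisted and exchanged as a JSON string.
definition_json = json.dumps(definition)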
The application can adopt the HTTP communication protocol to interact with each service system, thereby reducing the learning cost and facilitating the access of new processes.
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
Referring to fig. 2, which is a schematic diagram of a system architecture for implementing the business process processing method provided by the present application, the system may include a web server 11, an application server 12, a first database 13, and a second database 14, where:
the web page servers 11 are deployed in the web layer, and are mainly used to deploy process definition and site observation, introduce web load, and support server horizontal expansion, etc., and the number of the web page servers 11 deployed in the web layer is not limited in the present application.
Referring to an architecture diagram formed by system modules shown in fig. 3, the process definition and observation sites are mainly used for making standard process definition specifications and providing an SVG graphical process definition (JSON protocol storage) tool, a process instance data operation maintenance tool and the like, so that user process definition and instance follow-up are more convenient, faster and more intuitive. Specific uses of the tools mentioned in connection with the present embodiment may be found in the description of the corresponding parts of the method embodiments below.
The application server 12 may be a service device providing a certain service, and in practical applications, a client matched with the application server 12, that is, an application program implementing the service, is usually set on a user terminal side. It can be seen that, for different services supported by the system, corresponding application servers are usually deployed, and the application does not limit the type and number of the application servers.
In this application, the application server 12 is used as an intermediate layer of the entire system, and in combination with the architecture diagram formed by the system modules shown in fig. 3, the application server 12 may be mainly used to provide functions such as API service, logic processing, data adaptation, and introducing a load layer to the outside, and also support the server lateral expansion and provide some basic services.
As shown in fig. 3, the workflow engine often provides multiple external API interfaces; specifically, a corresponding callable message interface may be configured for each application scenario, such as creating an instance, terminating a flow, submitting a to-do, pulling a discussion, forwarding a flow, rejecting a flow, the to-do center, and the instance center, so that the business system initiates a service request message to the workflow engine by calling the required message interface; the specific implementation is not limited in this application. The workflow engine provided by the present application can therefore publish an interface list consisting of multiple API interfaces and their functions, so that developers can select the API interfaces they need; the HTTP API interaction mode makes access for a business system simpler.
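As a sketch of how a business system might call such an interface list, the snippet below maps scenario names to hypothetical HTTP paths and assembles a request; the paths and field names are assumptions for illustration, not the patent's actual API:

```python
import json

# Hypothetical mapping of application scenarios to callable HTTP interfaces.
API_PATHS = {
    "create_instance": "/engine/instance/create",
    "terminate_flow": "/engine/flow/terminate",
    "submit_todo": "/engine/todo/submit",
}

def build_request(interface, payload):
    """Build the (method, path, body) triple for an HTTP call to the engine.

    The business system only picks an interface and supplies its payload;
    it does not need to know the engine's internal flow logic.
    """
    return ("POST", API_PATHS[interface], json.dumps(payload))
```

For example, `build_request("create_instance", {"processDefId": "recruit-001"})` would yield the method, path, and JSON body to send.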
The logic processing to be realized by the application server can be mainly used for realizing the state updating operation of all process instances, and is a logic processing core component of the whole workflow engine. In this embodiment, in order to meet different logic processing requirements of a service, referring to fig. 3, several parts such as an access security control center, a user socket center, an instance memory data context, a Redis instance context, and a process instance control center may be deployed, but not limited thereto, and regarding a working process of each part, reference may be made to the description of a corresponding part of the method embodiment below, and this embodiment is not described in detail here.
The adaptation layer that implements data adaptation in the application server can be divided, according to the service domain, into a DBAdapter layer and a RedisAdapter layer, and is mainly used to implement dynamic query and storage of process data; for the specific implementation, refer to the description of the corresponding part of the method embodiments below.
In addition, for some basic services implemented by the application server, such as the services of the organization structure information, the report chain information, the personnel basic information, the messages such as the mail/short message/WeChat, and the log, which can be provided by the basic service center in fig. 3, the service content included in the basic service center and the implementation method thereof can be determined according to actual needs, and are not limited to the service content shown in fig. 3.
The first database 13 and the second database 14 may both be database servers, but of different types. In the present application, the first database 13 is mainly used to store unfinished business process instance data and may be a Redis database. The second database 14 may be used to back up all process instance data and to store all finished process instance data; it may be a MySQL database server, but is not limited to such a relational database.
Therefore, during the operation of the service, all the corresponding process instance data can be stored in the first database 13, and when the service is finished, all the process instance data of the service stored at this time can be synchronized to the second database 14, and then the process instance data of the service stored in the first database 13 is cleared, so that the occupation of the memory space of the first database is reduced, and the normal use of other service is ensured.
Based on the classified storage mode of process instance data provided by the present application, finished process instance data can be read directly from the second database 14, which reduces the number of access requests to the first database and improves the availability of the write service. The classified storage thus makes the first database's data storage scalable and provides a highly concurrent interactive interface externally while ensuring data consistency.
Optionally, as shown in fig. 2, the first database and the second database may both adopt a master-slave mode to store the data of the business process instance, and a specific implementation process of the storage is not described in detail in this application, and is not limited to the two sets of master-slave databases shown in fig. 2.
In addition, it should be noted that, for the system structure of the three-layer network deployment shown in fig. 2, the system structure is only an example of a system architecture for implementing the service flow processing method provided by the present application, and is not limited to the structure shown in fig. 2, nor to two groups of servers provided by each layer, and the number of servers in each layer may be flexibly adjusted according to actual development requirements, and details are not listed in this application.
In conjunction with the system structures shown in fig. 2 and fig. 3, and to make the working process of the workflow engine in a concrete application scenario clearer, the following describes a complete business processing flow. Referring to the signaling flow diagram of an embodiment of the business processing method provided by the present application shown in fig. 4, the method may be applied on the service side and may include, but is not limited to, the following steps:
step S101, a client side initiates a certificate acquisition request to an authentication system of an application server;
in this embodiment, the authentication system may be a user credential issuance center deployed in a logic processing layer of the application server, and may be configured to issue a credential token indicating an identity of a user for the user, and also be an access credential that ensures that the user can access an application in the system. In this way, the user client can carry the credential to access other applications, so that the other applications can identify the user identity based on the credential to determine whether the user is allowed to access the application.
The credential obtaining request may generally include login information input by a user, so that the user can verify the identity of the user accordingly, and the content included in the login information is not limited in the present application, such as a user identity ID, a user image, and the like, and may be determined according to the requirement of the authentication system for user identity authentication.
Step S102, the authentication system responds to the certificate acquisition request and distributes corresponding identity certificates for the client users;
step S103, the authentication system feeds the identity certificate back to the client;
Following the above analysis, upon receiving the credential acquisition request the authentication system may verify the login information contained therein; once it determines that the user who initiated the request through the client has access rights, i.e. that the login information is valid, it may assign the user a unique identity credential (ticket) and feed it back to the client.
Optionally, the identity credential (ticket) may include information such as a temporary token generated with the SHA1 (Secure Hash Algorithm 1) signature algorithm, an AES key for symmetric encryption and decryption of messages, and the credential expiration time.
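A minimal sketch of issuing such a ticket, using only the Python standard library; the function name, field names, and TTL are assumptions, and actual AES use is omitted (only a random key is generated):

```python
import hashlib
import os
import time

def issue_ticket(user_id, ttl_seconds=3600):
    """Issue a hypothetical identity credential: a SHA1-based temporary
    token, a random 128-bit key for AES message encryption, and an
    expiration time."""
    raw = f"{user_id}:{os.urandom(16).hex()}:{time.time()}"
    return {
        "token": hashlib.sha1(raw.encode()).hexdigest(),  # 40-char temporary token
        "aes_key": os.urandom(16).hex(),  # key material; AES itself not shown here
        "expires_at": time.time() + ttl_seconds,
    }
```

The client would carry this ticket when accessing other applications, and the expiration time bounds how long the credential remains usable.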
Step S104, the client generates a service request message by using the identity information;
in this embodiment, when a user needs to obtain a certain type of message of a certain business process instance, after the user obtains an identity credential of the user from an authentication system using a client, a header field of the request, such as a unique user identifier, a SHA1 signature of a request message, and the like, may be customized according to the identity credential. And then, the AES algorithm key can be symmetrically encrypted and decrypted by using the message in the identity certificate to encrypt the request message, and the encrypted request message is used to generate the service request message, so that the security of the service request sent to the outside is ensured, and the specific encryption process is not described in detail in the application.
Optionally, in different application scenarios, the message type of the service request message generated in this embodiment may include: initiating a message, submitting a pending message, terminating a flow message, forwarding a message, signing a pending message, dismissing a message, etc., based on different types of messages, a service request of a corresponding type will be generated, but is not limited to the types listed herein.
In combination with the system architecture shown in fig. 2, the application may display the list of callable message interfaces provided by the workflow engine on the client side, and the user may select the message interface required for initiating the current business flow, so that the corresponding service request message is generated by calling that interface and sent to the workflow engine. For example, if the user needs to initiate a recruitment process, a message interface whose message type is an initiation message can be selected, thereby calling the interface and informing the workflow engine of the user's current business requirement; the specific implementation is not limited by the present application.
It can be seen that, in the service system of this embodiment, a mode of calling a message interface provided by the workflow engine is adopted to initiate a corresponding service request message, so that the workflow engine executes a corresponding service logic based on the service request message without paying attention to the service itself, and the workflow engine provided by the present application separates the service flow from the service logic, so that the service flow processing is clearer, and it is convenient to trace and locate an abnormal node in the service flow instance.
Moreover, the application adopts an HTTP API interactive mode, simplifies the access of a service system and reduces the learning cost.
Step S105, the client sends the service request message to a workflow engine;
step S106, the workflow engine acquires the identity certificate of the user from the authentication system;
step S107, the workflow engine uses the identity certificate to check the validity of the service request message;
step S108, the workflow engine acquires the message content of the service request message according to the identity voucher under the condition that the verification is passed;
in this embodiment, after receiving a service request message sent by a user, the workflow engine may first verify validity of the service request message, that is, whether the user initiating the service request message has a legal right, so that the workflow engine may obtain an identity credential of the user from the authentication system, and further perform validity check on the service request message by using the identity credential, and a specific checking process is not described in detail in this embodiment.
After the verification, the message content of the service request message is decrypted by using the key in the obtained identity credential to obtain the message content requested by the user, where the decryption process corresponds to the encryption process, and details of this embodiment are not described in detail.
Step S109, the workflow engine acquires the business process instance matched with the message content, and stores the instance data of the business process instance to a Redis database;
step S110, the workflow engine processes the business process instance and updates the instance state of the corresponding node;
step S111, the workflow engine feeds back the updated instance state and the instance data of the business process instance to the client.
It should be understood that, for a service requested to be operated by a user, a service flow instance is not finished, and therefore, after the service flow instance matched with the message content of the service request message initiated by the user is obtained, that is, after the workflow engine responds to the service request message and obtains the service flow instance requested this time, the instance data of the service flow instance may be cached in a Redis database (that is, the first database in the system architecture), and a specific storage manner is not limited.
Optionally, since Redis is a key-value database, the instance ID of each service flow instance (its unique identifier) may be used as the key and the corresponding instance data (which may be in JSON format) as the value, so that the instance data of different service flow instances can be stored separately. When the instance data of a particular business process instance is needed, it can then be retrieved directly using that instance's ID.
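As a rough illustration of this keying scheme, the sketch below uses hypothetical names and an in-memory dict standing in for Redis; in a real deployment the `SET`/`GET` commands would play the same role, with the instance ID as key and JSON-serialized instance data as value.

```python
import json

class InstanceCache:
    """In-memory stand-in for the Redis key-value cache of instance data."""

    def __init__(self):
        self._store = {}  # key: instance ID, value: JSON string

    def save(self, instance_id, instance_data):
        # Serialize the instance data to JSON and store it under the instance ID.
        self._store[instance_id] = json.dumps(instance_data)

    def load(self, instance_id):
        # Look up by instance ID and deserialize the JSON value.
        raw = self._store.get(instance_id)
        return None if raw is None else json.loads(raw)

cache = InstanceCache()
cache.save("inst-1001", {"current_node": "approval 1", "state": "in-process"})
print(cache.load("inst-1001")["current_node"])  # -> approval 1
```

With this layout, fetching the data of any business process instance is a single key lookup, which is what gives the first database its high-concurrency read path.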
The service flow instances of the present application are generally generated from a service flow definition diagram provided by the user. The diagram may be defined using the JSON protocol, and different service flow definitions are distinguished by assigning each a flow definition ID; the definition process itself is not described in detail in the present application. In practice, for different types of service request messages, the workflow engine may obtain the corresponding service flow instance in different ways. Assuming the service request message is an initiation message, such as one that starts a recruitment process, the workflow engine may initialize a context class for a service process instance according to the service process definition diagram to obtain a recruitment process instance, and at the same time cache the instance data of the newly initiated recruitment process instance in the Redis database.
Then, during the execution of the recruitment process, the instance data can be read from the Redis database and applied to the concrete recruitment flow: the instance data is used to execute the service corresponding to each node in the recruitment process instance and to update the instance state of the corresponding node. In this way a recruiter can learn the latest state of each node in time and know which node the recruitment process is currently at, while the Redis database keeps the state of each candidate in the recruitment process.
Optionally, the service request message initiated by the user may also be of another type, such as any of the service request messages listed above for the service flow execution process. In that case, the workflow engine may instantiate the instance context class and the Redis flow instance class using the instance ID carried in the message to obtain the current processing node of the service flow. Then, through an interface polymorphism mechanism, the different node types update the service flow instance data and the instance states of the nodes; the updated instance data is encrypted and fed back, together with the updated instance states, to the client, so as to satisfy the service requirement of the user's request.
During the operation of the business process instance, the updated instance data that is generated may be cached in the Redis database (the first database in the system architecture), providing persistent storage of data produced while the instance executes. When the updated instance state indicates that the business process instance has finished, the application may synchronize the instance data held in the Redis database to the MySQL database (the second database in the system architecture) for storage and delete the cached copy from Redis, freeing storage space for the instance data of other business process instances.
Based on the description of the foregoing embodiment, and with reference to the system architectures shown in fig. 2 and fig. 3, fig. 5 shows a flow diagram of another business process processing method provided in this embodiment of the present application; the method may include:
step S201, detecting the instance state of each node in the business process instance;
step S202, according to the instance state, verifying whether the business process instance is finished, if not, entering step S203; if yes, go to step S204;
step S203, caching the instance data of the business process instance to a first database;
as described above, the first database may be a Redis database. The service process instance being unfinished means that the service corresponding to at least one node of the instance has not completed, that is, at least one node's service is still waiting to be executed.
Optionally, the Redis database of the present application may employ data structures such as Hash, List, and String to store instance data, using transactions to ensure data consistency while improving performance. Since Redis is a key-value database, when storing data related to the business process, such as business process definition information and the instance data of each business process instance, the corresponding unique identifier can be set as the key and the corresponding data stored as the value.
Specifically, a process definition ID, an instance ID, a to-do ID, and the like may be generated as globally unique master keys in a specified-step-size manner, that is, as the unique identifiers used to query the corresponding data; the present invention is not limited to this identifier generation manner. Taking the Hash data structure as an example, the data stored in the Redis database may mainly include the Hash values of storage objects such as the service process definition, the service process instance, the instance nodes, to-do tasks, handler to-dos, and the personal to-do center; the storage objects may have a storage relationship diagram as shown in fig. 7.
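The specified-step-size key generation mentioned above can be sketched roughly as follows; the names are hypothetical, and in Redis the atomic `INCRBY` command would advance the per-type counter instead of the dict used here.

```python
class KeyGenerator:
    """Generate globally unique master keys per object type by advancing a
    counter in fixed steps (Redis INCRBY would play this role)."""

    def __init__(self, step=1):
        self._counters = {}  # one counter per key prefix (object type)
        self._step = step

    def next_key(self, prefix):
        # e.g. prefix "inst" for instance IDs, "todo" for to-do IDs
        n = self._counters.get(prefix, 0) + self._step
        self._counters[prefix] = n
        return f"{prefix}:{n}"

gen = KeyGenerator(step=1)
print(gen.next_key("inst"))  # -> inst:1
print(gen.next_key("inst"))  # -> inst:2
print(gen.next_key("todo"))  # -> todo:1
```

Because the counter only ever moves forward by a fixed step, keys of the same type never collide, which is the property the master key needs for querying the corresponding data.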
Step S204, synchronize the instance data of the business process instance cached in the first database to the second database, and delete the instance data of the business process instance cached in the first database.
The second database may be, but is not limited to, a MySQL database.
In summary, in this embodiment the first database may be used to cache the instance data of unfinished business process instances, while the second database stores the instance data of finished ones, with the data of finished processes synchronized from the first database to the second at fixed times. This achieves dynamic expansion and contraction of the storage space and avoids caching all instance data in the first database, which would occupy its storage space and affect the normal operation of other business process instances. At the same time, when the instance data of a finished business process instance needs to be consulted, the second database is read directly, reducing the number of accesses to the first database and improving the availability of the write service. Moreover, thanks to the high-concurrency processing capability of the first database, the method achieves highly concurrent access to instance data while guaranteeing data consistency.
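The cache-then-synchronize behaviour of steps S201 through S204 can be sketched as below, with plain dicts standing in for the Redis cache and the MySQL store; this is a sketch under those stand-in assumptions, not the patented implementation.

```python
def sync_finished(hot, cold):
    """Move every finished instance from the cache (first database) to
    durable storage (second database): synchronize the instance data,
    then delete the cached copy to free space (steps S202-S204)."""
    for instance_id in list(hot):
        if hot[instance_id]["state"] == "finished":
            cold[instance_id] = hot.pop(instance_id)

hot = {"inst-1": {"state": "finished"}, "inst-2": {"state": "in-process"}}
cold = {}
sync_finished(hot, cold)
print(sorted(hot))   # -> ['inst-2']  (unfinished instance stays cached)
print(sorted(cold))  # -> ['inst-1']  (finished instance moved to storage)
```

Running this at a fixed interval is what keeps the cache holding only in-flight instances while completed ones remain queryable from the second database.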
The service processing method described in the above embodiments will now be illustrated with a specific application scenario, with reference to the service flow definition diagram shown in fig. 7. The flow diagram may be a flow definition determined by the user according to actual needs; in practice it is not limited to the diagram in fig. 7, which is used here only as an example. Assume the user's current processing node in this scenario is "approval 1" and the current handler is A; the corresponding service processing method is shown in fig. 8:
step S301, a client submits a message to be handled to a workflow engine;
it should be understood that step S301 may in fact be a to-do message submitted through a client by the handler of the current processing node; this embodiment takes the to-do message type as an example only. The service processing method for other types of service request messages is similar and is not detailed here.
Step S302, the workflow engine instantiates a process instance context class and a process instance class according to the process instance identifier in the to-do message;
in this embodiment, a unique process instance identifier is generally configured for each service process instance, such as the instance ID of the service process instance described above. The configuration method of the instance ID is not limited in the present application; it may be generated with a Redis auto-increment key, with the SnowFlake algorithm (a distributed ID generation algorithm), and so on.
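A hedged sketch of a SnowFlake-style generator is given below. The bit layout (41-bit timestamp, 10-bit worker ID, 12-bit sequence) follows the commonly cited Snowflake layout; the patent does not fix a particular layout, so treat this as one possible realization.

```python
import threading
import time

class Snowflake:
    """SnowFlake-style distributed ID generator (assumed bit layout)."""

    EPOCH = 1288834974657  # custom epoch in milliseconds

    def __init__(self, worker_id):
        assert 0 <= worker_id < 1024  # 10-bit worker ID
        self.worker_id = worker_id
        self.sequence = 0
        self.last_ms = -1
        self.lock = threading.Lock()

    def next_id(self):
        with self.lock:
            ms = int(time.time() * 1000)
            if ms == self.last_ms:
                self.sequence = (self.sequence + 1) & 0xFFF  # 12-bit sequence
                if self.sequence == 0:           # sequence exhausted this ms
                    while ms <= self.last_ms:    # spin to the next millisecond
                        ms = int(time.time() * 1000)
            else:
                self.sequence = 0
            self.last_ms = ms
            return ((ms - self.EPOCH) << 22) | (self.worker_id << 12) | self.sequence

gen = Snowflake(worker_id=1)
ids = [gen.next_id() for _ in range(3)]
print(ids[0] < ids[1] < ids[2])  # -> True (IDs are monotonically increasing)
```

The timestamp in the high bits makes IDs roughly time-ordered, which is convenient when instance IDs double as Redis keys.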
Referring to the service flow control class diagram shown in fig. 9, the application may instantiate a flow instance context class (FlowInstanceContext class) and a service flow instance class (FlowRedisContext class) using the instance ID in the service request message, such as the submit-to-do message SubmitMsg described above.
For validity check and message content encryption/decryption processes after the workflow engine receives the message, reference may be made to the description of the above embodiments, which is not repeated herein.
Step S303, the workflow engine determines that the current processing node is a node of the type to be handled, and acquires the type of the node of the type to be handled;
step S304, the workflow engine instantiates the type node class to be handled and stores the examination and approval result state of the current processing node;
when the current processing node is determined to be "approval 1", which is a to-do type node and corresponds to the TaskNode class (the to-do type node class), the workflow engine may instantiate the TaskNode class and run its Execute method to store the approval result state of the current processing node; the specific implementation process is not described in detail in this embodiment.
Step S305, the workflow engine acquires the next-level execution node;
in this embodiment, the next-level execution node of the current processing node, such as "branch 1" in fig. 7, may be acquired by executing the ChangeFlowState() method.
Step S306, the workflow engine determines that the next-level execution node is a branch type node, returns to step S305 to continue executing until the obtained next-level execution node is a to-do type node, and updates the next-level execution node of the to-do type to a current processing node;
in the present application, the gateway type nodes in a service flow instance may include the branch type node ForkNode, the countersign type node JoinNode, and the exclusive-or type node XorNode. The exclusive-or node generally adopts an automatic submission policy during flow circulation, that is, it defaults to the processed state with an approval result of passed. Besides these, the instance nodes of a service flow instance may further include the start node StartNode, the task node TaskNode, the end node EndNode, and so on; see the node types in the service flow definition diagram shown in fig. 7.
In the service flow instance obtained from the service flow definition diagram shown in fig. 7, the current processing node is determined to be "approval 1" and its next-level execution node is "branch 1". Since that node is a gateway type node, specifically a branch type node, the ChangeFlowState() method can be triggered automatically to obtain the next-level execution node, and the flow moves on to "branch 2"; similarly, it continues automatically to the next-level execution nodes "approval 2" and "approval 3", which are to-do type nodes. When the current processing node is updated to "branch 1", the next-level execution node reached by automatic circulation may also be "branch 3", in which case the flow likewise moves automatically to its next-level execution nodes "approval 4" and "approval 5", again to-do type nodes.
Step S307, the workflow engine updates the instance state of the current processing node;
step S308, the workflow engine feeds back the updated instance state of the node in the process instance to the client.
Specifically, in this embodiment, when the current processing node is determined to be a to-do type node, ChangeFlowState() may be called to instantiate the node's to-do state, that is, to update the instance state. The currently executing nodes of the service flow instance are determined to be approval 2, approval 3, approval 4, and approval 5, and the handler corresponding to each to-do node may be obtained at the same time. The updated instance states are then fed back to the client, and the states of the nodes in the service flow displayed by the client are refreshed to remind the handler of each to-do node to process it in time.
In addition, with reference to the foregoing embodiments, during service flow execution the instance data of an unfinished service flow instance may, after being updated, continue to be cached in the Redis database. The services corresponding to the nodes are executed in the order of the service flow (the node execution order in the service flow instance), and the instance state and instance data of each node are updated according to the type of the executed node. If the service flow is determined to be finished, the instance data of the service flow instance cached in the Redis database may be synchronized to the MySQL database for storage and the cached copy deleted. The execution order of the services corresponding to the nodes in the business process is determined by the business process definition diagram predefined by the user, which is not limited to the one shown in fig. 7.
Based on the above, in practical applications, corresponding instance node processing policies (which may be data processing algorithms) may be adopted for service flows defined in different scenarios to obtain the instance data of the flow. The instance node processing policies may include one or a combination of the closest branch node matching policy CBNM, a DFSP (Depth First Search Path) based inter-node path traversal policy, a rejection policy, an XOR circulation policy, a terminable policy, a Join countersign circulation policy, and the like, selected according to the requirements of the actual flow.
In practice, after the current processing node of a business process instance is executed, it must be determined whether the flow can move on to the next-level execution node. As shown in fig. 7, if the current processing node is the "countersign 2" node, deciding whether the flow can move to the "approval 6" node requires checking whether all nodes on all paths between the "countersign 2" node and its matching branch node have been executed. If so, the flow moves to the "approval 6" node; otherwise it must wait for the services of the not-yet-executed nodes to complete. Therefore, during the execution of a service flow instance it is often necessary to find the nearest branch node of the current processing node (here a countersign node or an exclusive-or node), all paths between the current processing node and that nearest branch node, and so on.
Based on this, the application describes how to find the nearest branch node in combination with the service flow definition diagram shown in fig. 7: specifically, when the service flow reaches the "countersign 1" node, the branch node returned should be "branch 2"; when it reaches the "countersign 2" node, the branch node returned should be "branch 1" rather than "branch 2" or "branch 3".
Since branch nodes are necessarily specified in pairs with countersign nodes and with exclusive-or nodes when the flow is defined, in a scenario where the closest branch node matching a countersign or exclusive-or node needs to be determined, this embodiment may use the CBNM policy; a specific implementation process is shown in the flow diagram of fig. 10. For this application scenario, the method is not limited to the service flow processing method provided in this embodiment. As shown in fig. 10, this embodiment mainly describes how the node matching policy CBNM obtains the closest branch node of the current processing node, with the following specific process:
step S401, inserting the current processing node of the business process instance into a stack;
a stack is a linear table with restricted operations: insertion and deletion are allowed only at one end (the top of the stack), so step S401 actually pushes the current processing node onto the stack. In this embodiment, as described above, the current processing node may be a countersign or exclusive-or type node in the service flow instance shown in fig. 7, that is, the "countersign 1" node or the "countersign 2" node.
Step S402, acquiring a preposed node set of the current processing node according to the business process example;
in this embodiment, the set of front nodes may include gateway type nodes in the service flow instance, such as the branch, countersign, and exclusive-or nodes in fig. 7.
Step S403, selecting any preposed node in the preposed node set as an undetermined node;
step S404, detecting whether the node to be determined is a branch type node; if yes, entering step S405; if not, executing step S407;
step S405, deleting the current processing node from the stack;
step S406, detecting whether the current stack is an empty stack, if not, returning to the step S402; if yes, returning to the node to be determined;
step S407, detecting whether the node to be determined is a countersign or an exclusive OR type node, if not, returning to the step S402; if yes, go to step S408;
step S408, the pending node is inserted into the stack as the current processing node, and the process returns to step S402.
It can be seen that in this embodiment, if the current processing node is the "countersign 1" node and its matched front node is a branch type node, then according to the steps described above, that branch node is returned. If the current processing node is "countersign 2", its front nodes include a countersign node and an exclusive-or node, that is, the front nodes flowing into the "countersign 2" node are the "countersign 1" node and the "xor 1" node. In the manner above, the nearest branch nodes matched by those two nodes are determined to be the "branch 2" node and the "branch 3" node respectively, and it can then be determined that the nearest branch node matching the "countersign 2" node should be the "branch 1" node. Thus, when the service flow reaches the "countersign 2" node, it can return to its nearest branch node, the "branch 1" node, so as to query all paths between the "branch 1" node and the "countersign 2" node.
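The matching logic can be sketched as follows. The adjacency below is a hypothetical reconstruction of the fig. 7 definition diagram inferred from the worked example in the text (the actual diagram may differ), and the recursive walk replaces the explicit stack of fig. 10 with the call stack; nested countersign/xor pairs are resolved first and skipped over, exactly as in the "countersign 2" example.

```python
# Hypothetical predecessor map reconstructed from the fig. 7 description.
PREDECESSORS = {
    "branch 2": ["branch 1"], "branch 3": ["branch 1"],
    "approval 2": ["branch 2"], "approval 3": ["branch 2"],
    "approval 4": ["branch 3"], "approval 5": ["branch 3"],
    "countersign 1": ["approval 2", "approval 3"],
    "xor 1": ["approval 4", "approval 5"],
    "countersign 2": ["countersign 1", "xor 1"],
    "approval 6": ["countersign 2"],
}
BRANCH = {"branch 1", "branch 2", "branch 3"}
JOIN = {"countersign 1", "countersign 2", "xor 1"}  # countersign / xor nodes

def nearest_branch(node):
    """Walk predecessors of a countersign/xor node until its paired branch
    node appears; a nested join is first resolved to its own branch and
    the walk continues behind that branch (pairs always nest)."""
    pred = PREDECESSORS[node][0]      # any predecessor path works: pairs nest
    while pred not in BRANCH:
        if pred in JOIN:              # nested pair: skip over it entirely
            pred = PREDECESSORS[nearest_branch(pred)][0]
        else:                         # plain to-do node: keep walking back
            pred = PREDECESSORS[pred][0]
    return pred

print(nearest_branch("countersign 1"))  # -> branch 2
print(nearest_branch("countersign 2"))  # -> branch 1
```

The key design point mirrored here is that branch/join pairs are declared together at flow-definition time, so matching reduces to balanced-bracket resolution over the predecessor chain.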
It should be noted that the method of querying the respective nearest branch nodes of the countersign and exclusive-or nodes in a business process instance is not limited to the one described in this embodiment.
Continuing the application scenario from the above embodiment, the present application may use the inter-node path traversal policy based on the DFSP algorithm to obtain the paths between a countersign node (or exclusive-or node) and its nearest branch node (the matched branch node). Still taking the service flow definition diagram shown in fig. 7 as an example, and referring to the flow diagram shown in fig. 11, the process of obtaining the paths between nodes of different types may include, but is not limited to, the following steps:
step S501, obtaining service flow description information, determining a branch node in the service flow description information, and inserting the branch node into a stack;
step S502, acquiring a node set which is not accessed after the stack top node of the current stack;
if there are unvisited nodes behind the stack top node, at least one unqueried path still exists between the branch node and the countersign node; otherwise, all paths between the two nodes have been found.
Step S503, detecting whether the next-level execution node of the branch node is the countersign node in the service flow description information, if not, entering step S504; if so; entering step S505;
step S504, insert the next level executive node into the stack, return to step S501, until the stack is empty;
step S505, a path between the branch node and the countersign node is obtained, and the obtained path is output.
Taking the acquisition of all paths between the "branch 1" node and the "countersign 2" node as an example: push the "branch 1" node; if an unvisited node set exists behind the stack top node, detect whether the next-level execution node of "branch 1" is the "countersign 2" node. If so, a path from "branch 1" to "countersign 2" has been obtained; if not, push that next-level execution node, then continue in the same manner to check whether unvisited nodes exist behind the stack top and whether its next-level execution node is the "countersign 2" node, until all paths between "branch 1" and "countersign 2" have been obtained, that is, until no unvisited next-level execution node remains.
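A hedged sketch of this depth-first path enumeration follows, on the same hypothetical successor map inferred from the fig. 7 description (the real diagram may differ); the explicit stack mirrors steps S501 through S505.

```python
# Hypothetical successor map reconstructed from the fig. 7 description.
SUCCESSORS = {
    "branch 1": ["branch 2", "branch 3"],
    "branch 2": ["approval 2", "approval 3"],
    "branch 3": ["approval 4", "approval 5"],
    "approval 2": ["countersign 1"], "approval 3": ["countersign 1"],
    "approval 4": ["xor 1"], "approval 5": ["xor 1"],
    "countersign 1": ["countersign 2"], "xor 1": ["countersign 2"],
}

def all_paths(start, end):
    """Depth-first enumeration of every path from `start` to `end`;
    each stack entry carries the node plus the path taken to reach it."""
    paths, stack = [], [(start, [start])]
    while stack:
        node, path = stack.pop()
        for nxt in SUCCESSORS.get(node, []):
            if nxt == end:
                paths.append(path + [nxt])   # a complete path is output
            elif nxt not in path:            # unvisited node on this path
                stack.append((nxt, path + [nxt]))
    return paths

paths = all_paths("branch 1", "countersign 2")
print(len(paths))  # -> 4 (two via countersign 1, two via xor 1)
```

These enumerated paths are exactly what the countersign circulation and terminable policies below consume when deciding whether the flow may proceed.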
It should be noted that the path query method between nodes of different types is not limited to the one described in this embodiment; other depth-first optimization algorithms may also be used, and the details are not described in this application.
In a service flow, the situation where the current processing node must be sent back to some previously executed node is usually encountered; at this point a rejection policy is executed. Specifically, take the application scenario "the current to-do node rejects back to any previously processed node" as an example: referring to the service flow definition diagram shown in fig. 7, the flow may be rejected from the "approval 6" node back to the "approval 2" node. The implementation process using the rejection policy may be:
acquire the next-level execution node of the rejected-to node from the service flow description information and detect whether it is a termination node or an end node; if so, the process ends. If not, add the instantiated next-level execution node to the list of nodes to be adjusted, take it as the current processing node, and continue detecting its next-level execution node until the current processing node is a termination or end node. Finally, adjust the state of every node in the list to the rejected state and adjust the state of the rejected-to node to the in-process state.
It can be seen that when the rejection policy is executed in this embodiment, the instance states of all nodes between the "approval 6" node and the "approval 2" node may be updated to the rejected state, the processing state of the "approval 2" node is adjusted to "in process", and the to-do for the handler of the "approval 2" step is regenerated.
The exclusive-or node in a service flow instance may be executed according to an XOR circulation policy: for example, when an XOR type node is encountered, all unprocessed to-do branches within the source branch that has the current XOR node as its final node are cleaned up. For example, when execution reaches the service flow description information of the "xor 1" node, the XOR circulation policy used in the present application, that is, the policy content implementing XOR circulation in the service flow, may include:
taking the current exclusive-or node as the end node: use the closest branch node matching policy described in the above embodiment to search forward for the branch node that most closely matches the exclusive-or node; taking that matched branch node as the start node and the exclusive-or node as the end node, query all paths between the two nodes according to the inter-node path traversal policy given above; forcibly terminate the to-do nodes on each branch that have not yet been processed and adjust their node states; then obtain the next-level execution node and circulate automatically. The processing state of a node in a service flow instance may generally be one of four states: initial, in-process, processed, and rejected.
For a node whose processing result is failed or rejected during the execution of the service flow instance, the application generally executes the terminable policy; in this case it may automatically determine whether the entire service flow instance can be terminated. For example, in the service flow definition diagram shown in fig. 7, suppose the "approval 4" node fails; for this application scenario the following method may be used:
firstly, all paths between the start node and the termination node can be obtained using the inter-node path traversal policy described above. Secondly, it is judged whether an unterminated path exists among them, that is, whether the service of some node on a path is unfinished or not yet executed. If an unterminated path exists, the process cannot be terminated; the current processing node is maintained and the services of the nodes on the unterminated paths are awaited. If none exists, the instance state of the termination node is updated and the process ends.
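A minimal sketch of that terminability check follows; the node names and state values are hypothetical, and the paths would come from the inter-node path traversal policy in practice.

```python
def can_terminate(paths, states):
    """The flow may end only when no path still contains a pending node:
    a path is unterminated if any node on it is still initial/in-process."""
    pending = {"initial", "in-process"}
    return not any(states.get(n) in pending
                   for path in paths for n in path)

paths = [["approval 2", "countersign 1"], ["approval 4", "xor 1"]]
states = {"approval 2": "processed", "countersign 1": "processed",
          "approval 4": "in-process", "xor 1": "initial"}
print(can_terminate(paths, states))  # -> False: wait at the current node
states.update({"approval 4": "processed", "xor 1": "processed"})
print(can_terminate(paths, states))  # -> True: update the termination node
```

Note that a failed/rejected node does not count as pending here, so a flow with a failed branch can still terminate once every other path has run to completion, which is the behaviour the policy describes.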
With reference to the foregoing embodiments, for a countersign type node in a service flow instance a Join countersign circulation policy may be executed to determine its circulation: when a countersign type node is encountered, it may be automatically determined whether the service flow instance can move on to the next step. Still taking the service flow definition diagram shown in fig. 7 as an example, execution has reached the "countersign 2" node and it must be decided whether the flow can move on to "approval 6". The specific processing is as follows:
obtain the branch node most closely matching the countersign node, that is, "branch 1" for "countersign 2", and query all paths between the branch node and the countersign node. Obtain the nodes at the level before the countersign node whose approvals have passed, and filter out the paths containing those passed nodes from all queried paths. Then obtain the uninstantiated nodes among the next-level nodes of the branch node (such as approval 2 and approval 3) and filter out the paths containing them from the remaining paths. Finally, judge whether the last executed branch node on the remaining paths has passed; if so, the flow circulates automatically to the next-level node of the countersign node, and if not, it waits at the countersign node until that node passes.
It should be noted that each branch processing procedure of the business process may adopt the corresponding policy implementation exemplified above, but is not limited to the policies described herein; they may be flexibly adjusted according to actual needs.
In summary, during the execution of the service flow, when the current processing node is determined to be a countersign node or an exclusive-or node, its closest matching branch node may be found with the closest branch node matching policy. After traversing the paths between the closest matching branch node and the current processing node, if an unterminated path exists among them, the flow stays at the current processing node and executes the services of the nodes on the unterminated paths; if no unterminated path exists, the instance state of the current processing node is updated and the flow moves to the next-level execution node.
Based on the above embodiments, the business process processing method implemented by the new workflow engine provided in this application was stress-tested on a process interface on a computer configured with a Windows i7 8-core 8-thread CPU, 8 GB of memory, and Redis 2.8 for Windows, giving the test results shown in fig. 12. In fig. 12, 1000/20 means that a total of 1000 requests were initiated, 20 at a time concurrently, and so on for the subsequent data on the abscissa. It can be seen that the concurrency performance of the workflow engine of the present application is 900-1000 QPS (Query Per Second), a significant improvement over the performance of K2 and Activiti shown in fig. 13.
In addition, from the viewpoint of process tracing, referring to the example business process diagram from the production environment shown in fig. 14, the application describes the business process graphically, making the instance state display of a business process instance more intuitive. Furthermore, the display of nodes in different states can be adjusted, for example by changing the background color of the current processing node so that it stands out from the others: an executed node may be shown in green, an unexecuted node in blue, and so on.
In addition, from the viewpoint of interface request latency in the production environment described above, extracting part of the most recent access logs yields the request latency statistics shown in fig. 15. As can be seen from fig. 15, the overall performance of each interface of the workflow engine of the present application is stable, with average response times mostly around 10 milliseconds.
Referring to fig. 16, a schematic structural diagram of a business process processing apparatus provided in an embodiment of the present application is shown, where the apparatus may include:
the detection module 21 is configured to detect an instance state of each node in a business process instance;
the first storage module 22 is configured to, when the instance status indicates that the business process instance is not ended, cache instance data of the business process instance in a first database;
a second storage module 23, configured to synchronize, when the instance status indicates that the business process instance is ended, the instance data of the business process instance cached in the first database to a second database, and delete the instance data of the business process instance cached in the first database.
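The storage routing performed by the two storage modules can be sketched as follows. This is a minimal illustration in Python, with plain dictionaries standing in for the key value database (e.g. Redis) and the relational database; all names and structures are assumptions for illustration, not the patented implementation.

```python
# Hypothetical sketch: dicts stand in for the key-value cache (first
# database) and the relational store (second database).
kv_store = {}        # first database: running business process instances
relational_db = {}   # second database: finished instances

def persist_instance(instance_id, instance_data, node_states):
    """Cache the instance while any node is still unfinished; once every
    node has ended, synchronize it to the relational database and delete
    it from the cache to reduce key-value memory occupation."""
    ended = all(state == "ended" for state in node_states.values())
    if not ended:
        kv_store[instance_id] = instance_data          # not ended: cache
    else:
        relational_db[instance_id] = instance_data     # ended: synchronize
        kv_store.pop(instance_id, None)                # then delete cache entry
    return ended
```

For example, an approval instance stays in the cache across requests until its last node ends, after which only the relational copy remains.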
Optionally, as shown in fig. 17, the apparatus may further include:
a message obtaining module 24, configured to obtain a service request message, where the service request message is generated by calling a corresponding message interface;
a message response module 25, configured to respond to the service request message to obtain a service process instance of the current request;
an example node processing module 26, configured to execute, by using the example data of the business process example, a service corresponding to each node in the business process example, and update an example state of the corresponding node;
and a data transmission module 27, configured to feed back the instance data and the updated instance state of the node to the client.
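The four modules above form a request pipeline: obtain the message, resolve it to a business process instance, execute and update nodes, and feed the result back to the client. A hedged Python sketch of that pipeline, in which all field names and structures are assumed for illustration:

```python
def handle_service_request(message, instances):
    """Resolve the service request message to its business process
    instance, execute the next pending node, and return the instance
    data and updated node states to the client."""
    # message response: obtain the business process instance of the request
    instance = instances[message["instance_id"]]
    # instance node processing: run the service of the first pending node
    for node in instance["nodes"]:
        if node["state"] == "pending":
            node["state"] = "executed"   # update the instance state of the node
            break
    # data transmission: feed back instance data and updated node states
    return {"data": instance["data"],
            "states": {n["id"]: n["state"] for n in instance["nodes"]}}
```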
Optionally, the message response module 25 may include:
an initialization unit, configured to initialize an instance context class according to the type of the service request message, so as to obtain a corresponding business process instance;
accordingly, the instance node processing module 26 may include:
the first determining unit is used for determining the current processing node of the business process instance;
the first updating unit is used for updating the instance state and the instance data of the current processing node according to the type of the current processing node;
and the second updating unit is used for updating the instance state and the instance data of the corresponding next-level execution node according to the execution sequence of the nodes in the business process instance and the type of the next-level execution node of the current processing node.
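The update order these units describe (current processing node first, then its next-level execution node, each according to its type) might be sketched like this; the node types and state names are assumptions, not taken from the patent:

```python
def advance_instance(nodes, cursor):
    """Update the current processing node by type, then move to the
    next-level execution node in execution order and mark it running."""
    current = nodes[cursor]
    # update the instance state of the current processing node by its type
    if current["type"] in ("task", "approval"):
        current["state"] = "ended"
    # update the next-level execution node according to execution order
    if cursor + 1 < len(nodes):
        nodes[cursor + 1]["state"] = "running"
        return cursor + 1
    return cursor  # last node: the instance itself can now be ended
```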
Optionally, the apparatus may further include:
the information acquisition module is used for acquiring the service flow description information;
a business process implementation module, configured to obtain a business process matching the business process description information by using at least one instance node processing policy, where the business process includes flows between different nodes of the business process instance;
wherein the instance node processing policy comprises a combination of one or more of a nearest branch node matching policy, an inter-node path traversal policy, a rejectable policy, an exclusive-or circulation policy, a terminable policy, and a countersign circulation policy.
In this embodiment, the service process implementation module may include:
a nearest matching branch node determining unit, configured to determine, by using a nearest branch node matching policy, the nearest matching branch node of the current processing node in the case that the current processing node of the business process instance is a countersign node or an exclusive-or node;
a path obtaining unit, configured to traverse a path between the most recently matched branch node and the current processing node;
a first processing unit, configured to, if an unterminated path exists in the path, maintain the unterminated path at the current processing node, and execute a service corresponding to each node in the unterminated path;
and the third updating unit is used for updating the instance state of the current processing node if the path does not have the unterminated path.
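The branch handling performed by these units can be illustrated with a small sketch: given the paths traversed between the nearest matching branch node and the current countersign or exclusive-or node, the instance either holds at the current node while unterminated paths run, or ends the node once every path has ended. The path encoding and state names here are assumptions for illustration:

```python
def resolve_branch(current_node, paths, node_states):
    """paths: lists of node ids between the nearest matching branch
    node and the current processing node."""
    # inter-node path traversal: collect paths with any unterminated node
    unended = [p for p in paths
               if any(node_states[n] != "ended" for n in p)]
    if unended:
        # keep the instance held at the current processing node while
        # the services on the unterminated paths execute
        return "hold", unended
    # all paths ended: update the instance state of the current node
    node_states[current_node] = "ended"
    return "advance", []
```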
Furthermore, the apparatus may further include:
and the data interaction module is used for realizing data interaction with the service system by adopting a synchronous interaction mode.
For the process by which each virtual module implements its corresponding function, reference may be made to the description of the method embodiments above, and details are not repeated in this embodiment.
The embodiment of the present application further provides a storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the business process processing method.
An embodiment of the present application further provides a computer device, where the computer device may be the above-mentioned application server, and as shown in fig. 18, a hardware structure of the computer device may include: a communication interface 31, a memory 32, and a processor 33;
in the embodiment of the present application, the communication interface 31, the memory 32, and the processor 33 may communicate with one another through a communication bus, and there may be at least one of each of the communication interface 31, the memory 32, the processor 33, and the communication bus.
Alternatively, the communication interface 31 may be an interface of a communication module, such as an interface of a GSM module;
the processor 33 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
The memory 32 may comprise high-speed RAM, and may also include non-volatile memory, such as at least one disk memory.
The memory 32 stores a computer program, and the processor 33 calls the computer program stored in the memory 32 to implement the steps of the business process processing method applied to the computer device; for the specific implementation process, reference may be made to the description of the corresponding parts of the above method embodiments.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device and the computer equipment disclosed by the embodiment correspond to the method disclosed by the embodiment, so that the description is relatively simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A business process processing method, the method comprising:
detecting the instance state of each node in the business process instance;
under the condition that the instance state indicates that the business process instance is not finished, caching instance data of the business process instance to a first database, wherein the first database is a key-value database, so that parallel work of the unfinished business process instance is ensured by utilizing the single-thread atomic operation characteristic of the key-value database and the high concurrency performance of an interface of the key-value database;
and under the condition that the instance state indicates that the business process instance is finished, synchronizing the instance data of the business process instance cached in the first database to a second database, and deleting the instance data of the business process instance cached in the first database so as to reduce the occupation of the memory space of the key-value database, wherein the second database is a relational database.
2. The method of claim 1, further comprising:
acquiring a service request message, wherein the service request message is generated by calling a corresponding message interface;
responding to the service request message to obtain a service process instance of the current request;
executing the service corresponding to each node in the business process example by using the example data of the business process example, and updating the example state of the corresponding node;
and feeding back the instance data and the updated instance state of the node to the client.
3. The method according to claim 2, wherein the responding to the service request message to obtain a service process instance of the current request, and using instance data of the service process instance to execute a service corresponding to each node in the service process instance and update an instance state of the corresponding node comprises:
initializing an instance context class according to the type of the service request message to obtain a corresponding service flow instance;
determining a current processing node of the business process instance;
updating the instance state and the instance data of the current processing node according to the type of the current processing node;
and updating the instance state and the instance data of the corresponding next-level execution node according to the execution sequence of the nodes in the business process instance and the type of the next-level execution node of the current processing node.
4. The method of claim 2, further comprising:
acquiring service flow description information;
obtaining a business process matched with the business process description information by utilizing at least one instance node processing strategy, wherein the business process comprises the circulation between different nodes of the business process instance;
wherein the instance node processing policy comprises a combination of one or more of a nearest branch node matching policy, an inter-node path traversal policy, a rejectable policy, an exclusive-or circulation policy, a terminable policy, and a countersign circulation policy.
5. The method according to claim 4, wherein the obtaining the business process matching the business process description information by using at least one instance node processing policy comprises:
under the condition that the current processing node of the business process instance is a countersigning node or an exclusive OR node, determining a nearest matching branch node of the current processing node by using a nearest branch node matching strategy;
traversing a path between the most recently matched branch node and the current processing node;
if an unterminated path exists in the path, maintaining the unterminated path at the current processing node, and executing the service corresponding to each node in the unterminated path;
and if the path does not have an unterminated path, updating the instance state of the current processing node.
6. The method according to any one of claims 2-5, further comprising:
and a synchronous interaction mode is adopted to realize data interaction with the service system.
7. A business process processing apparatus, the apparatus comprising:
the detection module is used for detecting the instance state of each node in the business process instance;
the first storage module is used for caching the example data of the business process example to a first database under the condition that the example state indicates that the business process example is not finished, wherein the first database is a key value database, so that the parallel work of the unfinished business process example is ensured by utilizing the single-thread atomic operation characteristic of the key value database and the high concurrency performance of an interface of the key value database;
and the second storage module is used for synchronizing the instance data of the business process instance cached in the first database to a second database and deleting the instance data of the business process instance cached in the first database under the condition that the instance state indicates that the business process instance is ended, so as to reduce the occupation of the memory space of the key value database, and the second database is a relational database.
8. The apparatus of claim 7, further comprising:
the information acquisition module is used for acquiring the service flow description information;
a business process implementation module, configured to obtain a business process matching the business process description information by using at least one instance node processing policy, where the business process includes flows between different nodes of the business process instance;
wherein the instance node processing policy comprises a combination of one or more of a nearest branch node matching policy, an inter-node path traversal policy, a rejectable policy, an exclusive-or circulation policy, a terminable policy, and a countersign circulation policy.
9. A storage medium having stored thereon a computer program for execution by a processor to perform the steps of the business process handling method of any one of claims 1-6.
10. A computer device, characterized in that the computer device comprises:
a communication interface;
a memory for storing a computer program for implementing the business process handling method of any one of claims 1-6;
a processor configured to call and execute the computer program stored in the memory to implement the steps of the business process processing method according to any one of claims 1-6.
CN201910313907.6A 2019-04-18 2019-04-18 Business process processing method and device, storage medium and computing equipment Active CN110032571B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910313907.6A CN110032571B (en) 2019-04-18 2019-04-18 Business process processing method and device, storage medium and computing equipment

Publications (2)

Publication Number Publication Date
CN110032571A CN110032571A (en) 2019-07-19
CN110032571B true CN110032571B (en) 2023-04-18

Family

ID=67239109

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910313907.6A Active CN110032571B (en) 2019-04-18 2019-04-18 Business process processing method and device, storage medium and computing equipment

Country Status (1)

Country Link
CN (1) CN110032571B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110502523A (en) * 2019-08-01 2019-11-26 广东浪潮大数据研究有限公司 Business datum storage method, device, server and computer readable storage medium
CN110795437B (en) * 2019-11-04 2023-01-31 泰康保险集团股份有限公司 Service processing method, system, device and computer readable storage medium
CN111105210A (en) * 2019-12-19 2020-05-05 北京金山云网络技术有限公司 Approval task processing method and device, electronic equipment and storage medium
CN113127380A (en) * 2019-12-31 2021-07-16 华为技术有限公司 Method for deploying instances, instance management node, computing node and computing equipment
CN111241455B (en) * 2020-01-22 2023-08-25 抖音视界有限公司 Data processing apparatus, computer device, and storage medium
CN111324629B (en) * 2020-02-19 2023-08-15 望海康信(北京)科技股份公司 Service data processing method and device, electronic equipment and computer storage medium
CN111309294B (en) 2020-02-29 2022-06-07 苏州浪潮智能科技有限公司 Business processing method and device, electronic equipment and storage medium
CN113360365B (en) * 2020-03-03 2024-04-05 北京同邦卓益科技有限公司 Flow test method and flow test system
CN111652468A (en) * 2020-04-27 2020-09-11 平安医疗健康管理股份有限公司 Business process generation method and device, storage medium and computer equipment
CN111562982B (en) * 2020-04-28 2023-09-19 北京金堤科技有限公司 Method and device for processing request data, computer readable storage medium and electronic equipment
CN111651522B (en) * 2020-05-27 2023-05-19 泰康保险集团股份有限公司 Data synchronization method and device
CN112347103B (en) * 2020-11-05 2024-04-12 深圳市极致科技股份有限公司 Data synchronization method, device, electronic equipment and storage medium
CN112685499A (en) * 2020-12-30 2021-04-20 珠海格力电器股份有限公司 Method, device and equipment for synchronizing process data of work service flow
CN112785263A (en) * 2021-01-22 2021-05-11 山西青峰软件股份有限公司 Method and system for dynamically generating flow model by workflow engine
CN113282585B (en) * 2021-05-28 2023-12-29 浪潮通用软件有限公司 Report calculation method, device, equipment and medium
CN113312181A (en) * 2021-06-21 2021-08-27 浪潮云信息技术股份公司 High-concurrency workflow approval method based on activiti custom form
CN114510495B (en) * 2022-04-21 2022-07-08 北京安华金和科技有限公司 Database service data consistency processing method and system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102043625A (en) * 2010-12-22 2011-05-04 中国农业银行股份有限公司 Workflow operation method and system
CN105760452A (en) * 2016-02-04 2016-07-13 深圳市嘉力达实业有限公司 Method and system for collection, processing and storage of high-concurrency mass data
CN106528898A (en) * 2017-01-04 2017-03-22 泰康保险集团股份有限公司 Method and device for converting data of non-relational database into relational database
CN107291887A (en) * 2017-06-21 2017-10-24 北京中泰合信管理顾问有限公司 LNMP frameworks realize the process management system of software implementation
US9852220B1 (en) * 2012-10-08 2017-12-26 Amazon Technologies, Inc. Distributed workflow management system
CN108228252A (en) * 2017-12-26 2018-06-29 阿里巴巴集团控股有限公司 Business processing and operation flow configuration method, device and equipment
CN109150929A (en) * 2017-06-15 2019-01-04 北京京东尚科信息技术有限公司 Data request processing method and apparatus under high concurrent scene

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521712B (en) * 2011-12-27 2015-09-23 东软集团股份有限公司 A kind of process instance data processing method and device
CN107783974B (en) * 2016-08-24 2022-04-08 阿里巴巴集团控股有限公司 Data processing system and method
CN107133309B (en) * 2017-04-28 2020-04-07 东软集团股份有限公司 Method and device for storing and querying process example, storage medium and electronic equipment
CN107220310A (en) * 2017-05-11 2017-09-29 中国联合网络通信集团有限公司 A kind of database data management system, method and device
CN108319654B (en) * 2017-12-29 2021-12-21 中国银联股份有限公司 Computing system, cold and hot data separation method and device, and computer readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Chen Wu et al. A NoSQL–SQL Hybrid Organization and Management Approach for Real-Time Geospatial Data: A Case Study of Public Security Video Surveillance. International Journal of Geo-Information. 2017, 1-15. *
Lu Jinchen. Research on an E-Government Service Platform Framework Based on PaaS. China Master's Theses Full-text Database (Information Science and Technology). 2016, I138-747. *

Also Published As

Publication number Publication date
CN110032571A (en) 2019-07-19

Similar Documents

Publication Publication Date Title
CN110032571B (en) Business process processing method and device, storage medium and computing equipment
US11444783B2 (en) Methods and apparatuses for processing transactions based on blockchain integrated station
US11461310B2 (en) Distributed ledger technology
CN112153085B (en) Data processing method, node and block chain system
CN108683668B (en) Resource checking method, device, storage medium and equipment in content distribution network
CN103064960B (en) Data base query method and equipment
KR20210071942A (en) Transaction processing methods, devices and devices, and computer storage media
CN110289999B (en) Data processing method, system and device
US11783339B2 (en) Methods and apparatuses for transferring transaction based on blockchain integrated station
US20230316273A1 (en) Data processing method and apparatus, computer device, and storage medium
US11463553B2 (en) Methods and apparatuses for identifying to-be-filtered transaction based on blockchain integrated station
US11665234B2 (en) Methods and apparatuses for synchronizing data based on blockchain integrated station
US11336660B2 (en) Methods and apparatuses for identifying replay transaction based on blockchain integrated station
CN111400112A (en) Writing method and device of storage system of distributed cluster and readable storage medium
CN112948842A (en) Authentication method and related equipment
US20230102617A1 (en) Repeat transaction verification method, apparatus, and device, and medium
CN108073823A (en) Data processing method, apparatus and system
CN111399993A (en) Cross-chain implementation method, device, equipment and medium for associated transaction request
CN107203890B (en) Voucher data issuing method, device and system
CN115422184A (en) Data acquisition method, device, equipment and storage medium
CN115361374A (en) File transmission method and device and electronic equipment
WO2022143242A1 (en) Blockchain-based transaction distribution executing method and apparatus, server, and storage medium
Lu et al. A cache enhanced endorser design for mitigating performance degradation in hyperledger fabric
CN109818767B (en) Method and device for adjusting Redis cluster capacity and storage medium
CN112181599A (en) Model training method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant