CN114816736A - Service processing method, device, equipment and medium - Google Patents

Service processing method, device, equipment and medium Download PDF

Info

Publication number
CN114816736A
CN114816736A (application CN202210355433.3A)
Authority
CN
China
Prior art keywords
module
flow
level
business
service
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210355433.3A
Other languages
Chinese (zh)
Inventor
郝婧雯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202210355433.3A priority Critical patent/CN114816736A/en
Publication of CN114816736A publication Critical patent/CN114816736A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 — Multiprogramming arrangements
    • G06F 9/50 — Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 — Techniques for rebalancing the load in a distributed system
    • G06F 9/5005 — Allocation of resources, e.g. of the central processing unit [CPU], to service a request
    • G06F 9/5027 — Allocation of resources to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/505 — Allocation of resources to service a request, the resource being a machine, considering the load

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present disclosure provides a service processing method that can be applied to the field of computer technology and the field of finance. The method comprises the following steps: receiving a service processing request of a service to be processed, wherein the service processing request carries a target service scene identifier and target service parameters; in response to the service processing request, determining a target process model from a plurality of process models according to the target service scene identifier, wherein the process model comprises a plurality of process modules located at N levels, N being a positive integer greater than 2; generating a business process instance of the service to be processed based on the target process model and the target service parameters; and allocating the plurality of process modules of the business process instance to a plurality of hosts of a distributed system using a preset load balancing policy, wherein the plurality of hosts process the service to be processed. In addition, the disclosure also provides a service processing apparatus, a device, and a medium.

Description

Service processing method, device, equipment and medium
Technical Field
The present disclosure relates to the field of computer technology and the field of finance, and more particularly, to a business processing method, a business processing apparatus, an electronic device, a storage medium, and a program product.
Background
Activiti is an Apache-licensed open-source BPM (Business Process Management) platform that supports the BPMN 2.0 standard and is well suited to process-intensive business systems. Activiti not only persists in-flight process data but also exposes seven service interfaces: the process repository service, identity service, runtime service, task service, form service, history service, and engine management service. In addition, Activiti integrates easily with Spring, which simplifies the management of transactions and expressions. Although Activiti serves process-intensive business systems well, in the course of developing the concept of the present disclosure the inventors found at least the following problem in the related art: in a distributed application scenario, a process model obtained through service process configuration with a process-data two-layer model structure has coarse cutting granularity and is difficult to partition reasonably, so that load balancing across hosts is hard to achieve, and the availability and flexibility of the business process system are poor.
Disclosure of Invention
In view of the above, the present disclosure provides a service processing method, a service processing apparatus, an electronic device, a readable storage medium, and a computer program product.
One aspect of the present disclosure provides a service processing method, including: receiving a service processing request of a service to be processed, wherein the service processing request carries a target service scene identifier and target service parameters; in response to the service processing request, determining a target process model from a plurality of process models according to the target service scene identifier, wherein the process model comprises a plurality of process modules located at N levels, N being a positive integer greater than 2; generating a business process instance of the service to be processed based on the target process model and the target service parameters; and allocating the plurality of process modules of the business process instance to a plurality of hosts of a distributed system using a preset load balancing policy, wherein the hosts process the service to be processed.
According to an embodiment of the present disclosure, the method further includes: constructing the process model in response to receiving a configuration operation of a user; in response to a storage instruction for the process model, acquiring the service scene identifier carried in the storage instruction; and storing the process model into a database with the service scene identifier as the primary key.
According to an embodiment of the present disclosure, constructing the process model in response to receiving the configuration operation of the user includes: for a process module at the Nth level, configuring module attributes of the process module in response to receiving a first configuration operation of the user, wherein the module attributes include a module name, a calling address of an executable program, and a resource occupancy attribute; for a process module at any level from level 1 to level N-1, configuring the module attributes and flow attributes of the process module in response to receiving a second configuration operation of the user; and constructing the process model based on the module attributes of the process modules at the Nth level and the module attributes and flow attributes of all process modules from level 1 to level N-1.
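The N-level structure described above can be sketched as a small Python model. This is an illustrative assumption about how such a model might be represented; the class and field names are not identifiers from the disclosure.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FlowModule:
    """One process module; fields mirror the module attributes above."""
    module_name: str
    call_address: str          # calling address of the executable program
    resource_occupancy: float  # resource occupancy attribute
    level: int
    # Flow attribute: the child modules at the next (deeper) level.
    children: List["FlowModule"] = field(default_factory=list)

def build_demo_model() -> FlowModule:
    """Build a minimal 3-level model (N = 3): leaves at level 3 carry
    concrete attributes; modules at levels 1-2 also carry flow attributes
    in the form of their child lists."""
    leaf_a = FlowModule("verify", "svc://verify", 2.0, level=3)
    leaf_b = FlowModule("debit", "svc://debit", 3.0, level=3)
    step = FlowModule("payment-step", "svc://payment", 0.0, level=2,
                      children=[leaf_a, leaf_b])
    root = FlowModule("transfer-flow", "svc://transfer", 0.0, level=1,
                      children=[step])
    return root

def depth(module: FlowModule) -> int:
    """Number of levels N spanned by the model rooted at `module`."""
    if not module.children:
        return 1
    return 1 + max(depth(c) for c in module.children)
```

A model built this way can be walked top-down to instantiate a business process, with each leaf resolving to a callable program address.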
According to an embodiment of the present disclosure, the first configuration operation includes an input operation. For the process module at the Nth level, configuring the module attributes of the process module in response to receiving the first configuration operation of the user includes: in response to an input operation by the user, configuring the module attributes of the process module at the Nth level based on the input information carried in the input operation.
According to an embodiment of the present disclosure, the second configuration operation includes a selection operation and a connection operation. For any process module from level 1 to level N-1, configuring the module attributes and flow attributes of the process module in response to receiving the second configuration operation of the user includes: for a process module at the Mth level, determining, in response to a selection operation of the user, at least one process module at the (M+1)th level associated with the process module at the Mth level, wherein M is a positive integer smaller than N; determining a module attribute of the process module at the Mth level based on the module attributes of the at least one process module at the (M+1)th level; drawing, on a display interface, at least one graphic associated with the at least one process module at the (M+1)th level; determining, in response to a connection operation of the user on the at least one graphic, a relationship type corresponding to the connection operation; and configuring the flow attribute of the process module at the Mth level based on the relationship type.
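The step of turning a user's connection operation into a flow attribute can be illustrated with a hedged sketch. The two relationship types and the tuple encoding of a connection are assumptions made for the example; the disclosure does not enumerate its relationship types.

```python
# Hypothetical relationship types a connection operation might denote.
SEQUENTIAL = "sequential"   # a simple arrow: run children in order
PARALLEL = "parallel"       # a fork/join: run children concurrently

def configure_flow_attribute(connections):
    """Derive the Mth-level module's flow attribute from the connection
    operations drawn between its (M+1)th-level child graphics.
    `connections` is a list of (source, target, kind) tuples."""
    kinds = {kind for _, _, kind in connections}
    if kinds == {SEQUENTIAL}:
        # Purely sequential: record the execution order of children.
        order = [src for src, _, _ in connections] + [connections[-1][1]]
        return {"type": SEQUENTIAL, "order": order}
    # Any other mix of connections is treated as parallel here.
    return {"type": PARALLEL}
```

For example, drawing one sequential arrow from "verify" to "debit" yields a flow attribute ordering those two children.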
According to an embodiment of the present disclosure, for a process module at the Mth level, the resource occupancy attribute of the process module at the Mth level is characterized by a weighted sum of the resource occupancy attributes of the at least one process module at the (M+1)th level associated with it. Allocating the plurality of process modules of the business process instance to the plurality of hosts of the distributed system using the preset load balancing policy includes: allocating the process modules to the plurality of hosts of the distributed system based on the resource occupancy attributes of the process modules.
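As a rough illustration of the weighted-sum property and the occupancy-based allocation, the following sketch computes a module's resource occupancy recursively and places modules greedily on the least-loaded host. Uniform weights and the greedy rule are assumptions for the example; the disclosure fixes neither.

```python
def resource_occupancy(module, weights=None):
    """A leaf carries its own occupancy; an Mth-level module's
    occupancy is a weighted sum over its (M+1)th-level children
    (uniform weights by default)."""
    if not module["children"]:
        return module["occupancy"]
    children = module["children"]
    if weights is None:
        weights = [1.0] * len(children)
    return sum(w * resource_occupancy(c)
               for w, c in zip(weights, children))

def allocate(modules, hosts):
    """Greedy least-loaded placement: assign each module, heaviest
    first, to the host with the smallest current load."""
    load = {h: 0.0 for h in hosts}
    placement = {}
    for m in sorted(modules, key=lambda m: -resource_occupancy(m)):
        host = min(load, key=load.get)
        placement[m["name"]] = host
        load[host] += resource_occupancy(m)
    return placement, load
```

With three modules of occupancy 4, 3, and 2 on two hosts, the heaviest lands alone on one host and the other two share the second, keeping the loads within one unit of each other.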
Another aspect of the present disclosure provides a service processing apparatus, including: a receiving module configured to receive a service processing request of a service to be processed, wherein the service processing request carries a target service scene identifier and target service parameters; a determining module configured to determine, in response to the service processing request, a target process model from a plurality of process models according to the target service scene identifier, wherein the process model comprises a plurality of process modules located at N levels, N being a positive integer greater than 2; a generating module configured to generate a business process instance of the service to be processed based on the target process model and the target service parameters; and an allocating module configured to allocate the plurality of process modules of the business process instance to a plurality of hosts of a distributed system using a preset load balancing policy, wherein the hosts process the service to be processed.
Another aspect of the present disclosure provides an electronic device including: one or more processors; a memory for storing one or more instructions, wherein the one or more instructions, when executed by the one or more processors, cause the one or more processors to implement the method as described above.
Another aspect of the present disclosure provides a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to implement the method as described above.
Another aspect of the disclosure provides a computer program product comprising computer executable instructions for implementing the method as described above when executed.
According to the embodiments of the present disclosure, converting the service to be processed into a business process instance containing a plurality of cascaded process modules splits the service into multiple process modules, which helps decouple the business process and simplifies the process configuration logic. A preset load balancing policy then allocates the process modules of the business process instance to a plurality of hosts of a distributed system, and those hosts process the service. This achieves reasonable distribution of the service while balancing the load across hosts, improves the availability and flexibility of the business process, and raises its processing efficiency, thereby at least partially overcoming the related-art problems of coarse cutting granularity, difficulty in reasonable partitioning, difficulty in achieving load balancing, and poor availability and flexibility of business processes.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
fig. 1 schematically illustrates an exemplary system architecture to which the service processing method and apparatus may be applied, according to an embodiment of the present disclosure;
FIG. 2 schematically shows a flowchart of a service processing method according to an embodiment of the present disclosure;
FIG. 3 schematically shows a flowchart of a method of generating a process model according to an embodiment of the present disclosure;
FIG. 4 schematically shows a flowchart of a method of generating a process model according to another embodiment of the present disclosure;
FIG. 5 schematically shows an architecture diagram of a process model according to an embodiment of the disclosure;
FIG. 6 schematically shows a flowchart of a service processing method according to another embodiment of the present disclosure;
fig. 7 schematically shows a block diagram of a service processing apparatus according to an embodiment of the present disclosure; and
fig. 8 schematically shows a block diagram of an electronic device adapted to implement a service processing method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). Where a convention analogous to "at least one of A, B, or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
Activiti is an Apache-licensed open-source BPM (Business Process Management) platform that supports the BPMN 2.0 standard and is well suited to process-intensive business systems. Through MyBatis, Activiti executes commands with optimized SQL statements, which removes the bottleneck in data exchange with the database, keeps the engine at optimal speed, and achieves efficient data persistence. Dedicated process designers exist for different development environments, such as the Eclipse Designer plug-in for Eclipse, the actiBPM plug-in for IDEA, and the Web-based Activiti Modeler. In addition, in its table-structure design Activiti separates runtime data from historical data, so runtime data can be read quickly and the history storage tables are read only when historical data needs to be queried.
Although Activiti can be applied to process-intensive business systems, in a distributed application scenario in the related art, a process model obtained through service process configuration with a process-data two-layer model structure has coarse cutting granularity and is difficult to partition reasonably, so that load balancing across hosts is hard to achieve, the availability and flexibility of the business process system are poor, and its processing efficiency is low.
In view of this, the method changes the traditional business process with a process-data two-layer model structure into a business process with a multilayer nested structure, so that the service to be processed can be split, the business system can perform finer-grained process partitioning, and the business process is decoupled. Then, according to the selected load balancing mode and the resource occupancy configured for each module in the process model, each module is allocated to a different host for execution, achieving load balancing across hosts while reasonably distributing the service to be processed, improving the availability and flexibility of the business process, and raising its processing efficiency.
Specifically, embodiments of the present disclosure provide a service processing method, a service processing apparatus, an electronic device, a readable storage medium, and a computer program product. The method can improve the availability and flexibility of the business process system and raise its processing efficiency. It comprises: receiving a service processing request of a service to be processed, wherein the service processing request carries a target service scene identifier and target service parameters; in response to the service processing request, determining a target process model from a plurality of process models according to the target service scene identifier, wherein the process model comprises a plurality of process modules located at N levels, N being a positive integer greater than 2; generating a business process instance of the service to be processed based on the target process model and the target service parameters; and allocating the plurality of process modules of the business process instance to a plurality of hosts of the distributed system using a preset load balancing policy, wherein the plurality of hosts process the service to be processed.
It should be noted that the service processing method and apparatus determined in the embodiments of the present disclosure may be used in the field of computer technology or the field of finance, and may also be used in any field other than the field of computer technology and the field of finance, and the specific application field is not limited.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, application, and other handling of the personal information of the users involved all comply with the provisions of relevant laws and regulations, necessary confidentiality measures are taken, and public order and good customs are not violated.
In the technical scheme of the disclosure, before the personal information of the user is obtained or collected, the authorization or the consent of the user is obtained.
Fig. 1 schematically illustrates an exemplary system architecture to which the service processing method and apparatus may be applied, according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied, intended to help those skilled in the art understand the technical content of the present disclosure; it does not mean that the embodiments of the present disclosure cannot be applied to other devices, systems, environments, or scenarios.
As shown in fig. 1, the system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104, a load balancing device 105, and a plurality of servers 106. The network 104 is used to provide a medium for communication links between the terminal devices 101, 102, 103 and the load balancing device 105, and between the load balancing device 105 and the plurality of servers 106. Network 104 may include various connection types, such as wired and/or wireless communication links, and so forth.
The user may use the terminal devices 101, 102, and 103 to interact with the load balancing device 105 through the network 104 to send a service processing request including a service to be processed, where the service processing request includes a service scene identifier and a service parameter. The terminal devices 101, 102, 103 may have various financial applications, communication client applications installed thereon, such as shopping applications, web browser applications, search applications, instant messaging tools, mailbox clients, and/or social platform software, etc. (by way of example only).
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The load balancing device 105 may be a load balancing server, such as an Nginx server, that implements load balancing; the load balancing device 105 handles service processing requests sent by users using the terminal devices 101, 102, 103 and distributes them to the background management devices of the plurality of servers 106 (for example only).
The server 106 may be a server providing various services, such as a background management server (for example only) that may receive processing tasks assigned by the load balancing device 105 and process the traffic to be processed. The background management device may analyze and perform other processing on the received data such as the user service processing request, and feed back a processing result (e.g., a webpage, information, or data obtained or generated according to the user service processing request) to the terminal device.
It should be noted that the service processing method provided by the embodiment of the present disclosure may be generally executed by the server 106. Accordingly, the service processing device provided by the embodiment of the present disclosure may be generally disposed in the server 106. The service processing method provided by the embodiment of the present disclosure may also be executed by a server or a server cluster that is different from the server 106 and is capable of communicating with the terminal devices 101, 102, 103, the load balancing device 105, and/or the server 106. Correspondingly, the service processing apparatus provided in the embodiment of the present disclosure may also be disposed in a server or a server cluster that is different from the server 106 and is capable of communicating with the terminal devices 101, 102, and 103, the load balancing device 105, and/or the server 106. Alternatively, the service processing method provided by the embodiment of the present disclosure may also be executed by the terminal device 101, 102, or 103, or may also be executed by another terminal device different from the terminal device 101, 102, or 103. Correspondingly, the service processing apparatus provided by the embodiment of the present disclosure may also be disposed in the terminal device 101, 102, or 103, or in another terminal device different from the terminal device 101, 102, or 103.
For example, the pending service request may be originally stored in any one of the terminal devices 101, 102, or 103 (for example, but not limited to, the terminal device 101), or stored on an external storage device and may be imported into the terminal device 101. Then, the terminal device 101 may locally execute the service processing method provided by the embodiment of the present disclosure, or send the service request to be processed to another terminal device, server, or server cluster, and execute the service processing method provided by the embodiment of the present disclosure by another terminal device, server, or server cluster that receives the service request to be processed.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2 schematically shows a flowchart of a service processing method according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes operations S201 to S204.
In operation S201, a service processing request of a service to be processed is received, where the service processing request carries a target service scene identifier and a target service parameter.
In operation S202, in response to the service processing request, a target process model is determined from a plurality of process models according to the target service scenario identifier, where the process model includes a plurality of process modules located at N levels, and N is a positive integer greater than 2.
In operation S203, a business process instance of the to-be-processed business is generated based on the target process model and the target business parameters.
In operation S204, a preset load balancing policy is used to allocate the plurality of process modules of the business process instance to a plurality of hosts of the distributed system, where the hosts process the service to be processed.
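Operations S201 through S204 can be sketched end to end in a few lines. The request and registry shapes and the round-robin placement are illustrative assumptions, not the method's required data formats or load balancing policy.

```python
def handle_request(request, model_registry, hosts):
    """Sketch of S201-S204: look up the target process model by the
    service scene identifier carried in the request (S201-S202),
    instantiate it with the service parameters (S203), then spread
    its modules over the hosts round-robin (S204)."""
    scene_id = request["scene_id"]            # S201: unpack the request
    model = model_registry[scene_id]          # S202: pick target model
    instance = {"scene_id": scene_id,         # S203: bind parameters
                "params": request["params"],
                "modules": list(model["modules"])}
    placement = {m: hosts[i % len(hosts)]     # S204: simple round-robin
                 for i, m in enumerate(instance["modules"])}
    return instance, placement
```

A request carrying scene identifier "pay" would thus resolve to the "pay" model, yield one instance, and scatter that instance's modules across the available hosts.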
According to the embodiment of the present disclosure, the service scenario identifier may be a unique identifier for one service process in the service system. The target business parameter may be a parameter related to the business process, for example, a parameter including credential information, an originating channel, and the like.
According to an embodiment of the present disclosure, the process model may output an execution process associated with the business according to the input business processing request. The process model can comprise N levels, each level comprises a plurality of process modules, and each process module can have an association relationship; the value of N can be adaptively adjusted according to actual conditions, for example, according to the requirements of users. The target process model may be a model associated with a target business scenario identification.
According to an embodiment of the present disclosure, a business process instance may be a process that can be used to process the business process request. The business process instance may be a specific process flow corresponding to the target business scenario and target business parameters and associated with the business.
According to the embodiments of the present disclosure, the preset load balancing policy evenly distributes the plurality of process modules in the business process instance across a plurality of machines, which then complete the work in coordination. Load balancing policies may include, but are not limited to, adaptive load balancing, weighted round robin, and the like. Specifically, under the adaptive load balancing policy the load balancing device may periodically probe the load of all servers and automatically adjust each server's weight according to its load. The adaptive policy may also be combined with a weighted round robin policy: load distribution is calculated from static weight ratios, and when the load of all servers is below a set lower limit, the load balancing device automatically switches to the weighted round robin policy for distribution; when the load of all servers is above the set lower limit, the load balancing device automatically switches to the adaptive load balancing policy. This working mode effectively strengthens the processing capacity of the business process and improves the availability and flexibility of the business process system.
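The switching rule described above might look like the following sketch. The threshold semantics and the inverse-load weight formula are assumptions made for illustration; the disclosure only states that the device switches between the two policies around a configured lower limit.

```python
def pick_strategy(host_loads, lower_limit):
    """When every host's load is below the configured lower limit,
    fall back to weighted round robin; otherwise use the adaptive
    policy, which here scales each host's weight inversely with its
    measured load (an assumed formula)."""
    if all(load < lower_limit for load in host_loads.values()):
        return "weighted_round_robin", None
    # Adaptive: lighter hosts get proportionally larger weights.
    weights = {h: 1.0 / (1.0 + load) for h, load in host_loads.items()}
    return "adaptive", weights
```

A dispatcher would call this periodically with fresh load probes and route the next batch of modules according to the returned strategy and weights.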
According to the embodiments of the present disclosure, converting the service to be processed into a business process instance containing a plurality of cascaded process modules splits the service into multiple process modules, which helps decouple the business process and simplifies the process configuration logic. A preset load balancing policy then allocates the process modules of the business process instance to a plurality of hosts of a distributed system, and those hosts process the service. This achieves reasonable distribution of the service while balancing the load across hosts, improves the availability and flexibility of the business process, and raises its processing efficiency, thereby at least partially overcoming the related-art problems of complicated logic, poor availability, poor flexibility, low processing efficiency, and difficulty in achieving load balancing of the business process.
The method shown in fig. 2 is further described below with reference to figs. 3 to 6 in conjunction with specific embodiments.
According to the embodiment of the present disclosure, in order to meet the requirements of more business scenarios, the plurality of process models in step S202 may be constructed in advance. Accordingly, before determining the target process model from the plurality of process models, the method may further include: building a process model in response to receiving a configuration operation of a user; in response to a storage instruction for the process model, acquiring a service scene identifier carried in the storage instruction; and storing the process model into a database with the service scene identifier as a primary key.
According to the embodiment of the disclosure, the configuration operation may be used to build the process model, and the configuration operation may include a module name, an executable program calling address, a module priority, an execution sequence between modules, and the like, which are related to the process module.
According to an embodiment of the present disclosure, the storage instruction may be an instruction for storing the process model in a corresponding service scenario, and the storage instruction may further include a service scenario identifier and a service parameter associated with the process model, where the service scenario identifier may be a unique identifier referred to when the process model is stored.
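A minimal sketch of the storage step described above, with all class, field, and key names assumed for illustration: the service scene identifier carried in the storage instruction becomes the primary key under which the process model is saved and later retrieved.

```python
class ModelStore:
    """Toy stand-in for a database table keyed by service scene identifier."""

    def __init__(self):
        self._db = {}

    def save(self, store_instruction):
        # the scene identifier carried in the storage instruction is the primary key
        scene_id = store_instruction["scene_id"]
        self._db[scene_id] = store_instruction["model"]
        return scene_id

    def lookup(self, scene_id):
        """Later, a service processing request uses the same identifier to find the model."""
        return self._db[scene_id]


store = ModelStore()
store.save({"scene_id": "loan-approval", "model": {"levels": 3}})
```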
Fig. 3 schematically shows a flow chart of a method of generating a flow model according to an embodiment of the present disclosure.
As shown in FIG. 3, building the process model in response to receiving a configuration operation of a user may further include operations S301-S303.
In operation S301, for a flow module at an nth level, in response to receiving a first configuration operation of a user, module attributes of the flow module are configured, where the module attributes include a module name, a calling address of an executable program, and a resource occupancy attribute.
In operation S302, for a flow module of any hierarchy level from the 1 st hierarchy level to the N-1 st hierarchy level, a module attribute and a flow attribute of the flow module are configured in response to receiving a second configuration operation of the user.
In operation S303, a process model is constructed based on the module attributes of the flow module at the nth level, and the module attributes and the process attributes of all the flow modules in the 1 st level to the N-1 st level.
According to an embodiment of the present disclosure, the configuration operation may be further divided into a first configuration operation and a second configuration operation. Specifically, the first configuration operation may configure the module name of a flow module, the calling address of its executable program, its resource occupation amount, the preset load balancing policy, the priority of the flow module, the execution host of the flow module, and the like. The second configuration operation may include the first configuration operation, and may additionally configure the execution order, execution conditions, and the like between the flow modules.
According to the embodiment of the disclosure, the module attributes may include the module name, the calling address of the executable program, the resource occupation amount, the preset load balancing policy, the priority of the flow module, the execution host of the flow module, and the like; the flow attributes may include the execution order, execution conditions, and the like between the flow modules.
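The two attribute kinds can be pictured as simple data shapes; every field name below is an illustrative assumption, not part of the disclosure. An Nth-level module carries only module attributes, while a module at a higher position also carries flow attributes:

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ModuleAttributes:
    name: str                  # module name
    call_address: str          # calling address of the executable program
    resource_occupancy: float  # resource occupation amount
    priority: int = 0          # priority of the flow module


@dataclass
class FlowAttributes:
    execution_order: List[str] = field(default_factory=list)  # order between child modules
    execution_condition: Optional[str] = None                 # e.g. a branch condition


@dataclass
class FlowModule:
    module: ModuleAttributes
    flow: Optional[FlowAttributes] = None  # None for an Nth-level (leaf) module


leaf = FlowModule(ModuleAttributes("credit-check", "svc://credit", 0.2))
```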
Fig. 4 schematically shows a flow chart of a method of generating a flow model according to another embodiment of the present disclosure.
As shown in fig. 4, the method for generating the flow model includes operations S401 to S410.
In operation S401, for the flow module at the Nth level, in response to an input operation of the user, a module attribute of the flow module at the Nth level may be configured based on input information carried in the input operation.
In operation S402, for the flow module of the M-th hierarchy, in response to a selection operation of a user, at least one flow module of an M + 1-th hierarchy associated with the flow module of the M-th hierarchy is determined, where M is a positive integer less than N.
In operation S403, a module attribute of the flow module of the mth hierarchy is determined based on the module attribute of the at least one flow module of the M +1 th hierarchy.
In operation S404, at least one graphic associated with at least one flow module of the M +1 th hierarchy is drawn on the display interface.
In operation S405, in response to a connection operation of a user with respect to at least one graphic, a relationship type corresponding to the connection operation is determined.
In operation S406, a flow attribute of the flow module of the mth hierarchy is configured based on the relationship type.
In operation S407, it is determined whether the configuration of the flow module of the mth hierarchy is completed.
In operation S408, it is determined whether the value of M is greater than 1.
In operation S409, a difference between the value of M and 1 is calculated to obtain a new value of M.
In operation S410, generation of the flow model is completed.
According to an embodiment of the present disclosure, the input operation may serve as the first configuration operation. For example, the module name, the calling address of the executable program of the flow module, the resource occupation amount, the preset load balancing policy, the priority of the flow module, the execution host of the module, and the like may be directly input, thereby directly configuring the module attributes of the flow module at the Nth level. Configuring the module attributes of the Nth-level flow modules by direct input is simple and efficient.
According to an embodiment of the present disclosure, the second configuration operation may be implemented by means of a selection operation or a connection operation. For example, when configuring the flow attributes and module attributes of a flow module, they may be established by clicking and dragging within a pop-up display interface related to flow-module construction.
According to an embodiment of the present disclosure, the flow module of the Mth hierarchy may be a flow module of any intermediate hierarchy among the 1st to Nth hierarchies. When configuring a flow module of the Mth level, the flow modules of the M+1th level and the associated relationship components may be loaded first, and then the module attributes and/or flow attributes of the flow module of the Mth level may be determined based on the module attributes and/or flow attributes of the flow modules of the M+1th level.
According to the embodiment of the disclosure, when graphics associated with at least one flow module of the M+1th level exist on the display interface, the flow modules may be connected by dragging, so as to determine the execution sequence, execution conditions, and the like among the flow modules of the M+1th level. For example, a plurality of flow modules of the M+1th level may be connected in series or in parallel, or executed according to designed branch conditions, thereby realizing the configuration of the flow attribute of the flow module of the Mth level. The relationship type may be used to indicate the execution order between flows and may be, for example, a series relationship, a parallel relationship, or the like.
According to an embodiment of the present disclosure, in operation S407, if it is determined that the flow modules of the Mth hierarchy have not completed configuration, execution may continue from operation S402 until all flow modules of the Mth hierarchy have completed configuration of their module attributes and/or flow attributes. Once all flow modules of the Mth level are judged to have completed configuration of their module attributes and/or flow attributes, subsequent operations may continue.
According to an embodiment of the present disclosure, the flow model may be generated in order from the Nth level down to the 1st level. When all flow modules of the Mth level have completed configuration of their module attributes and/or flow attributes, it may be determined whether the value of M is greater than 1. If M is still greater than 1, flow modules of other levels in the flow model have not yet been configured; the difference between M and 1 is then calculated to obtain a new value of M, and execution continues from operation S402 with the new value until M is less than or equal to 1.
According to the embodiment of the disclosure, if the value of M is less than or equal to 1, the module attributes and/or flow attributes of the flow modules at the 1st level have been configured, indicating that the flow model has been generated.
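The level-by-level loop of operations S402-S409 reduces to a simple countdown; `configure_level` below stands in for the per-level configuration work and is an assumed name:

```python
def build_model(n_levels, configure_level):
    """Configure levels bottom-up, from level N-1 down to level 1 (S402-S409)."""
    order = []
    m = n_levels - 1                 # configuration starts just above the leaf level N
    while m >= 1:
        configure_level(m)           # S402-S407: configure every module at level M
        order.append(m)
        m -= 1                       # S409: new M = M - 1; stop once M falls below 1
    return order


# For a four-level model, levels are configured in the order 3, 2, 1.
visited = build_model(4, lambda level: None)
```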
According to the embodiment of the disclosure, the process model is designed as a business process that may have a multilayer nested structure, dividing the business process into smaller modules. This simplifies the logic of the process model, facilitates decoupling of the business process, and improves the availability and flexibility of the business process system.
According to an embodiment of the present disclosure, operation S204 may further include: distributing the plurality of flow modules to the plurality of hosts of the distributed system based on the resource occupancy attributes of the plurality of flow modules.
FIG. 5 schematically shows an architecture diagram of a flow model according to an embodiment of the disclosure.
As shown in fig. 5, the process model may include flow modules at N levels, and each of the N levels may contain a plurality of flow modules. When N is greater than 2, the flow module at the Nth level may have only module attributes, the flow module at the 1st level may have only flow attributes, and the flow modules at the 2nd to N-1th levels may have both module attributes and flow attributes. A flow module at the Mth level in fig. 5 can thus not only run an entity program, but also serve as an execution association module for the flow modules at the M+1th level.
As shown in fig. 5, for a flow module at an M-th level, the resource occupancy attribute of the flow module at the M-th level may be characterized as a weighted sum of the resource occupancy attributes of at least one flow module at an M + 1-th level associated with the flow module at the M-th level.
According to the embodiment of the disclosure, by using the load balancing strategy based on the resource occupation amount, the reasonable allocation of resources among the flow modules can be realized, the load balancing among the flow modules is realized, the availability and flexibility of the business process are further improved, and the processing efficiency of the business process is improved.
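The roll-up of fig. 5 and the resource-based allocation of operation S204 can be sketched together; the dictionary layout, the child weights, and the least-loaded-host selection rule are assumptions made for illustration:

```python
def occupancy(module):
    """A leaf reports its own occupancy; a parent reports a weighted sum of its children,
    matching the weighted-sum characterization of the Mth-level module above."""
    children = module.get("children")
    if not children:
        return module["occupancy"]
    return sum(weight * occupancy(child) for weight, child in children)


def assign(modules, hosts):
    """Place each module on the currently least-loaded host (sketch of operation S204)."""
    load = {h: 0.0 for h in hosts}
    placement = {}
    for mod in modules:
        host = min(load, key=load.get)          # least-loaded host so far
        placement[mod["name"]] = host
        load[host] += occupancy(mod)
    return placement


# A parent at level M whose occupancy rolls up from its two M+1-level children:
# 0.5 * 0.4 + 1.0 * 0.6 = 0.8
parent = {"name": "p", "children": [(0.5, {"name": "a", "occupancy": 0.4}),
                                    (1.0, {"name": "b", "occupancy": 0.6})]}
```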
Fig. 6 schematically shows a flow chart of a traffic processing method according to another embodiment of the present disclosure.
As shown in fig. 6, the service processing method includes operations S601 to S609.
In operation S601, in response to receiving a configuration requirement of a user, building a process model is started.
According to an embodiment of the present disclosure, the configuration requirement may be a requirement related to building a process model, and may include, in relation to the flow modules, a module name, an executable program calling address, a module priority, an execution sequence between modules, an execution host of a module, and the like.
In operation S602, for an nth level flow module, in response to receiving a first configuration operation of a user, a module attribute of the flow module is configured, where N is a positive integer greater than 2. In an embodiment, operation S602 may refer to operation S301.
In operation S603, for the flow modules of the 2nd to N-1th hierarchies, the module attributes and flow attributes of the flow modules are configured in response to selection operations and connection operations of the user. In an embodiment, operation S603 may refer to operation S302, or to operations S402 to S406.
In operation S604, it is determined whether the number of layer levels of the flow module needs to be increased.
According to the embodiment of the present disclosure, whether the number of levels of flow modules needs to be increased may be determined according to the needs of the user. If an increase is needed, execution continues from operation S603 until the user's needs are met; if not, subsequent operations continue.
In operation S605, a flow attribute of the flow module of the 1 st level is configured based on a connection operation to the flow module of the 2 nd level.
According to an embodiment of the present disclosure, the flow module at level 1 may include only flow attributes, and specifically, the flow attributes may include information such as names and notes of the flow model.
In operation S606, the process model is constructed and issued.
In operation S607, a start-up program associated with the flow model is configured, and the flow model is started up.
According to an embodiment of the present disclosure, after the construction of the process model is completed, a startup program related to a startup process may be written for starting the process model. The startup procedure can be associated with the name of the process model, facilitating a more rapid and efficient startup of the process model.
In operation S608, a plurality of process modules in the process model are distributed to a plurality of hosts of the distributed system using a preset load balancing policy, and the plurality of hosts execute the plurality of process modules.
In operation S609, execution results of the plurality of flow modules executed by the plurality of hosts are output.
According to an embodiment of the disclosure, operation S608 may refer to operation S204, and the process model may execute according to the configured flow attributes and module attributes, for example, according to the configured module priority, module ordering, or branch conditions.
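Putting the claimed steps together, a toy end-to-end sketch (all structures, names, and the round-robin stand-in for the configured load balancing policy are assumptions): the target model is looked up by scene identifier, instantiated with the business parameters, and its modules are spread over the hosts.

```python
def process_request(request, models, hosts):
    """Sketch of the method: determine model, build instance, distribute modules."""
    model = models[request["scene_id"]]                    # target model by scene id
    instance = {"modules": list(model["modules"]),         # business process instance
                "params": request["params"]}
    placement = {m: hosts[i % len(hosts)]                  # round-robin distribution
                 for i, m in enumerate(instance["modules"])}
    return instance, placement


models = {"transfer": {"modules": ["verify", "debit", "credit"]}}
instance, placement = process_request(
    {"scene_id": "transfer", "params": {"amount": 100}}, models, ["h1", "h2"])
```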
It should be noted that, unless an execution sequence between operations is explicitly stated or is required by the technical implementation, the operations in the flowcharts of this disclosure need not be performed in the order shown, and multiple operations may be executed simultaneously.
Fig. 7 schematically shows a block diagram of a traffic processing device according to an embodiment of the present disclosure.
As shown in fig. 7, the traffic processing apparatus 700 includes a receiving module 710, a determining module 720, a generating module 730, and an allocating module 740.
The receiving module 710 is configured to receive a service processing request of a service to be processed, where the service processing request carries a target service scene identifier and a target service parameter.
The determining module 720 is configured to determine, in response to the service processing request, a target process model from a plurality of process models according to the target service scenario identifier, where the process model includes a plurality of process modules located at N levels, where N is a positive integer greater than 2.
The generating module 730 is configured to generate a business process instance of the to-be-processed business based on the target process model and the target business parameter.
The allocating module 740 is configured to allocate the multiple process modules of the service process instance to multiple hosts of a distributed system using a preset load balancing policy, where the multiple hosts process the service to be processed.
According to the embodiment of the disclosure, the service processing device further comprises a construction module, an acquisition module and a storage module.
The building module is used for responding to the received configuration operation of the user and building the process model.
The acquisition module is used for responding to a storage instruction aiming at the process model and acquiring the service scene identification carried in the storage instruction.
The storage module is configured to store the process model into a database with the service scene identifier as a primary key.
According to an embodiment of the present disclosure, a building module may include a first configuration unit, a second configuration unit, and a building unit.
The first configuration unit is configured to, for the flow module at the Nth level, configure module attributes of the flow module in response to receiving a first configuration operation of a user, where the module attributes include a module name, a calling address of an executable program, and a resource occupancy attribute.
The second configuration unit is configured to, for a flow module of any level from the 1st to the N-1th level, configure the module attributes and flow attributes of the flow module in response to receiving a second configuration operation of the user.
The construction unit is configured to construct the process model based on the module attributes of the flow module at the Nth level and the module attributes and flow attributes of all flow modules from the 1st to the N-1th level.
According to an embodiment of the present disclosure, the first configuration unit may further include a first configuration subunit.
A first configuration subunit, configured to, for the flow module at the nth level, respond to an input operation of the user, and configure a module attribute of the flow module at the nth level based on input information carried in the input operation.
According to the embodiment of the present disclosure, the second configuration unit includes a first determination subunit, a second determination subunit, a drawing subunit, a third determination subunit, and a second configuration subunit.
The first determining subunit is configured to determine, for the flow module at the M-th level, in response to a selection operation of a user, at least one flow module at an M + 1-th level associated with the flow module at the M-th level, where M is a positive integer smaller than N.
The second determining subunit is configured to determine a module attribute of the flow module at the Mth level based on the module attributes of the at least one flow module at the M+1th level.
The drawing subunit is used for drawing at least one graph associated with at least one flow module of the M +1 th level on a display interface.
The third determining subunit is configured to determine, in response to a connection operation of the user for at least one of the graphics, a relationship type corresponding to the connection operation.
The second configuration subunit is configured to configure a process attribute of the process module of the mth hierarchy based on the relationship type.
According to an embodiment of the present disclosure, the allocation module may further include an allocation unit.
The distribution unit is used for distributing the flow modules to the hosts of the distributed system based on the resource occupation quantity attributes of the flow modules.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any number of the receiving module 710, the determining module 720, the generating module 730, and the allocating module 740 may be combined and implemented in one module/unit/sub-unit, or any one of the modules/units/sub-units may be split into a plurality of modules/units/sub-units. Alternatively, at least part of the functionality of one or more of these modules/units/sub-units may be combined with at least part of the functionality of other modules/units/sub-units and implemented in one module/unit/sub-unit. According to an embodiment of the present disclosure, at least one of the receiving module 710, the determining module 720, the generating module 730, and the allocating module 740 may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or in any one of three implementations of software, hardware, and firmware, or in a suitable combination of any of them. Alternatively, at least one of the receiving module 710, the determining module 720, the generating module 730, and the allocating module 740 may be at least partially implemented as a computer program module, which when executed, may perform a corresponding function.
It should be noted that, the service processing apparatus portion in the embodiment of the present disclosure corresponds to the service processing method portion in the embodiment of the present disclosure, and the description of the service processing apparatus portion specifically refers to the service processing method portion, which is not described herein again.
Fig. 8 schematically shows a block diagram of an electronic device adapted to implement a traffic processing method according to an embodiment of the present disclosure. The electronic device shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 8, a computer electronic device 800 according to an embodiment of the present disclosure includes a processor 801 which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 802 or a program loaded from a storage section 808 into a Random Access Memory (RAM) 803. The processor 801 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 801 may also include onboard memory for caching purposes. The processor 801 may include a single processing unit or multiple processing units for performing different actions of the method flows according to embodiments of the present disclosure.
In the RAM 803, various programs and data necessary for the operation of the electronic apparatus 800 are stored. The processor 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. The processor 801 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM 802 and/or RAM 803. Note that the programs may also be stored in one or more memories other than the ROM 802 and RAM 803. The processor 801 may also perform various operations of method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
Electronic device 800 may also include an input/output (I/O) interface 805, which is also connected to bus 804, according to an embodiment of the present disclosure. Electronic device 800 may also include one or more of the following components connected to the I/O interface 805: an input section 806 including a keyboard, a mouse, and the like; an output section 807 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), a speaker, and the like; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card, a modem, or the like. The communication section 809 performs communication processing via a network such as the internet. A drive 810 is also connected to the I/O interface 805 as necessary. A removable medium 811 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is mounted on the drive 810 as necessary, so that a computer program read out therefrom is installed into the storage section 808 as necessary.
According to embodiments of the present disclosure, method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 809 and/or installed from the removable medium 811. The computer program, when executed by the processor 801, performs the above-described functions defined in the system of the embodiments of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to an embodiment of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium. Examples may include, but are not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM 802 and/or RAM 803 described above and/or one or more memories other than the ROM 802 and RAM 803.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the method provided by the embodiments of the present disclosure. When the computer program product runs on an electronic device, the program code causes the electronic device to implement the service processing method provided by the embodiments of the present disclosure.
The computer program, when executed by the processor 801, performs the above-described functions defined in the system/apparatus of the embodiments of the present disclosure. The systems, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
In one embodiment, the computer program may be hosted on a tangible storage medium such as an optical storage device, a magnetic storage device, or the like. In another embodiment, the computer program may also be transmitted in the form of a signal on a network medium, distributed, downloaded and installed via communication section 809, and/or installed from removable media 811. The computer program containing program code may be transmitted using any suitable network medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
In accordance with embodiments of the present disclosure, program code for carrying out the computer programs provided by embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, these computer programs may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. The programming languages include, but are not limited to, Java, C++, Python, the "C" language, and the like. The program code may execute entirely on the user computing device, partly on the user device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.

Those skilled in the art will appreciate that various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made, even if such combinations or sub-combinations are not explicitly recited in the present disclosure. In particular, various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or sub-combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these embodiments are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the respective embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to fall within the scope of the present disclosure.

Claims (10)

1. A service processing method, comprising:
receiving a service processing request of a service to be processed, wherein the service processing request carries a target service scenario identifier and target service parameters;
in response to the service processing request, determining a target process model from a plurality of process models according to the target service scenario identifier, wherein each process model comprises a plurality of flow modules located at N levels, N being a positive integer greater than 2;
generating a business process instance of the service to be processed based on the target process model and the target service parameters; and
distributing the plurality of flow modules of the business process instance to a plurality of hosts of a distributed system using a preset load balancing policy, wherein the plurality of hosts process the service to be processed.
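The steps of the claimed method can be sketched in a few lines of Python. This is an illustrative sketch only; all names (`FlowModel`, `process_service`, the scenario identifiers, and the round-robin policy standing in for the unspecified "preset load balancing policy") are assumptions, not details from the patent.

```python
# Hypothetical sketch of claim 1: look up a flow model by scenario identifier,
# bind the request parameters into a process instance, and spread its flow
# modules across hosts. Round-robin stands in for the unspecified policy.
from itertools import cycle

class FlowModel:
    def __init__(self, scenario_id, modules):
        self.scenario_id = scenario_id
        self.modules = modules  # flow modules drawn from the N-level hierarchy

def process_service(request, models, hosts):
    # Step 1: determine the target flow model from the scenario identifier.
    model = models[request["scenario_id"]]
    # Step 2: generate a business process instance by binding the parameters.
    instance = [{"module": m, "params": request["params"]} for m in model.modules]
    # Step 3: distribute the instance's flow modules over the hosts.
    assignment = {}
    host_iter = cycle(hosts)
    for step in instance:
        assignment[step["module"]] = next(host_iter)
    return assignment

models = {"transfer": FlowModel("transfer", ["verify", "debit", "credit"])}
result = process_service(
    {"scenario_id": "transfer", "params": {"amount": 100}},
    models, ["host-a", "host-b"])
print(result)  # {'verify': 'host-a', 'debit': 'host-b', 'credit': 'host-a'}
```

A real implementation would dispatch each module to its host over the network; the sketch only shows the model lookup, instantiation, and assignment structure.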
2. The method of claim 1, further comprising:
building the process model in response to receiving a configuration operation of a user;
in response to a save instruction triggered for the process model, obtaining a service scenario identifier carried in the save instruction; and
storing the process model in a database with the service scenario identifier as a primary key.
3. The method of claim 2, wherein building the process model in response to receiving the configuration operation of the user comprises:
for a flow module at the Nth level, in response to receiving a first configuration operation of the user, configuring module attributes of the flow module, wherein the module attributes comprise a module name, a call address of an executable program, and a resource occupancy attribute;
for a flow module at any level from the 1st level to the (N-1)th level, in response to receiving a second configuration operation of the user, configuring module attributes and a flow attribute of the flow module; and
constructing the process model based on the module attributes of the flow modules at the Nth level and the module attributes and flow attributes of all the flow modules from the 1st level to the (N-1)th level.
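The hierarchy described in claims 3 through 5 can be pictured as a small data structure: leaf modules at level N carry an executable call address and a resource figure, while modules at higher levels are built by selecting children and connecting them. The sketch below is a hypothetical rendering; the class name, the `flow_type` values, and the `svc://` addresses are invented for illustration.

```python
# Hypothetical sketch of the N-level flow model of claims 3-5.
class FlowModule:
    def __init__(self, name, level, call_address=None, resource=0.0):
        self.name = name
        self.level = level
        self.call_address = call_address  # only level-N (leaf) modules are executable
        self.resource = resource          # resource occupancy attribute
        self.children = []                # associated flow modules at level + 1
        self.flow_type = None             # flow attribute derived from connections

# Level-N leaf modules: attributes entered directly via the input operation.
check = FlowModule("check_balance", level=3, call_address="svc://check", resource=2.0)
debit = FlowModule("debit_account", level=3, call_address="svc://debit", resource=3.0)

# A level M < N module: configured by selecting level-(M+1) children and
# connecting their graphics; the connection yields the relationship type.
payment = FlowModule("payment", level=2)
payment.children = [check, debit]
payment.flow_type = "serial"  # e.g. the two children run one after the other
```

On this reading, only the leaves name real executables, and every higher level is pure orchestration, which is what lets the instance be split across hosts module by module.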
4. The method of claim 3, wherein the first configuration operation comprises an input operation;
wherein, for the flow module at the Nth level, in response to receiving the first configuration operation of the user, configuring the module attributes of the flow module comprises:
for the flow module at the Nth level, in response to the input operation of the user, configuring the module attributes of the flow module at the Nth level based on input information carried in the input operation.
5. The method of claim 3, wherein the second configuration operation comprises a selection operation and a connection operation;
wherein, for a flow module at any level from the 1st level to the (N-1)th level, in response to receiving the second configuration operation of the user, configuring the module attributes and the flow attribute of the flow module comprises:
for a flow module at an Mth level, in response to a selection operation of the user, determining at least one flow module at an (M+1)th level associated with the flow module at the Mth level, wherein M is a positive integer smaller than N;
determining the module attributes of the flow module at the Mth level based on the module attributes of the at least one flow module at the (M+1)th level;
drawing, on a display interface, at least one graphic associated with the at least one flow module at the (M+1)th level;
in response to a connection operation of the user on the at least one graphic, determining a relationship type corresponding to the connection operation; and
configuring the flow attribute of the flow module at the Mth level based on the relationship type.
6. The method of claim 5, wherein, for the flow module at the Mth level, the resource occupancy attribute of the flow module at the Mth level is a weighted sum of the resource occupancy attributes of the at least one flow module at the (M+1)th level associated with the flow module at the Mth level;
wherein distributing the plurality of flow modules of the business process instance to the plurality of hosts of the distributed system using the preset load balancing policy comprises:
distributing the plurality of flow modules to the plurality of hosts of the distributed system based on the resource occupancy attributes of the plurality of flow modules.
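Claim 6's recursion — a level-M module's resource occupancy is a weighted sum over its level-(M+1) children — pairs naturally with a load-aware placement rule. The sketch below is an assumption-laden illustration: the patent does not specify the weights, the data layout, or the balancing policy, so unit weights and a simple least-loaded-host rule are used as stand-ins.

```python
# Hypothetical sketch of claim 6: resource occupancy rolls up the hierarchy
# as a weighted sum, and placement favors the currently least-loaded host.
def resource_of(module, weights):
    children = module.get("children")
    if not children:
        # A level-N leaf: its resource occupancy attribute was set directly.
        return module["resource"]
    # A level-M module: weighted sum over its level-(M+1) children
    # (weight defaults to 1.0 when none is configured).
    return sum(weights.get(c["name"], 1.0) * resource_of(c, weights)
               for c in children)

def allocate(modules, hosts, weights):
    load = {h: 0.0 for h in hosts}
    assignment = {}
    # Place the heaviest modules first, each on the least-loaded host so far.
    for m in sorted(modules, key=lambda m: -resource_of(m, weights)):
        host = min(load, key=load.get)
        assignment[m["name"]] = host
        load[host] += resource_of(m, weights)
    return assignment

leaf_a = {"name": "a", "resource": 4.0}
leaf_b = {"name": "b", "resource": 1.0}
parent = {"name": "p", "children": [leaf_a, leaf_b]}
print(resource_of(parent, {}))  # 5.0 with unit weights
```

With the two leaves and two hosts above, `allocate([leaf_a, leaf_b], ["h1", "h2"], {})` places the heavier module on one host and the lighter one on the other, which is the behavior the claim's resource-based distribution suggests.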
7. A service processing apparatus, comprising:
a receiving module configured to receive a service processing request of a service to be processed, wherein the service processing request carries a target service scenario identifier and target service parameters;
a determining module configured to determine, in response to the service processing request, a target process model from a plurality of process models according to the target service scenario identifier, wherein each process model comprises a plurality of flow modules located at N levels, N being a positive integer greater than 2;
a generating module configured to generate a business process instance of the service to be processed based on the target process model and the target service parameters; and
a distributing module configured to distribute the plurality of flow modules of the business process instance to a plurality of hosts of a distributed system using a preset load balancing policy, wherein the plurality of hosts process the service to be processed.
8. An electronic device, comprising:
one or more processors; and
a memory storing one or more instructions,
wherein the one or more instructions, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1 to 6.
9. A computer-readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to implement the method of any one of claims 1 to 6.
10. A computer program product comprising computer-executable instructions which, when executed, implement the method of any one of claims 1 to 6.
CN202210355433.3A 2022-04-06 2022-04-06 Service processing method, device, equipment and medium Pending CN114816736A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210355433.3A CN114816736A (en) 2022-04-06 2022-04-06 Service processing method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN114816736A 2022-07-29

Family

ID=82532813

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210355433.3A Pending CN114816736A (en) 2022-04-06 2022-04-06 Service processing method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN114816736A (en)

Similar Documents

Publication Publication Date Title
US9336059B2 (en) Forecasting capacity available for processing workloads in a networked computing environment
US9705758B2 (en) Management of cloud provider selection
US10764158B2 (en) Dynamic system level agreement provisioning
US20200082316A1 (en) Cognitive handling of workload requests
WO2008118859A1 (en) Methods and apparatus for dynamically allocating tasks
US20180191865A1 (en) Global cloud applications management
CN114253734A (en) Resource calling method and device, electronic equipment and computer readable storage medium
CN110706093A (en) Accounting processing method and device
US20140325077A1 (en) Command management in a networked computing environment
US10877805B2 (en) Optimization of memory usage by integration flows
US11418583B2 (en) Transaction process management by dynamic transaction aggregation
US20230196182A1 (en) Database resource management using predictive models
US9317328B2 (en) Strategic placement of jobs for spatial elasticity in a high-performance computing environment
US9280387B2 (en) Systems and methods for assigning code lines to clusters with storage and other constraints
CN114237765B (en) Functional component processing method, device, electronic equipment and medium
WO2022148376A1 (en) Edge time sharing across clusters via dynamic task migration
US11500399B2 (en) Adjustable control of fluid processing networks based on proportions of server effort
CN114548928A (en) Application auditing method, device, equipment and medium
US20220179709A1 (en) Scheduling jobs
CN114816736A (en) Service processing method, device, equipment and medium
US11651235B2 (en) Generating a candidate set of entities from a training set
CN114140091A (en) Operation record display method, device, equipment and medium
CN114363172B (en) Decoupling management method, device, equipment and medium for container group
CN115484149B (en) Network switching method, network switching device, electronic equipment and storage medium
WO2024099246A1 (en) Container cross-cluster capacity scaling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination