CN109523123B - Intelligent allocation method for distributed transaction processing and server - Google Patents


Info

Publication number
CN109523123B
CN109523123B (Application CN201811191526.7A)
Authority
CN
China
Prior art keywords
transaction
processing load
node
server
transactions
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811191526.7A
Other languages
Chinese (zh)
Other versions
CN109523123A (en)
Inventor
曾维刚 (Zeng Weigang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201811191526.7A priority Critical patent/CN109523123B/en
Publication of CN109523123A publication Critical patent/CN109523123A/en
Application granted granted Critical
Publication of CN109523123B publication Critical patent/CN109523123B/en


Classifications

    • G06Q 10/0633 Workflow analysis
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06Q 40/08 Insurance

Abstract

The invention discloses an intelligent allocation method and a server for distributed transaction processing. The method comprises the following steps: acquiring transaction inflow information and transaction outflow information of a node, wherein the transaction inflow information includes transaction data of all first transactions flowing into the node, and the transaction outflow information includes transaction data of all second transactions flowing out of the node; determining the current processing load of the node according to the transaction inflow information and the transaction outflow information; judging whether the current processing load is greater than a processing load threshold; and if so, transferring the transaction data of at least one target transaction on the node to a replacement node, wherein the target transaction is at least one of all the first transactions on the node other than the second transactions. With this method, the processing load of each node can be predicted through a machine learning model, so that massive transactions are allocated reasonably and the drawn-out state of transaction processing is improved.

Description

Intelligent allocation method for distributed transaction processing and server
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an intelligent allocation method and server for distributed transaction processing.
Background
A typical transaction today (such as an insurance transaction) must pass through a plurality of nodes for processing. Each node can be regarded as a server or a system, the nodes form a processing chain according to the order in which transactions circulate, and the transaction data flows among the nodes along this chain.
Taking an insurance transaction as an example, the transaction data must circulate through nodes such as a receiving department, a government department, a legal department, and a risk assessment company. When an insurance transaction is processed, the receiving department may first check the basic conditions of the corresponding applicant; if doubt arises during this check, the receiving department may transfer the transaction to the relevant government department for further examination. If the government department judges that the applicant's circumstances carry a certain risk, it may transfer the transaction to a risk assessment company for risk assessment; or, when it judges that the applicant's circumstances require legal evaluation, it may transfer the transaction to the legal department, which then carries out the relevant legal evaluation.
However, when insurance transactions circulate between different nodes, the following situation often occurs: the data of a large number of insurance transactions flows into a certain node (such as the server of a risk assessment company) in the same period, so the processing burden of that node increases sharply, the data of these transactions piles up at the node because it cannot be processed in time, and the whole processing flow of the insurance transactions becomes drawn out.
Disclosure of Invention
In order to solve the technical problem in the related art that a large amount of insurance transaction data is blocked at a certain node because it cannot be processed in time, making the whole insurance transaction processing flow drawn out, the invention provides an intelligent allocation method for distributed transaction processing and a server.
An intelligent allocation method for distributed transaction processing, the method comprising:
the method comprises the steps that a server obtains transaction inflow information of a certain node and transaction outflow information of the node; wherein the transaction inflow information includes transaction data of all first transactions flowing into the node, and the transaction outflow information includes transaction data of all second transactions flowing out of the node;
the server determines the current processing load of the node according to the transaction inflow information and the transaction outflow information;
The server judges whether the current processing load is larger than a processing load threshold value;
and if the current processing load is greater than the processing load threshold, the server transfers transaction data of at least one target transaction on the node to a replacement node, wherein the target transaction is at least one of all the first transactions on the node other than the second transactions.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the transaction inflow information further includes a transaction inflow log carrying the transaction identification numbers of the first transactions, the transaction outflow information further includes a transaction outflow log carrying the transaction identification numbers of the second transactions, and the determining, by the server, of the current processing load of the node according to the transaction inflow information and the transaction outflow information includes:
the server determines the current transactions of the node according to the transaction identification numbers carried by the transaction inflow log and the transaction identification numbers carried by the transaction outflow log, wherein the current transactions are all of the first transactions on the node other than the second transactions;
The server takes the transaction data of the current transaction as the current processing load of the node.
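The current-transaction computation described above amounts to a set difference over transaction identification numbers. The sketch below illustrates this; the log structure and field names ("txn_id", "data") are illustrative assumptions, not part of the patent.

```python
def current_processing_load(inflow_log, outflow_log):
    """Determine a node's current transactions from its logs.

    Each log entry is assumed to be a dict carrying a transaction
    identification number under "txn_id" and the transaction's data
    under "data" (hypothetical field names for illustration).
    """
    inflow_ids = {entry["txn_id"] for entry in inflow_log}
    outflow_ids = {entry["txn_id"] for entry in outflow_log}
    # Current transactions: first transactions that have not yet flowed out.
    current_ids = inflow_ids - outflow_ids
    current_data = [e["data"] for e in inflow_log if e["txn_id"] in current_ids]
    return current_ids, current_data
```

The transaction data of the current transactions then serves as the node's current processing load.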
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the transaction inflow log further carries the transaction types of the first transactions, the transaction outflow log further carries the transaction types of the second transactions, and the transferring, by the server, of the transaction data of at least one target transaction on the node to a replacement node if the current processing load is greater than the processing load threshold includes:
the server determines at least one target transaction from the current transactions;
the server queries a preset comparison table of transaction types and nodes to find a replacement node whose transaction type is the same as that of the target transaction;
the server transfers the transaction data of the at least one target transaction to the replacement node, so that the replacement node completes the processing of the transaction data of the at least one target transaction.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after the server transfers the transaction data of at least one target transaction on the node to the replacement node, the method further includes:
The server acquires the flow direction of the target transaction;
and the server updates a flow record of the target transaction according to the flow direction of the target transaction, wherein the flow record is used for tracking and counting the circulation path of the target transaction.
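The flow-record update above can be sketched as follows. The record format (a list of node/timestamp hops per transaction) is an assumption for illustration; the patent does not prescribe one.

```python
import time

def update_flow_record(flow_records, txn_id, destination_node):
    """Append a target transaction's new flow direction to its flow
    record, which tracks the transaction's circulation path.

    flow_records: dict mapping txn_id -> list of (node, timestamp)
    hops (an assumed representation, not specified by the patent).
    """
    hop = (destination_node, time.time())
    flow_records.setdefault(txn_id, []).append(hop)
    return flow_records
```

Replaying a transaction's hops then yields its full circulation path for tracking and statistics.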
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the method further includes:
if the current processing load is not greater than the processing load threshold, the server inputs the current processing load to a processing load prediction model;
the server determines the predicted processing load of the node based on the output result of the processing load prediction model;
the server judges whether the predicted processing load is greater than the processing load threshold;
and if the predicted processing load is greater than the processing load threshold, the server executes the step of transferring the transaction data of at least one target transaction on the node to a replacement node.
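The branching in this optional implementation, transfer when the current load exceeds the threshold, otherwise predict the future load and transfer pre-emptively when the prediction exceeds it, can be sketched as below. The predictor is passed in as a callable; all names here are illustrative assumptions.

```python
def should_transfer(current_load, threshold, predict):
    """Decide whether to transfer target transactions off a node.

    predict: callable mapping the current load to a predicted load
    (for example, the trained processing load prediction model).
    """
    if current_load > threshold:
        return True            # node is already overloaded
    predicted_load = predict(current_load)
    return predicted_load > threshold  # pre-emptive transfer
```

The same transfer step is thus triggered either reactively or by prediction, which is what allows congestion to be relieved before it occurs.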
As an optional implementation manner, in the first aspect of the embodiment of the present invention, if the current processing load is not greater than the processing load threshold, before the server inputs the current processing load into the processing load prediction model, the method further includes:
The server acquires the historical processing load of the node;
and the server trains an initial neural network model by utilizing the historical processing load to obtain the processing load prediction model.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the training, by the server, of an initial neural network model using the historical processing load to obtain the processing load prediction model includes:
the server acquires N consecutive processing loads from the historical processing loads as a sample and inputs the sample into an initial neural network model, so that the initial neural network model outputs an output load corresponding to the sample, where N is a positive integer;
the server determining the output load as a target processing load;
the server compares the target processing load with an actual processing load, and updates parameters of the initial neural network model according to a comparison result, wherein the actual processing load is the processing load detected by the server from the node at the moment of acquiring the target processing load;
the server judges whether a loss function of the initial neural network model meets a preset condition, wherein the loss function represents the error between the output load of the initial neural network model and the actual value;
And if the loss function of the initial neural network model meets the preset condition, the server determines the current parameter of the initial neural network model as the parameter of a processing load prediction model, and obtains the processing load prediction model according to the parameter.
A server, the server comprising:
the acquisition module is used for acquiring transaction inflow information of a certain node and transaction outflow information of the node; wherein the transaction inflow information includes transaction data of all first transactions flowing into the node, and the transaction outflow information includes transaction data of all second transactions flowing out of the node;
the first determining module is used for determining the current processing load of the node according to the transaction inflow information and the transaction outflow information;
the judging module is used for judging whether the current processing load is larger than a processing load threshold value or not;
and the transfer module is used for transferring the transaction data of at least one target transaction on the node to a replacement node when the judging module judges that the current processing load is greater than the processing load threshold, wherein the target transaction is at least one of all the first transactions on the node other than the second transactions.
A third aspect of the embodiment of the present invention discloses a computer-readable storage medium storing a computer program, where the computer program causes a computer to execute the intelligent allocation method for distributed transaction processing disclosed in the first aspect of the embodiment of the present invention.
An embodiment of the present invention in a fourth aspect discloses an electronic device, including:
a processor;
a memory having stored thereon computer readable instructions which, when executed by the processor, implement a method as described above.
The technical scheme provided by the embodiment of the invention can comprise the following beneficial effects:
the intelligent allocation method for distributed transaction processing comprises the following steps: obtaining transaction inflow information of a certain node and transaction outflow information of the node, wherein the transaction inflow information includes transaction data of all first transactions flowing into the node, and the transaction outflow information includes transaction data of all second transactions flowing out of the node; determining the current processing load of the node according to the transaction inflow information and the transaction outflow information; judging whether the current processing load is greater than a processing load threshold; and if the current processing load is greater than the processing load threshold, transferring transaction data of at least one target transaction on the node to a replacement node, wherein the target transaction is at least one of all the first transactions on the node other than the second transactions.
According to this method, the processing load of each node is monitored in real time; when the transaction data of a large number of transactions is found to be blocked at a certain node, the transaction data of target transactions exceeding the processing capacity of the node is transferred to a replacement node, so that the processing load is balanced reasonably among the nodes and the drawn-out state of transaction processing is improved. In summary, intelligent allocation of distributed transaction processing is achieved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a schematic diagram of an apparatus according to an exemplary embodiment;
FIG. 2 is a flow chart illustrating an intelligent allocation method for distributed transaction processing according to an exemplary embodiment;
FIG. 3 is a flow chart illustrating an intelligent allocation method for distributed transaction processing according to another exemplary embodiment;
FIG. 4 is a block diagram of a server shown in accordance with an exemplary embodiment;
FIG. 5 is a block diagram of another server shown in accordance with an exemplary embodiment;
fig. 6 is a block diagram of yet another server shown in accordance with an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the invention. Rather, they are merely examples of apparatus and methods consistent with aspects of the invention as detailed in the accompanying claims.
Fig. 1 is a schematic diagram of an apparatus according to an example embodiment. The apparatus 100 may be an electronic device. As shown in fig. 1, the apparatus 100 may include one or more of the following components: a processing component 102, a memory 104, a power supply component 106, a multimedia component 108, an audio component 110, a sensor component 114, and a communication component 116.
The processing component 102 generally controls overall operation of the device 100, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations, among others. The processing component 102 may include one or more processors 118 to execute instructions to perform all or part of the steps of the methods described below. Further, the processing component 102 can include one or more modules to facilitate interactions between the processing component 102 and other components. For example, the processing component 102 may include a multimedia module for facilitating interaction between the multimedia component 108 and the processing component 102.
The memory 104 is configured to store various types of data to support operations at the apparatus 100. Examples of such data include instructions for any application or method operating on the device 100. The Memory 104 may be implemented by any type of volatile or non-volatile Memory device or combination thereof, such as static random access Memory (Static Random Access Memory, SRAM), electrically erasable Programmable Read-Only Memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), erasable Programmable Read-Only Memory (Erasable Programmable Read Only Memory, EPROM), programmable Read-Only Memory (PROM), read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic disk, or optical disk. Also stored in the memory 104 are one or more modules configured to be executed by the one or more processors 118 to perform all or part of the steps in the methods shown below.
The power supply assembly 106 provides power to the various components of the device 100. The power components 106 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 100.
The multimedia component 108 includes a screen between the device 100 and the user that provides an output interface. In some embodiments, the screen may include a liquid crystal display (Liquid Crystal Display, LCD for short) and a touch panel. If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. The screen may also include an organic electroluminescent display (Organic Light Emitting Display, OLED for short).
The audio component 110 is configured to output and/or input audio signals. For example, the audio component 110 includes a Microphone (MIC) configured to receive external audio signals when the device 100 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 104 or transmitted via the communication component 116. In some embodiments, the audio component 110 further comprises a speaker for outputting audio signals.
The sensor assembly 114 includes one or more sensors for providing status assessment of various aspects of the device 100. For example, the sensor assembly 114 may detect an on/off state of the device 100, a relative positioning of the assemblies, the sensor assembly 114 may also detect a change in position of the device 100 or a component of the device 100, and a change in temperature of the device 100. In some embodiments, the sensor assembly 114 may also include a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 116 is configured to facilitate communication between the apparatus 100 and other devices in a wired or wireless manner. The device 100 may access a Wireless network based on a communication standard, such as WiFi (Wireless-Fidelity). In one exemplary embodiment, the communication component 116 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 116 further includes a near field communication (Near Field Communication, NFC) module for facilitating short range communications. For example, the NFC module may be implemented based on radio frequency identification (Radio Frequency Identification, RFID) technology, infrared data association (Infrared Data Association, irDA) technology, ultra Wideband (UWB) technology, bluetooth technology, and other technologies.
In an exemplary embodiment, the apparatus 100 may be implemented by one or more application specific integrated circuits (Application Specific Integrated Circuit, abbreviated ASIC), digital signal processors, digital signal processing devices, programmable logic devices, field programmable gate arrays, controllers, microcontrollers, microprocessors or other electronic components for executing the methods described below.
FIG. 2 is a flow chart illustrating an intelligent allocation method for distributed transaction processing according to an exemplary embodiment. As shown in fig. 2, the method includes the following steps.
In step 210, the server obtains transaction inflow information of a node and transaction outflow information of the node.
Wherein the transaction inflow information may include transaction data of all first transactions flowing into the node, and the transaction outflow information may include transaction data of all second transactions flowing out of the node; alternatively, the transaction data of the transaction may carry a transaction identification number (ID) of the transaction, i.e. the transaction data of the first transaction carries the transaction identification number of the first transaction and the transaction data of the second transaction carries the transaction identification number of the second transaction.
Step 220, the server determines the current processing load of the node according to the transaction inflow information and the transaction outflow information.
The transaction inflow information may further include a transaction inflow log, where the transaction inflow log carries a transaction identification number of a first transaction, and the transaction outflow information further includes a transaction outflow log, where the transaction outflow log carries a transaction identification number of a second transaction. In an exemplary embodiment, determining, by the server, the current processing load of the node according to the transaction inflow information and the transaction outflow information may include:
and the server calculates the difference between the number of the transaction data of the first transaction flowing into the node and the number of the transaction data of the second transaction flowing out of the node as the current processing load of the node according to the transaction identification number carried by the transaction inflow log and the transaction identification number carried by the transaction outflow log.
Step 230, the server judges whether the current processing load is greater than a processing load threshold; if the current processing load is greater than the processing load threshold, step 240 is executed; and if the current processing load is not greater than the processing load threshold, the flow ends.
In an exemplary embodiment, optionally, the server may receive the maximum processing load reported by each node, and set a processing load threshold corresponding to each node according to the maximum processing load reported by the node, so as to improve flexibility of allocating transaction processing work.
In step 240, the server transfers the transaction data of at least one target transaction on the node to the replacement node.
Wherein the target transaction is at least one of all the first transactions on the node other than the second transactions.
In this exemplary embodiment, for a given node, the first transactions may be all transactions that, according to the historical statistics, have flowed into the node up to the present, and the second transactions all transactions that have flowed out of the node up to the present; the target transaction may then be at least one of the first transactions that is not among the second transactions. The historical statistics cover a target period whose start is the moment the node's transaction inflow information first includes transaction data of a first transaction and whose end is the moment the server obtains the transaction inflow information and transaction outflow information of the node. For example, if the total number of first transactions is 50 and the total number of second transactions is 20, the number of target transactions may be 1, 2, or 10, up to a maximum of 30.
In an exemplary embodiment, optionally, for a given node, the first transactions may instead be the transactions flowing into the node at the current time, the second transactions the transactions flowing out of the node at the current time, and the historically present transactions the transactions already located at the node at the current time; the target transaction may then be at least one of the first transactions and historically present transactions that is not among the second transactions. The current time here is the time at which the server obtains the transaction inflow information and transaction outflow information of the node. For example, if the total number of first transactions is 15, the total number of second transactions is 20, and the total number of historically present transactions is 35, then the number of target transactions may be 1, 2, or 10, up to a maximum of 30.
In another exemplary embodiment, the server transferring transaction data of at least one target transaction on the node out to the replacement node may include:
the server determines the current transaction of the node, wherein the current transaction is all transactions except the second transaction in the first transaction on the node;
The server determines at least one target transaction from the current transactions;
the server queries a preset transaction type and node comparison table to find out a replacement node with the same transaction type as that of the target transaction;
the server forwards the transaction data of the at least one target transaction to the replacement node so that the replacement node completes the processing of the transaction data of the at least one target transaction.
In the present exemplary embodiment, a replacement node having the same transaction type as the target transaction is a node capable of processing the same kind of transaction as the target transaction. By querying the preset comparison table of transaction types and nodes, the server can find a plurality of replacement nodes whose transaction type matches that of the target transaction, determine the current processing load of each of these replacement nodes according to its transaction inflow information and transaction outflow information, select the target replacement node with the smallest current processing load among them, and transfer the transaction data of at least one target transaction on the node to that target replacement node so that it completes the processing of the transaction data. For example, the target transaction may be a risk assessment, which can be handled by a risk assessment company, the total number of risk assessment companies being 20. The type of risk assessment may be a traffic-claim risk assessment or a disease-claim risk assessment; if the transaction type of the target transaction is a traffic-claim risk assessment, then the node and the replacement nodes are risk assessment companies capable of processing traffic-claim risk assessments. The server then determines, by querying the preset comparison table, the 12 risk assessment companies capable of processing traffic-claim risk assessments among the 20, determines the one target risk assessment company with the smallest current processing load among those 12, and transfers the traffic-claim risk assessment data on a certain risk assessment company (node) to that target risk assessment company for processing.
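The replacement-node selection just described, filter the preset transaction-type/node table for nodes of the same type, then pick the candidate with the smallest current load, might look like this sketch. The table layout and the load function are assumptions for illustration.

```python
def pick_replacement_node(type_node_table, txn_type, current_load_of):
    """Select the replacement node with the minimum current processing load.

    type_node_table: dict mapping a transaction type to the list of
    nodes able to process that type (the preset comparison table).
    current_load_of: callable returning a node's current processing
    load, derived from its transaction inflow and outflow information.
    """
    candidates = type_node_table.get(txn_type, [])
    if not candidates:
        return None  # no node can take over this transaction type
    return min(candidates, key=current_load_of)
```

In the running example, the 12 companies able to process traffic-claim risk assessments would be the candidate list, and the one with the smallest current load is returned.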
Therefore, by implementing the intelligent allocation method for distributed transaction processing described in fig. 2, the processing load of each node can be monitored in real time; when a large amount of transaction data is found to be blocked at a certain node, the transaction data of the target transactions exceeding the node's processing load is transferred to replacement nodes of the same type as the node, so that the processing loads of nodes of the same type are reasonably balanced and the current problem of protracted transaction processing is alleviated.
FIG. 3 is a flow chart illustrating an intelligent allocation method for distributed transaction processing according to another example embodiment. As shown in fig. 3, in addition to the steps shown in fig. 2, the intelligent allocation method in this embodiment further includes the following steps after it is determined in step 230 that the current processing load is not greater than the processing load threshold.
The server inputs the current processing load into the processing load prediction model, step 250.
In an exemplary embodiment, the method of FIG. 3 may further include the following steps prior to step 250:
the server obtains the historical processing load of the node;
the server trains an initial neural network model by utilizing the historical processing load to obtain a processing load prediction model.
Further optionally, the server trains the initial neural network model with the historical processing load, and obtaining the processing load prediction model may include:
the server acquires N continuous processing loads from the historical processing loads as samples to be input into the initial neural network model, so that the initial neural network model outputs an output load corresponding to the samples; N is a positive integer;
the server determines the output load as a target processing load;
the server compares the target processing load with the actual processing load, and updates the parameters of the initial neural network model according to the comparison result, wherein the actual processing load is the processing load detected by the server from the node at the moment of acquiring the target processing load;
the server judges whether a loss function of the initial neural network model meets a preset condition or not, wherein the loss function is used for representing an error between an output load and an actual value of the initial neural network model;
if the loss function of the initial neural network model meets the preset condition, the server determines the current parameters of the initial neural network model as parameters of the processing load prediction model, and obtains the processing load prediction model according to the parameters.
In the present exemplary embodiment, the processing load prediction model may be used to predict the processing load (the predicted processing load) of each node in the following period, where the prediction period may be determined according to the training content of the processing load prediction model. For example, the server may set the duration of each period to 1 hour and take the intermediate time of each hour as a prescribed time. For any node, the server acquires the historical processing load of that node and arbitrarily selects from it the historical processing loads corresponding to N consecutive prescribed times (such as 5:30, 6:30, and 7:30), where N is a positive integer. The historical processing loads corresponding to the N prescribed times are input as a sample to the initial neural network model (a machine learning model), that is, the input vector is x = (x1, x2, x3, x4, …, xN), and the processing load corresponding to the (N+1)th prescribed time following the N consecutive prescribed times serves as the desired output. This operation is repeated: the historical processing loads corresponding to other runs of N consecutive prescribed times are arbitrarily selected from the historical processing load as samples and input to the initial neural network model, so that the model outputs the predicted processing load (expected output) corresponding to the (N+1)th prescribed time after each run, until the loss function of the initial neural network model meets the preset condition.
After obtaining the predicted processing load (expected output) corresponding to the (N+1)th prescribed time output by the initial neural network model, the server may further obtain from the node the actual processing load (actual output) at that (N+1)th prescribed time, and determine the output error of the initial neural network model from the predicted processing load and the actual processing load. This output error is the error of the output layer of the initial neural network model; the error of the layer immediately preceding the output layer can be determined from the error of the output layer, and the error of each earlier layer from the error of the layer after it, so that by working backwards layer by layer from the output layer to the input layer the errors of all layers in the initial neural network model can be determined, and the loss function of the model can be determined from the errors of all its layers. By repeatedly inputting samples, the initial neural network model keeps learning: under the stimulation of the externally input samples it continuously adjusts its network connection weights until its loss function meets the preset condition. When the loss function of the initial neural network model meets the preset condition, training can be deemed complete; the server determines the current parameters of the initial neural network model as the parameters of the processing load prediction model and obtains the processing load prediction model from those parameters.
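The training loop described above can be sketched as follows. This is a hedged, minimal illustration under stated assumptions: sliding windows of N consecutive historical loads form the samples, the load at the (N+1)th prescribed time is the desired output, and training stops once the loss meets a preset condition. A single linear layer with a normalized (Kaczmarz-style) gradient step stands in for the unspecified neural network architecture, and all load figures are invented.

```python
# Hedged sketch of the sample construction and training procedure; a linear
# model replaces the unspecified neural network, and the numbers are made up.

def make_samples(history, n):
    """Each sample pairs N consecutive loads with the load that followed them."""
    return [(history[i:i + n], history[i + n])
            for i in range(len(history) - n)]

def train(history, n, loss_limit=0.05, max_epochs=20000):
    """Fit parameters so the model predicts the (N+1)th load from N loads."""
    samples = make_samples(history, n)
    weights, bias = [0.0] * n, 0.0
    for _ in range(max_epochs):
        total = 0.0
        for x, actual in samples:
            target = sum(w * v for w, v in zip(weights, x)) + bias
            err = target - actual              # compare target vs actual load
            total += err * err
            # update parameters according to the comparison result
            # (normalized gradient step, stable without tuning a rate)
            step = err / (sum(v * v for v in x) + 1.0)
            weights = [w - step * v for w, v in zip(weights, x)]
            bias -= step
        if total / len(samples) < loss_limit:  # preset loss condition met
            break
    return weights, bias

# Hourly loads with a steady upward trend; N = 3 prescribed times per sample.
history = [10, 12, 14, 16, 18, 20, 22, 24, 26, 28]
weights, bias = train(history, n=3)
predicted = sum(w * v for w, v in zip(weights, [24, 26, 28])) + bias
print(predicted)  # close to 30, the next load in the trend
```

In the embodiment the per-layer errors would instead be backpropagated through a multi-layer network; the stopping rule on the loss function is the same.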
Step 260, the server determines the predicted processing load of the node based on the output result of the processing load prediction model.
Step 270, the server judges whether the predicted processing load is greater than a processing load threshold, and if the predicted processing load is greater than the processing load threshold, triggers execution of step 240; if the predicted processing load is not greater than the processing load threshold, the process is ended.
The transaction data of a transaction may or may not be processed immediately after flowing into a node, so predicting the node's processing load with the processing load prediction model makes it possible to learn in advance how the node's processing load will change, to allocate the node's transaction data reasonably, and to optimize the load-balancing effect. In addition, the time at which the transaction data of a transaction is processed after flowing into the node depends on the node's current processing speed and on the ordering of that transaction data at the node, the ordering being the sequence in which the transaction data of transactions flowed into the node.
In another exemplary embodiment, after step 240, the intelligent allocation method in this embodiment further includes the following steps.
In step 280, the server obtains the flow direction of the target transaction.
In step 290, the server updates the flow record of the target transaction according to the flow direction of the target transaction, where the flow record is used to track and count the circulation path of the target transaction.
Therefore, by implementing the intelligent allocation method for distributed transaction processing described in fig. 3, the processing load of each node can be monitored in real time; when a large amount of transaction data is found to be blocked at a certain node, the transaction data of the target transactions exceeding the node's processing load is transferred to replacement nodes of the same type as the node, so that the processing loads of nodes of the same type are reasonably balanced and the current problem of protracted transaction processing is alleviated. In addition, predicting the node's processing load with the processing load prediction model makes it possible to learn in advance how the node's processing load will change, which facilitates reasonable allocation of the node's transaction data and optimizes the load-balancing effect.
The following are device embodiments of the present invention.
Fig. 4 is a block diagram of a server, according to an example embodiment. As shown in fig. 4, the server includes:
the acquiring module 310 is configured to acquire transaction inflow information of a certain node and transaction outflow information of the certain node, and provide the transaction inflow information and the transaction outflow information to the first determining module 320.
Wherein the transaction inflow information includes transaction data of all first transactions flowing into the node, and the transaction outflow information includes transaction data of all second transactions flowing out of the node.
Alternatively, the transaction data of the transaction may carry a transaction identification number (ID) of the transaction, i.e. the transaction data of the first transaction carries the transaction identification number of the first transaction and the transaction data of the second transaction carries the transaction identification number of the second transaction.
The first determining module 320 is configured to determine a current processing load of the node according to the transaction inflow information and the transaction outflow information, and provide the current processing load to the judging module 330.
The judging module 330 is configured to judge whether the current processing load is greater than the processing load threshold, and provide the judgment result to the transferring module 340.
In an exemplary embodiment, optionally, before determining whether the current processing load is greater than the processing load threshold, the determining module 330 may further receive a maximum processing load reported by each node, and set the processing load threshold corresponding to each node according to the maximum processing load reported by the node, so that allocation of the transaction is more flexible.
A transferring module 340, configured to transfer transaction data of at least one target transaction on the node to the replacement node when the judging module 330 judges that the current processing load is greater than the processing load threshold; wherein the target transaction is at least one of all transactions in the first transactions on the node except the second transactions.
In this exemplary embodiment, for a certain node, the first transactions may represent the transactions that historical statistics show have flowed into the node so far, and the second transactions may represent the transactions that historical statistics show have flowed out of the node so far; the target transaction may then be at least one of all transactions in the first transactions that have flowed into the node except the second transactions. The historical statistics cover a target period whose initial time is the moment at which the transaction inflow information of the node first includes the transaction data of a first transaction and whose deadline is the moment at which the server obtains the transaction inflow information and transaction outflow information of the node. For example, if the total number of first transactions is 50 and the total number of second transactions is 20, the number of target transactions may be 1, 2, or 10, and at most does not exceed 30.
In an exemplary embodiment, optionally, for a certain node, the first transactions may instead represent the transactions flowing into the node at the current time, the second transactions may represent the transactions flowing out of the node at the current time, and the historically existing transactions may represent the transactions already located at the node at the current time; the target transaction may then be at least one of the first transactions and all historically existing transactions of the node except the second transactions. The current time is the moment at which the server obtains the transaction inflow information and transaction outflow information of the node in this embodiment. For example, if the total number of first transactions is 15, the total number of second transactions is 20, and the total number of historically existing transactions is 35, the number of target transactions may be 1, 2, or 10, and at most does not exceed 30.
Therefore, with the server described in fig. 4, the processing load of each node can be monitored in real time; when a large amount of transaction data is found to be blocked at a certain node, the transaction data of the target transactions exceeding the node's processing load is transferred to a replacement node, so that the node's processing load is reasonably balanced and the current problem of protracted transaction processing is alleviated.
Referring to fig. 5, fig. 5 is a block diagram of another server according to an exemplary embodiment, where the server shown in fig. 5 is further optimized by the server shown in fig. 4. In comparison with the server shown in fig. 4, the server shown in fig. 5 further includes:
an input module 350, configured to input the current processing load into the processing load prediction model and trigger the second determining module 360 to start when the determining module 330 determines that the current processing load is not greater than the processing load threshold.
Wherein the determining module 330 may provide the determination result to the transferring module 340 and the input module 350 after determining whether the current processing load is greater than the processing load threshold.
The second determining module 360 is configured to determine a predicted processing load of the node based on an output result of the processing load prediction model, and provide the predicted processing load to the judging module 330.
The above-mentioned judging module 330 is further configured to judge whether the predicted processing load is greater than the processing load threshold, and provide the judgment result to the transferring module 340.
The above-mentioned transferring module 340 is further configured to transfer the transaction data of at least one target transaction on the node to a replacement node with the same type as the node when the judging module 330 judges that the predicted processing load is greater than the processing load threshold.
In an exemplary embodiment, as shown in fig. 5, the obtaining module 310 may be further configured to obtain a historical processing load of the node, and provide the historical processing load to the training module 370.
The training module 370 is configured to train the initial neural network model by using the historical processing load to obtain a processing load prediction model.
The input module 350 is specifically configured to input the current processing load into the processing load prediction model after the training module 370 trains the initial neural network model with the historical processing load to obtain the processing load prediction model, and when the judging module 330 judges that the current processing load is not greater than the processing load threshold.
Further optionally, the training module 370 trains the initial neural network model by using the historical processing load, and the manner of obtaining the processing load prediction model may specifically be:
N continuous processing loads are obtained from the historical processing loads and used as samples to be input into the initial neural network model, so that the initial neural network model outputs an output load corresponding to the samples; N is a positive integer;
determining the output load as a target processing load;
comparing the target processing load with an actual processing load, and updating parameters of an initial neural network model according to a comparison result, wherein the actual processing load is the processing load detected by a server from the node at the moment of acquiring the target processing load;
judging whether a loss function of the initial neural network model meets a preset condition or not, wherein the loss function is used for representing an error between an output load and an actual value of the initial neural network model;
if the loss function of the initial neural network model meets the preset condition, determining the current parameters of the initial neural network model as parameters of the processing load prediction model, and obtaining the processing load prediction model according to the parameters.
In the present exemplary embodiment, the processing load prediction model may be used to predict the processing load (the predicted processing load) of each node in the following period, where the prediction period may be determined according to the training content of the processing load prediction model. For example, the training module 370 may set the duration of each period to 1 hour and take the intermediate time of each hour as a prescribed time. For any node, the training module 370 acquires the historical processing load of that node and arbitrarily selects from it the historical processing loads corresponding to N consecutive prescribed times (such as 5:30, 6:30, and 7:30), where N is a positive integer. The historical processing loads corresponding to the N prescribed times are input as a sample to the initial neural network model (a machine learning model), that is, the input vector is x = (x1, x2, x3, x4, …, xN), and the processing load corresponding to the (N+1)th prescribed time following the N consecutive prescribed times serves as the desired output. This operation is repeated: the historical processing loads corresponding to other runs of N consecutive prescribed times are arbitrarily selected from the historical processing load as samples and input to the initial neural network model, so that the model outputs the predicted processing load (expected output) corresponding to the (N+1)th prescribed time after each run, until the loss function of the initial neural network model meets the preset condition.
After obtaining the predicted processing load (expected output) corresponding to the (N+1)th prescribed time output by the initial neural network model, the training module 370 may further obtain from the node the actual processing load (actual output) at that (N+1)th prescribed time, and determine the output error of the initial neural network model from the predicted processing load and the actual processing load. This output error is the error of the output layer of the initial neural network model; the error of the layer immediately preceding the output layer can be determined from the error of the output layer, and the error of each earlier layer from the error of the layer after it, so that by working backwards layer by layer from the output layer to the input layer the errors of all layers in the initial neural network model can be determined, and the loss function of the model can be determined from the errors of all its layers. By repeatedly inputting samples, the initial neural network model keeps learning: under the stimulation of the externally input samples it continuously adjusts its network connection weights until its loss function meets the preset condition. When the loss function of the initial neural network model meets the preset condition, the training module 370 may determine that training of the initial neural network model is complete, determine the current parameters of the initial neural network model as the parameters of the processing load prediction model, and obtain the processing load prediction model from those parameters.
Therefore, with the server described in fig. 5, the processing load of each node can be monitored in real time; when a large amount of transaction data is found to be blocked at a certain node, the transaction data of the target transactions exceeding the node's processing load is transferred to replacement nodes of the same type as the node, so that the processing loads of nodes of the same type are reasonably balanced and the current problem of protracted transaction processing is alleviated. In addition, predicting the node's processing load with the processing load prediction model makes it possible to learn in advance how the node's processing load will change, which facilitates reasonable allocation of the node's transaction data and optimizes the load-balancing effect.
Referring to fig. 6, fig. 6 is a block diagram of yet another server according to an exemplary embodiment, wherein the server shown in fig. 6 is further optimized by the server shown in fig. 5. In the server shown in fig. 6, compared with the server shown in fig. 5:
the manner in which the first determining module 320 determines the current processing load of the node according to the transaction inflow information and the transaction outflow information may specifically be:
determining the current transactions of the node according to the transaction identification numbers carried by the transaction inflow log and the transaction identification numbers carried by the transaction outflow log, where the current transactions are all transactions in the first transactions on the node except the second transactions; and taking the transaction data of the current transactions as the current processing load of the node.
The transaction inflow information further comprises a transaction inflow log, wherein the transaction inflow log carries a transaction identification number of a first transaction, and the transaction outflow information further comprises a transaction outflow log, and the transaction outflow log carries a transaction identification number of a second transaction.
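This log-based computation can be sketched briefly. It is a minimal illustration under stated assumptions: the transaction identification numbers are invented, and the current processing load is taken as the count of transactions that appear in the inflow log but not in the outflow log, matching the difference described in this document.

```python
# Illustrative sketch of deriving the current processing load from the
# transaction inflow/outflow logs via transaction identification numbers.
# All IDs below are made up for the example.

def current_transactions(inflow_log, outflow_log):
    """Current transactions = first transactions that have flowed in,
    excluding the second transactions that have already flowed out."""
    flowed_out = set(outflow_log)
    return [tx for tx in inflow_log if tx not in flowed_out]

def current_processing_load(inflow_log, outflow_log):
    """The node's current processing load is the number of transactions
    still pending on it."""
    return len(current_transactions(inflow_log, outflow_log))

inflow_log = ["tx001", "tx002", "tx003", "tx004", "tx005"]  # first transactions
outflow_log = ["tx001", "tx003"]                            # second transactions
print(current_processing_load(inflow_log, outflow_log))  # 3 still pending
```

The pending transactions returned here are also the pool from which the target transactions to be transferred are chosen.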
In an exemplary embodiment, as shown in FIG. 6, the transaction inflow log may also carry the transaction type of a first transaction, and the transaction outflow log may also carry the transaction type of a second transaction; the transferring module 340 may specifically transfer the transaction data of at least one target transaction on the node to the replacement node by:
determining the current transactions of the node, where the current transactions are all transactions in the first transactions on the node except the second transactions, and determining at least one target transaction from the current transactions;
inquiring a preset transaction type and node comparison table to find out a replacement node with the same transaction type as that of the target transaction;
and transferring the transaction data of the at least one target transaction to the replacement node, so that the replacement node completes the processing of the transaction data of the at least one target transaction.
In another exemplary embodiment, as shown in fig. 6, the obtaining module 310 is further configured to obtain the flow direction of the target transaction after the transferring module 340 transfers the transaction data of at least one target transaction on the node to the replacement node, and provide the flow direction of the target transaction to the updating module 380.
The updating module 380 is configured to update the flow record of the target transaction according to the flow direction of the target transaction, where the flow record is used to track and count the circulation path of the target transaction.
The server can update the flow record of the target transaction according to the transaction's flow direction, enabling global monitoring of the circulation path along which the transaction data of the target transaction flows between nodes. In addition, after receiving a query request sent by the user's device, the server can respond immediately according to the flow record of the target transaction as updated at the moment the query request was received, that is, send that flow record to the user's device, which improves the response speed and thereby the user's interactive experience.
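The flow-record bookkeeping above can be sketched as follows. It is a minimal illustration: every transfer of a target transaction appends the new destination to that transaction's record, so the full circulation path can be returned the moment a user queries it. The record layout, transaction ID, and node names are illustrative assumptions.

```python
# Minimal sketch of flow-record updates and queries; IDs and node names
# are invented for the example.

flow_records = {}  # transaction ID -> ordered list of nodes it passed through

def update_flow_record(transaction_id, flow_direction):
    """Append the latest flow direction (destination node) to the record."""
    flow_records.setdefault(transaction_id, []).append(flow_direction)

def query_flow_record(transaction_id):
    """Return the record as updated at the moment the query request arrives."""
    return list(flow_records.get(transaction_id, []))

update_flow_record("tx042", "node_a")      # transaction first flowed into node A
update_flow_record("tx042", "assessor_b")  # then transferred to a replacement node
print(query_flow_record("tx042"))  # ['node_a', 'assessor_b']
```

Returning a copy from `query_flow_record` keeps callers from mutating the stored circulation path.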
Therefore, with the server described in fig. 6, the processing load of each node can be monitored in real time; when a large amount of transaction data is found to be blocked at a certain node, the transaction data of the target transactions exceeding the node's processing load is transferred to replacement nodes of the same type as the node, so that the processing loads of nodes of the same type are reasonably balanced and the current problem of protracted transaction processing is alleviated. The change in a node's processing load can be known in advance, which facilitates reasonable allocation of the node's transaction data and optimizes the load-balancing effect; in addition, the response speed can be improved, further improving the user's interactive experience.
The invention also provides an electronic device, comprising:
a processor;
and a memory having stored thereon computer readable instructions which, when executed by the processor, implement the intelligent deployment method of distributed transactions as previously described.
The electronic device may be the apparatus 100 shown in fig. 1.
In an exemplary embodiment, the present invention also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the intelligent allocation method for distributed transaction processing as previously described.
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (8)

1. An intelligent deployment method for distributed transaction processing, which is characterized by comprising the following steps:
the method comprises the steps that a server obtains transaction inflow information of a certain node and transaction outflow information of the node; the transaction inflow information comprises transaction data of all first transactions flowing into the node and transaction inflow logs carrying transaction identification numbers of the first transactions, and the transaction outflow information comprises transaction data of all second transactions flowing out of the node and transaction outflow logs carrying transaction identification numbers of the second transactions;
The server calculates the difference between the number of the transaction data of the first transaction flowing into the node and the number of the transaction data of the second transaction flowing out of the node as the current processing load of the node according to the transaction identification number of the first transaction carried by the transaction inflow log and the transaction identification number of the second transaction carried by the transaction outflow log;
the server judges whether the current processing load is larger than a processing load threshold value; the processing load threshold is the maximum processing load reported by the node;
if the current processing load is greater than the processing load threshold, the server transfers transaction data of at least one target transaction on the node to a replacement node, wherein the target transaction is at least one of all transactions in the first transactions on the node except the second transactions;
if the current processing load is not greater than the processing load threshold, the server inputs the current processing load to a processing load prediction model; the server determines the predicted processing load of the node based on the output result of the processing load prediction model; the server judges whether the predicted processing load is greater than the processing load threshold; and if the predicted processing load is greater than the processing load threshold, the server executes the transferring of the transaction data of at least one target transaction on the node to a replacement node.
2. The method of claim 1, wherein the transaction inflow log further carries a transaction type of the first transaction, wherein the transaction outflow log further carries a transaction type of the second transaction, and wherein the server transferring transaction data of at least one target transaction on the node to a replacement node if the current processing load is greater than the processing load threshold comprises:
the server determines at least one target transaction from the current transactions;
the server queries a preset transaction type and node comparison table to find out a replacement node with the same transaction type as the target transaction;
the server transfers the transaction data of the at least one target transaction to the replacement node, so that the replacement node completes processing of the transaction data of the at least one target transaction.
3. The method according to any one of claims 1-2, wherein after the server transfers transaction data of at least one target transaction on the node to a replacement node, the method further comprises:
the server acquires the flow direction of the target transaction;
And the server updates a flow record of the target transaction according to the flow direction of the target transaction, wherein the flow record is used for tracking and counting the circulation path of the target transaction.
4. The method of claim 1, wherein if the current processing load is not greater than the processing load threshold, the server is further configured to:
the server acquires the historical processing load of the node;
and the server trains an initial neural network model by utilizing the historical processing load to obtain the processing load prediction model.
5. The method of claim 4, wherein the server trains an initial neural network model using the historical processing load to derive the processing load prediction model, comprising:
the server acquires N continuous processing loads from the historical processing loads as samples to be input into an initial neural network model, so that the initial neural network model outputs an output load corresponding to the samples; the N is a positive integer;
the server determining the output load as a target processing load;
The server compares the target processing load with an actual processing load, and updates parameters of the initial neural network model according to a comparison result, wherein the actual processing load is the processing load detected by the server from the node at the moment of acquiring the target processing load;
the server judges whether a loss function of the initial neural network model meets a preset condition, wherein the loss function is used for representing the error between an output load of the initial neural network model and the actual value;
and if the loss function of the initial neural network model meets the preset condition, the server determines the current parameters of the initial neural network model as the parameters of the processing load prediction model, and obtains the processing load prediction model from those parameters.
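The training loop of claims 4-5 — feed windows of N consecutive historical loads, compare the predicted load with the actual next load, update parameters, and stop when the loss meets a preset condition — can be sketched as below. A plain linear model stands in for the initial neural network, and the history values, N, the learning rate, and the loss threshold are all assumptions, not values from the patent.

```python
import numpy as np

# Sketch of claims 4-5: train on windows of N consecutive historical
# processing loads to predict the next load, updating parameters from the
# comparison with the actual load until the loss meets a preset condition.

N = 4
history = np.array([10., 12., 11., 13., 14., 13., 15., 16., 15., 17.])

# Each sample is N consecutive loads; the "actual processing load" is the
# load observed immediately after the window.
X = np.array([history[i:i + N] for i in range(len(history) - N)])
y = history[N:]

w = np.zeros(N)          # model parameters (linear stand-in for the network)
b = 0.0
lr = 1e-4                # assumed learning rate
loss_threshold = 1.0     # assumed preset condition on the loss function

for _ in range(5000):
    pred = X @ w + b                       # target processing load (model output)
    err = pred - y                         # comparison with the actual load
    loss = float(np.mean(err ** 2))        # error between output and actual value
    if loss < loss_threshold:              # preset condition met: keep parameters
        break
    # update parameters according to the comparison result (gradient step)
    w -= lr * (2 / len(y)) * (X.T @ err)
    b -= lr * (2 / len(y)) * float(err.sum())
```

Because mean squared error under gradient descent with a small step size decreases monotonically, the final `loss` is strictly below the initial error even if the preset condition is never reached within the iteration budget.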
6. A server, the server comprising:
the acquisition module is used for acquiring transaction inflow information of a certain node and transaction outflow information of the node; the transaction inflow information comprises transaction data of all first transactions flowing into the node and transaction inflow logs carrying transaction identification numbers of the first transactions, and the transaction outflow information comprises transaction data of all second transactions flowing out of the node and transaction outflow logs carrying transaction identification numbers of the second transactions;
the first determining module, configured to calculate, according to the transaction identifier of the first transaction carried by the transaction inflow log and the transaction identifier of the second transaction carried by the transaction outflow log, the difference between the number of transaction data of the first transactions flowing into the node and the number of transaction data of the second transactions flowing out of the node, as the current processing load of the node;
the judging module is used for judging whether the current processing load is greater than a processing load threshold;
the transfer module is used for transferring the transaction data of at least one target transaction on the node to a replacement node when the judging module judges that the current processing load is greater than the processing load threshold, wherein the target transaction is at least one of the first transactions on the node other than the second transactions; if the current processing load is not greater than the processing load threshold, the server inputs the current processing load into a processing load prediction model; the server determines the predicted processing load of the node based on the output result of the processing load prediction model; the server judges whether the predicted processing load is greater than the processing load threshold; and if the predicted processing load is greater than the processing load threshold, the server transfers the transaction data of the at least one target transaction on the node to a replacement node.
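End to end, the allocation logic of claims 1 and 6 reduces to: count the load as inflowing minus outflowing transactions (from the transaction IDs in the logs), roll out if the threshold is exceeded, and otherwise consult the prediction model. The sketch below illustrates that decision flow; `predict_load` is a stand-in for the processing load prediction model, and all sample IDs and thresholds are made up.

```python
# Minimal sketch of the allocation decision in claims 1 and 6.

def current_processing_load(inflow_log_ids, outflow_log_ids):
    """Current processing load: the difference between the number of first
    transactions flowing in and second transactions flowing out, counted
    from the transaction IDs carried by the inflow/outflow logs."""
    return len(inflow_log_ids) - len(outflow_log_ids)

def should_roll_out(inflow_log_ids, outflow_log_ids, threshold, predict_load):
    """Decide whether target transactions should be rolled out to a
    replacement node."""
    load = current_processing_load(inflow_log_ids, outflow_log_ids)
    if load > threshold:
        return True                      # current overload: roll out now
    predicted = predict_load(load)       # processing load prediction model
    return predicted > threshold         # predicted overload: roll out preemptively

inflow = ["t1", "t2", "t3", "t4", "t5"]   # first transactions (illustrative IDs)
outflow = ["t1", "t2"]                    # second transactions already flowed out
# Assume a toy model that forecasts a 50% load increase.
decision = should_roll_out(inflow, outflow, threshold=4,
                           predict_load=lambda load: load * 1.5)
```

Here the current load is 3, which does not exceed the threshold of 4, but the predicted load of 4.5 does, so the node would roll transactions out preemptively — the second branch of claim 6.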
7. A computer readable storage medium, characterized in that it stores a computer program that causes a computer to execute the intelligent allocation method for distributed transaction processing according to any one of claims 1 to 5.
8. An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1-5 when the computer program is executed.
CN201811191526.7A 2018-10-12 2018-10-12 Intelligent allocation method for distributed transaction processing and server Active CN109523123B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811191526.7A CN109523123B (en) 2018-10-12 2018-10-12 Intelligent allocation method for distributed transaction processing and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811191526.7A CN109523123B (en) 2018-10-12 2018-10-12 Intelligent allocation method for distributed transaction processing and server

Publications (2)

Publication Number Publication Date
CN109523123A CN109523123A (en) 2019-03-26
CN109523123B true CN109523123B (en) 2024-04-05

Family

ID=65771813

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811191526.7A Active CN109523123B (en) 2018-10-12 2018-10-12 Intelligent allocation method for distributed transaction processing and server

Country Status (1)

Country Link
CN (1) CN109523123B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20000073142A (en) * 1999-05-06 2000-12-05 서평원 Apparatus And Method For Overload State Sensing Of Message
CN102055675A (en) * 2011-01-21 2011-05-11 清华大学 Multipath routing distribution method based on load equilibrium
CN104580396A (en) * 2014-12-19 2015-04-29 华为技术有限公司 Task scheduling method, node and system
CN105991458A (en) * 2015-02-02 2016-10-05 中兴通讯股份有限公司 Load balancing method and load balancing device
CN106101232A (en) * 2016-06-16 2016-11-09 北京思源置地科技有限公司 Load-balancing method and device
CN106302161A (en) * 2016-08-01 2017-01-04 广东工业大学 Perception data transmission method based on load estimation, device, path control deivce
CN106484530A (en) * 2016-09-05 2017-03-08 努比亚技术有限公司 A kind of distributed task dispatching O&M monitoring system and method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A MINI workflow process meta-model supporting dynamic modification; Qi Xuan; Computer Engineering and Science; 2007-07-15 (No. 07); pp. 138-140 *
Chen Shihao et al. Research on prediction-based load balancing for computational grids. Aeronautical Computing Technique. 2006, Vol. 26 (No. 02), pp. 82-85. *

Also Published As

Publication number Publication date
CN109523123A (en) 2019-03-26

Similar Documents

Publication Publication Date Title
CN110414731B (en) Order distribution method and device, computer readable storage medium and electronic equipment
CN106294614A (en) Method and apparatus for access service
CN109509017B (en) User retention prediction method and device based on big data analysis
CN109087090A (en) Target is tracked using account book trusty
CN109684554A (en) The determination method and news push method of the potential user of news
CN107665150A (en) Event Service modeling framework for computer system
US11593735B2 (en) Automated and efficient personal transportation vehicle sharing
CN110018920A (en) A kind of detection method, device and the equipment of page presentation exception
CN110162442A (en) A kind of system performance bottleneck localization method and system
CN110162464A (en) Mcok test method and system, electronic equipment and readable storage medium storing program for executing
US9064286B2 (en) Social network service providing system and method for setting relationship between users based on motion of mobile terminal and information about time
CN109614092A (en) Atomic service method of combination and device, electronic equipment based on micro services framework
CN107423220A (en) The detection method and device of RAM leakage, electronic equipment
CN110119354A (en) Method for testing software, device and electronic equipment based on Test cases technology
CN110428120A (en) Real-time personal mobility planning system
CN109766247B (en) Alarm setting method and system based on system data monitoring
Sun et al. On the tradeoff between sensitivity and specificity in bus bunching prediction
CN109523123B (en) Intelligent allocation method for distributed transaction processing and server
US20230267400A1 (en) Artificially intelligent warehouse management system
CN107609810A (en) Article scheduling method for tracing, device, dispatch terminal and server
CN110716914A (en) Database configuration method, system, computer readable storage medium and terminal equipment
CN109472546A (en) A kind of intelligent control method and server of distributing real time system
US20220374341A1 (en) Techniques for decoupled management of software test execution planning and corresponding software test execution runs
US20220374342A1 (en) Techniques for decoupled management of software test execution planning and corresponding software test execution runs
US20210097413A1 (en) Predictive Readiness and Accountability Management

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant