CN111580939B - Method and device for processing transactions in hierarchical and asynchronous mode - Google Patents

Method and device for processing transactions in hierarchical and asynchronous mode

Info

Publication number
CN111580939B
CN111580939B
Authority
CN
China
Prior art keywords
transaction
queue
subtasks
subtask
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010249946.7A
Other languages
Chinese (zh)
Other versions
CN111580939A (en)
Inventor
李传松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Weimeng Chuangke Network Technology China Co Ltd
Original Assignee
Weimeng Chuangke Network Technology China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Weimeng Chuangke Network Technology China Co Ltd filed Critical Weimeng Chuangke Network Technology China Co Ltd
Priority to CN202010249946.7A priority Critical patent/CN111580939B/en
Publication of CN111580939A publication Critical patent/CN111580939A/en
Application granted granted Critical
Publication of CN111580939B publication Critical patent/CN111580939B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/466Transaction processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiment of the application provides a method and a device for processing transactions hierarchically and asynchronously, wherein the method comprises the following steps: dividing a transaction into a plurality of subtasks according to the internal business logic relationship of the transaction; marking, in a pre-queue, the subtasks that carry a priority-processing designation and that, according to the internal business logic relationship, need to be processed first, and marking the other subtasks in a post-queue; executing the subtasks in the pre-queue of the transaction; passing, according to the internal business logic relationship of the transaction, the parameters generated by executing the subtasks in the pre-queue to the other associated subtasks; after the subtasks in the pre-queue have been processed, executing the subtasks in the post-queue of the transaction; and feeding back and displaying the execution results of all subtasks of the transaction. Subtasks of high importance can be executed first, so that the processing period of the whole transaction is shortened and the processing efficiency of transactions is improved.

Description

Method and device for processing transactions in hierarchical and asynchronous mode
Technical Field
The application relates to the field of multi-task transaction processing, and in particular to a method and a device for processing transactions hierarchically and asynchronously.
Background
In the micro public welfare project, after a user initiates a donation and pays successfully, a payment-success transaction is started. The whole transaction is divided into: updating project donation information, updating personal donation information, updating list information, sharing to the microblog, sending private-message notifications respectively to the project owner and the donor, recording the transaction flow, updating order information, and checking the payment information. After the payment callback arrives, the transaction enters a transaction queue, and a queue-processing task receives it and starts processing the transaction. The methods or call interfaces of the individual classes are executed in the order dictated by the business sequence and the dependency logic organized in the program code. When execution or invocation of a sub-module fails, the transaction stops and the next queued transaction is processed; if execution completes normally, a final checking task is executed, any abnormal records are noted in remarks, and once the check finishes the transaction is complete.
In carrying out the present application, the applicant has found that at least the following problems exist in the prior art:
strongly consistent, dependent, sequential execution is required during the entire run, and once one step fails the entire transaction aborts or is considered to have failed. All subtasks are run in one execution flow, so the processing period of the whole transaction is long and the processing of subsequent transactions is affected, which in turn causes queue accumulation and business anomalies.
Disclosure of Invention
The embodiment of the application provides a method and a device for processing transactions hierarchically and asynchronously, which adjust the processing priority of the subtask modules while a transaction is being executed: subtasks of high importance can be executed first, while subtasks that can tolerate delay are handled later by asynchronous delayed processing. The processing period of the whole transaction is thereby shortened, the processing efficiency of subsequent transactions is improved, and normal processing of the transaction queue is ensured so that accumulation is avoided.
To achieve the above object, in one aspect, an embodiment of the present application provides a method for processing transactions hierarchically and asynchronously, including:
dividing a transaction into a plurality of subtasks according to the internal business logic relationship of the transaction, marking, in a pre-queue, the subtasks that carry a priority-processing designation and that, according to the internal business logic relationship, need to be processed first, and marking the other subtasks of the transaction in a post-queue;
traversing the subtasks of the transaction according to a task list and executing the subtasks in the pre-queue of the transaction; passing, according to the internal business logic relationship of the transaction, the parameters generated by executing the subtasks in the pre-queue to the other associated subtasks;
after the subtasks in the pre-queue of the transaction have been processed, executing the subtasks in the post-queue of the transaction in combination with the parameters generated by executing the subtasks in the pre-queue;
and feeding back and displaying the execution results of all subtasks of the transaction.
In another aspect, an embodiment of the present application further provides an apparatus for processing transactions hierarchically and asynchronously, including:
the subtask marking module, which is used for dividing a transaction into a plurality of subtasks according to the internal business logic relationship of the transaction, marking, in a pre-queue, the subtasks that carry a priority-processing designation and that, according to the internal business logic relationship, need to be processed first, and marking the other subtasks of the transaction in a post-queue;
the subtask priority processing module, which is used for traversing the subtasks of the transaction according to a task list and executing the subtasks in the pre-queue of the transaction, and for passing, according to the internal business logic relationship of the transaction, the parameters generated by executing the subtasks in the pre-queue to the other associated subtasks;
the subtask delay processing module, which is used for executing the subtasks in the post-queue of the transaction, in combination with the parameters generated by executing the subtasks in the pre-queue, after the subtasks in the pre-queue of the transaction have been processed;
and the transaction result feedback module, which is used for feeding back and displaying the execution results of all subtasks of the transaction.
The technical scheme above has the following beneficial effects: by splitting a transaction into a plurality of subtasks according to its business logic relationship, the processing priority of the subtask modules can be adjusted while the transaction is executed under high load; subtasks of high importance are executed first, and subtasks that can tolerate delay are handled later by asynchronous delayed processing. The processing period of the whole transaction is therefore shortened, and normal processing of the transaction queue is ensured so that accumulation is avoided.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a hierarchical asynchronous transaction method implemented by the present application;
FIG. 2 is a block diagram of a hierarchical asynchronous transaction device embodying the present application;
FIG. 3 is a schematic diagram of a hierarchical asynchronous transaction flow implemented in accordance with the present application;
FIG. 4 is a flow chart of hierarchical asynchronous transaction setup logic implemented by the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
As shown in fig. 1, in connection with an embodiment of the present application, there is provided a method of processing transactions hierarchically and asynchronously, including:
S101: dividing a transaction into a plurality of subtasks according to the internal business logic relationship of the transaction, marking, in a pre-queue, the subtasks that carry a priority-processing designation and that, according to the internal business logic relationship, need to be processed first, and marking the other subtasks of the transaction in a post-queue;
S102: traversing the subtasks of the transaction according to a task list and executing the subtasks in the pre-queue of the transaction; passing, according to the internal business logic relationship of the transaction, the parameters generated by executing the subtasks in the pre-queue to the other associated subtasks;
S103: after the subtasks in the pre-queue of the transaction have been processed, executing the subtasks in the post-queue of the transaction in combination with the parameters generated by executing the subtasks in the pre-queue;
S104: feeding back and displaying the execution results of all subtasks of the transaction (a sketch of these steps is given below).
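For illustration only, the following sketch shows how steps S101 to S104 could be wired together in Python; the Subtask record, the shared_params dictionary, and all function names are assumptions for this sketch and are not part of the claimed method.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class Subtask:
    name: str
    run: Callable[[Dict[str, Any]], Any]   # business method of the subtask
    priority: bool = False                 # True -> pre-queue, False -> post-queue
    result: Any = None

def process_transaction(subtasks: List[Subtask]) -> Dict[str, Any]:
    # S101: split the transaction into pre-queue and post-queue by the priority designation
    pre_queue = [t for t in subtasks if t.priority]
    post_queue = [t for t in subtasks if not t.priority]

    shared_params: Dict[str, Any] = {}     # parameters passed between associated subtasks

    # S102: traverse and execute the pre-queue, publishing parameters for dependent subtasks
    for task in pre_queue:
        task.result = task.run(shared_params)
        shared_params[task.name] = task.result

    # S103: after the pre-queue is done, execute the post-queue with the produced parameters
    for task in post_queue:
        task.result = task.run(shared_params)
        shared_params[task.name] = task.result

    # S104: feed back the execution results of all subtasks
    return {t.name: t.result for t in subtasks}
```

For example, calling process_transaction([Subtask("update_order", lambda p: "ok", priority=True)]) would return {"update_order": "ok"}; the retry and compensation behaviour of the post-queue is omitted here and sketched further below.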
Preferably, step 101 specifically includes:
distinguishing the internal business logic relationships of the subtasks of the transaction through a configuration file of the transaction, and marking the subtasks into the pre-queue or the post-queue through the subtask attributes in the configuration file; the subtask attributes in the configuration file include: the processing-level designation of each subtask, and the processing-order relationship of each subtask with its associated subtasks according to the internal business logic relationship.
Preferably, in step 102, after the subtasks in the pre-queue of the transaction are processed, the method further includes:
S1021: checking whether any subtask request in the pre-queue of the transaction has failed, and re-marking the subtasks whose requests failed into the post-queue of the transaction;
S1022: checking whether the execution results of the subtasks in the pre-queue of the transaction are correct, and re-marking the subtasks whose execution results are wrong into the post-queue of the transaction (a sketch of these two checks follows).
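One possible implementation of the two checks in S1021/S1022 is sketched below, assuming each pre-queue subtask is represented as a dictionary with "request_failed" and "result_ok" flags; the field names and the function name are illustrative assumptions.

```python
def demote_failed_subtasks(pre_tasks, post_queue):
    """S1021/S1022: move pre-queue subtasks whose request failed, or whose
    execution result is wrong, into the post-queue for delayed re-processing."""
    for task in list(pre_tasks):                # iterate over a copy so removal is safe
        request_failed = task.get("request_failed", False)
        result_wrong = not task.get("result_ok", True)
        if request_failed or result_wrong:
            task["reprocess"] = True            # re-processing designation
            post_queue.append(task)
            pre_tasks.remove(task)
    return post_queue
```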
Preferably, in step 103, when executing the subtasks in the post-queue of the transaction, the method further includes:
S1031: cyclically checking whether any subtask request in the post-queue of the transaction has failed, and executing the subtasks whose requests failed again, until all subtasks in the post-queue of the transaction have been executed or a set termination condition is met;
S1032: cyclically checking whether the execution results of the subtasks in the post-queue of the transaction are correct, and executing the subtasks whose execution results are wrong again, until all subtasks in the post-queue of the transaction have been executed or the set termination condition is met (a sketch of this cyclic check follows).
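A sketch of the cyclic retry/compensation in S1031/S1032, under the assumption that each post-queue entry carries a callable "run", an optional "verify" check on its result, and an "exec_num" counter, and that the termination condition is the configured upper limits on execution count and total time; all names are illustrative.

```python
import time

def drain_post_queue(post_queue, limit_exec_num=5, limit_exec_time=60.0):
    """S1031/S1032: keep executing post-queue subtasks until all succeed or the
    set termination condition (execution-count / total-time limit) is met."""
    start = time.time()
    while post_queue and (time.time() - start) < limit_exec_time:
        task = post_queue.pop(0)
        task["exec_num"] = task.get("exec_num", 0) + 1
        try:
            result = task["run"]()                          # retry after a failed request
            if task.get("verify", lambda r: True)(result):  # compensation check on the result
                task["success"] = True
                continue                                    # done; drop it from the queue
        except Exception:
            pass                                            # request failed again
        if task["exec_num"] < limit_exec_num:
            post_queue.append(task)                         # re-enqueue for another attempt
        else:
            task["success"] = False                         # give up; report for manual check
```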
Preferably, after all the subtasks in the post-queue of the transaction are executed or the set termination condition is met, the method further comprises:
S105: after checking that all subtasks in the post-queue of the transaction have been executed or that the set termination condition is met, feeding back the subtasks whose execution results are wrong, or whose execution failed, for manual inspection;
S106: when the condition for re-executing a subtask is satisfied, manually putting the subtask whose execution result is wrong, or whose execution failed, into the post-queue and executing it again.
Preferably, step 101 specifically includes:
when the number of transactions is lower than a preset threshold value, marking all subtasks of the transaction in the pre-queue.
As shown in fig. 2, in combination with an embodiment of the present application, there is also provided an apparatus for processing transactions hierarchically and asynchronously, including:
a subtask marking module 21, configured to divide a transaction into a plurality of subtasks according to the internal business logic relationship of the transaction, mark, in a pre-queue, the subtasks that carry a priority-processing designation and that, according to the internal business logic relationship, need to be processed first, and mark the other subtasks of the transaction in a post-queue;
a subtask priority processing module 22, configured to traverse the subtasks of the transaction according to a task list and execute the subtasks in the pre-queue of the transaction, and to pass, according to the internal business logic relationship of the transaction, the parameters generated by executing the subtasks in the pre-queue to the other associated subtasks;
a subtask delay processing module 23, configured to execute the subtasks in the post-queue of the transaction, in combination with the parameters generated by executing the subtasks in the pre-queue, after the subtasks in the pre-queue of the transaction have been processed;
and a transaction result feedback module 24, configured to feed back and display the execution results of all subtasks of the transaction.
Preferably, the subtask marking module 21 is specifically configured to:
distinguishing the internal business logic relationships of the subtasks of the transaction through a configuration file of the transaction, and setting which subtasks go into the pre-queue and which go into the post-queue through the subtask attributes in the configuration file; the subtask attributes in the configuration file include: the processing-level designation of each subtask, and the processing-order relationship of each subtask with its associated subtasks according to the internal business logic relationship.
Preferably, the subtask priority processing module includes:
a first checking sub-module 221, configured to check whether any subtask request in the pre-queue of the transaction has failed and/or to check whether the execution results of the subtasks in the pre-queue of the transaction are correct;
a subtask demotion sub-module 222, configured to, when the first checking sub-module 221 detects that a subtask request in the pre-queue of the transaction has failed, re-mark the subtask whose request failed into the post-queue of the transaction; and/or, when the first checking sub-module 221 detects that the execution result of a subtask in the pre-queue of the transaction is wrong, re-mark the subtask whose execution result is wrong into the post-queue of the transaction.
Preferably, the subtask delay processing module includes:
a second checking sub-module 231, configured to cyclically check whether any subtask request in the post-queue of the transaction has failed, and/or to cyclically check whether the execution results of the subtasks in the post-queue of the transaction are correct;
a retry sub-module 232, configured to execute a subtask again when the second checking sub-module 231 cyclically detects that the subtask's request in the post-queue of the transaction has failed, until all subtasks in the post-queue of the transaction have been executed or a set termination condition is met;
and a compensation sub-module 233, configured to execute a subtask again when the second checking sub-module 231 cyclically detects that the subtask's execution result in the post-queue of the transaction is wrong, until all subtasks in the post-queue of the transaction have been executed or the set termination condition is met.
Preferably, the subtask delay processing module further includes:
a third checking sub-module 25, configured to check, after all subtasks in the post-queue of the transaction have been executed or the set termination condition is met, which subtasks have wrong execution results or failed to execute, and to feed those subtasks back for manual inspection;
and a manual checking retry sub-module 26, configured to, when the condition for re-executing a subtask is satisfied, manually put the subtask whose execution result is wrong, or whose execution failed, into the post-queue and execute it again.
Preferably, the subtask marking module 21 is specifically configured to:
when the number of transactions is lower than a preset threshold value, all subtasks of the transaction are marked in the pre-queue.
The embodiment of the application has the beneficial effects that:
1. Hierarchical processing: in asynchronous transaction processing, a transaction is decomposed into a plurality of subtask modules, which may have parallel, dependency, and sequential relationships with one another. Under high load, the processing priority of the subtask modules is adjusted during execution: subtasks of high importance are executed first, while subtasks that can tolerate delay are handled later by asynchronous delayed processing. The processing period of the whole transaction is thereby shortened, and normal processing of the transaction queue is ensured so that accumulation is avoided. This avoids the problem in the prior art that all subtasks are completed in a single execution flow, making the whole transaction processing period long, delaying the processing of subsequent incoming transactions, and thereby causing queue accumulation and business anomalies; it also overcomes the defect that, when the processing volume is large, the subtask modules cannot be executed hierarchically and real-time performance is poor.
Moreover, depending on the traffic situation (number of transactions), subtasks can be designated for pre-processing or post-processing through their task attributes, and this change likewise requires no code change.
2. Flexibility: the complex logic of a business transaction is defined in a configuration file, so that after the transaction is decomposed into several tasks, their parallel, sequential, and dependency relationships can all be realized according to the configuration file. Adding or deleting tasks can be controlled flexibly through the configuration file without changing code.
3. Stability and reliability: during execution, the failure of a single subtask does not affect the execution of the subsequent subtasks. Where other subtasks have dependency relationships, all executed subtasks of the transaction are checked automatically in the pre-queue after execution finishes, and the execution result of each subtask in the post-queue is checked automatically after it finishes; any subtask that was not executed, or that executed incorrectly, is put back into the queue for retry. An upper limit on the number of retries can be set according to the business characteristics, and when the upper limit is reached the transaction is considered to have failed. A data consistency and completeness check is also required after the whole transaction finishes; if a subtask is found to have failed, compensation processing is needed. This avoids the prior-art defect that, when a subtask fails partway through, it cannot be determined whether the transaction can safely be executed again.
4. Self-repair and compensation: when compensation is needed after a problem has been checked manually, the transaction information is pushed directly into the post compensation queue so that the failed subtasks are executed again, until the transaction finally terminates. The success rate of transactions is thus greatly improved through the self-repair and compensation mechanism.
The foregoing technical solutions of the embodiments of the present application will be described in detail with reference to specific application examples, and reference may be made to the foregoing related description for details of the implementation process that are not described.
As shown in fig. 3 and fig. 4, the present application provides a hierarchical asynchronous transaction processing solution. In the micro public welfare project, a user initiates a donation, and after the payment succeeds a payment-success transaction is started. The whole transaction is divided into: updating project donation information, updating personal donation information, updating list information, sharing to the microblog, sending private-message notifications respectively to the project owner and the donor, recording the transaction flow, updating order information, and checking the payment information. Updating the project information and updating the personal donation information are in a parallel relationship; updating the order information depends on updating the project information; recording the transaction flow is in a sequential relationship with updating the project and personal donation information; the private-message notification has a lower processing priority than the other operations; and finally the payment-information check verifies all subtasks once.
When the user completes the payment on the payment platform (microblog wallet, WeChat Pay, Alipay, etc.), the platform's callback completes the related transaction, including the notification that the payment is complete. In the asynchronous transaction processing of the application, one transaction may need to be decomposed into a plurality of subtask modules, which may have parallel, dependency, and sequential relationships with one another. When the processing volume is large, the subtask modules need to be executed hierarchically: subtasks with high real-time requirements and high importance are executed first, while subtasks that can tolerate delay are handled by asynchronous delayed processing. The specific operation flow is as follows:
1. First, the transaction is started by the message in the pre-queue. The transfer unit of the queues is the transaction, i.e. each queue message corresponds to one transaction, and each transaction contains a plurality of tasks. When the processing volume is large, the subtask modules need to be executed hierarchically: subtasks with high real-time requirements and high importance are executed first, and subtasks that can tolerate delay are handled by asynchronous delayed processing. That is: the transaction is divided into a plurality of subtasks according to its internal business logic relationship; the subtasks that carry a priority-processing designation and that, according to the internal business logic relationship, need to be processed first are marked in the pre-queue, and the other subtasks of the transaction are marked in the post-queue.
2. The configuration file is read according to the business name of the transaction to be executed, in order to initialize the transaction execution information (such as the information in fig. 4); the business logic relationships of the transaction's subtasks are distinguished through the configuration file, and the subtasks placed in the pre-queue and in the post-queue are set through the subtask attributes in the configuration file. The configuration file has two levels: the first level is the definition of transactions, and the second level is the configuration of the subtasks under each transaction. Transactions or subtasks can be added or deleted and controlled flexibly through the configuration file without code changes. The internal business logic relationships of the subtasks are distinguished through the configuration file of the transaction, and the subtasks are marked into the pre-queue or the post-queue through the subtask attributes in the configuration file. The subtask attributes in the configuration file include: the processing-level designation of each subtask, and the processing-order relationship of each subtask with its associated subtasks according to the internal business logic; the processing-level designations include a priority-processing designation, a delayed-processing designation, and a re-processing designation. When processing of a transaction starts and its subtasks are executed, the processing-level designation of a subtask is either the priority-processing designation or the delayed-processing designation: subtasks with the priority-processing designation are executed in the pre-queue, and subtasks with the delayed-processing designation are executed in the post-queue. During subtask processing, when a subtask must be executed again for any reason, such as an execution failure, the re-processing designation is used to identify it, indicating that the subtask needs to be executed again.
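The three processing-level designations described above can be modelled, for illustration, as a small enumeration; the names PRIORITY, DELAY, and REPROCESS below are assumptions, not terms from the application.

```python
from enum import Enum

class ProcessingLevel(Enum):
    PRIORITY = "priority"     # priority-processing designation: executed in the pre-queue
    DELAY = "delay"           # delayed-processing designation: executed in the post-queue
    REPROCESS = "reprocess"   # re-processing designation: set at run time when a subtask must run again
```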
In the logic flow diagram of FIG. 4, the global parameters include static configuration parameters and dynamic runtime parameters. The static configuration parameters include: pre_mcq_name: the pre-queue name; post_mcq_name: the post-queue name; busname: the service name; process: the set of methods for processing the service; ispost: whether delayed processing applies; limit_exec_num: the upper limit on the number of executions; limit_exec_time: the upper limit on execution time.
The dynamic runtime parameters include: param: the class method and variables; success: the execution result; exec_num: the number of executions; total_exec_time: the total execution time; last_exec_time: the time of the last execution; result: the result set of the execution.
In FIG. 4, the global parameters, the delay-queue name, and the task list are all used to initialize the configuration; the task list includes the method name, parameters, and running state. An illustrative configuration sketch follows.
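As a concrete illustration of these parameters, the sketch below shows what an initialization configuration for one transaction might look like; the dictionary layout, the queue names, the service name "pay_success", and the method names are assumptions for illustration only, since the application does not prescribe a particular file format.

```python
# Static configuration for a "pay_success" transaction (all names are illustrative).
TRANSACTION_CONFIG = {
    "busname": "pay_success",             # service name of the transaction
    "pre_mcq_name": "pay_success_pre",    # pre-queue name
    "post_mcq_name": "pay_success_post",  # post-queue name
    "process": [                          # task list: the method set for the service
        {"method": "update_project_donation", "ispost": False},
        {"method": "update_personal_donation", "ispost": False},
        {"method": "update_order_info", "ispost": False},
        {"method": "send_private_message", "ispost": True},   # delayed processing
        {"method": "check_payment_info", "ispost": True},
    ],
    "limit_exec_num": 5,       # upper limit on the number of executions
    "limit_exec_time": 300,    # upper limit on total execution time (seconds)
}

# Dynamic run-time state kept per subtask while the transaction executes.
RUNTIME_TEMPLATE = {
    "param": {},             # class method / variables passed in
    "success": None,         # execution result flag
    "exec_num": 0,           # number of executions so far
    "total_exec_time": 0.0,  # total time spent executing
    "last_exec_time": None,  # timestamp of the last execution
    "result": None,          # result set of the execution
}
```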
3. Sequentially executing according to the task list:
the order of execution of the subtasks in the task list is illustrated in the configuration file, and the task list functions to describe which processing logic the entire transaction needs to complete.
After a transaction is taken, the queue traverses one by one according to the task list information. In the pre-queue, whether each subtask runs is determined according to the configuration item and the execution condition in the task information in the configuration file.
Depending on the traffic situation (number of transactions), sub-tasks processed in the pre-queue and post-queue may be defined based on task attributes on the basis of a well-defined logical relationship. For example, if the number of the transactions is large, some subtasks can be processed in the front queue preferentially, and other subtasks are executed in the rear queue, so that the task execution number of the transactions is degraded, and the swallowing amount of the front queue is increased. When all transactions are low, all sub-tasks for the transaction are marked in the pre-queue.
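A sketch of this traffic-based rule, under the assumption that traffic is measured as the number of pending transactions and that a single threshold decides whether everything stays in the pre-queue; the function name, the threshold value, and the dictionary-based subtask records are illustrative.

```python
def split_by_traffic(subtasks, pending_transactions, threshold=1000):
    """Decide which subtasks run in the pre-queue and which are degraded to the
    post-queue, based on the current number of transactions (traffic)."""
    if pending_transactions < threshold:
        return list(subtasks), []                          # low traffic: everything in the pre-queue
    pre = [t for t in subtasks if t.get("priority")]       # priority-designated subtasks stay up front
    post = [t for t in subtasks if not t.get("priority")]  # the rest are degraded to the post-queue
    return pre, post
```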
4. Each time a task executes, it passes its dependent parameters on to other tasks according to the business logic, for example: the number of executions, total time consumed, last execution time, and the state of the last execution result; the associated business of a dependent task can then be carried out using the parameters obtained from those other tasks (a sketch of this hand-off follows).
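As a sketch of this parameter hand-off, the helper below records, after each task runs, the values that dependent tasks may read from a shared context; the field names follow the dynamic runtime parameters of FIG. 4, but the structure itself is an assumption.

```python
def record_execution(context, task_name, result, started_at, finished_at):
    """Publish a task's execution data so that dependent tasks can read it."""
    entry = context.setdefault(task_name, {"exec_num": 0, "total_exec_time": 0.0})
    entry["exec_num"] += 1                                # number of executions
    entry["total_exec_time"] += finished_at - started_at  # total time of execution
    entry["last_exec_time"] = finished_at                 # last execution time
    entry["result"] = result                              # result set of the execution
    entry["success"] = result is not None                 # last execution result state
    return context
```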
5. The task list is checked, and the transaction is written into the post-queue according to the execution state and whether delayed processing is required. Specifically:
The number of entries in the task list and the initialization configuration of each transaction do not change; what changes is the state information after each run (whether execution succeeded, the number of executions, and the time). The condition used to judge delayed processing is configured per task in the configuration file.
The subtasks of the transaction that are processed with priority are executed in the pre-queue; the subtasks that are not processed with priority are degraded once, and the subtasks with low processing priority are completed in the post-queue.
In the pre-queue, the subtasks are traversed and executed; after execution finishes, the completion status of all subtasks must be checked to decide whether to write some subtasks into the post-queue for delayed processing (re-processing); delayed processing likewise writes the whole transaction into the post-queue. The cases for delayed processing (re-processing) include: (1) subtasks degraded in the pre-queue, and subtasks whose requests failed in the pre-queue, are written into the post-queue, where they are retried; (2) when a subtask in the pre-queue has executed but its result is wrong, the transaction is written into the post-queue and the related tasks are compensated. Subtasks in the post-queue may be executed several times, so that retry and compensation of every subtask of the transaction are realized in the post-queue. Retry means that a subtask is executed again after a timeout (request failure); compensation means that the subtask executed but produced a wrong result and therefore needs to be executed again (a sketch of this decision follows).
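The write-back decision at the end of the pre-queue pass can be sketched as below, assuming each subtask dictionary records whether it was delay-marked, whether its request failed, and whether its result verified; the queue is left as a plain list and all names are illustrative.

```python
def check_and_requeue(transaction, post_queue):
    """After the pre-queue pass, write the whole transaction into the post-queue
    if any subtask still needs delayed processing, retry, or compensation."""
    needs_post = any(
        t.get("ispost")                                    # delay-marked or degraded subtask
        or t.get("request_failed")                         # retry case: request failed / timed out
        or (t.get("executed") and not t.get("result_ok"))  # compensation case: ran but result is wrong
        for t in transaction["tasks"]
    )
    if needs_post:
        post_queue.append(transaction)                     # re-process the transaction later
    return needs_post
```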
6. Steps 2-6 are then repeated for the post-queue, namely:
Step 2: the configuration file is read according to the business name of the transaction to be executed in order to initialize the transaction execution information (such as the information in fig. 4); the business logic relationships of the transaction's subtasks are distinguished through the configuration file, and the subtasks placed in the pre-queue and in the post-queue are set through the subtask attributes in the configuration file.
Step 3: the subtasks of the transactions placed in the post-queue are executed in order according to the task list.
Step 4: each time a task executes, it passes its dependent parameters on to other tasks according to the business logic, for example: the number of executions, total time consumed, last execution time, and the state of the last execution result; the associated business of a dependent task can then be carried out using the parameters obtained from those other tasks.
Step 5: the task list is checked, and some subtasks are written into the post-queue again according to their execution state and whether delayed processing is required. Specifically:
(1) When the second checking sub-module cyclically detects that a subtask request in the post-queue of the transaction has failed, the subtask whose request failed is written into the post-queue again for retry; retry means that the subtask is executed again after a timeout (request failure).
(2) When the second checking sub-module cyclically detects that the execution result of a subtask in the post-queue of the transaction is wrong, the subtask whose result is wrong is written into the post-queue again for compensation; compensation means that the subtask executed but produced a wrong result and needs to be executed again.
For the subtasks of the post-queue, steps 2-6 are repeated until it is checked that all of them have been executed or the termination condition is met, at which point the transaction is considered complete; the subtasks still to be executed keep cycling through steps 2-6 in the delay queue.
7. If a subtask fails to execute or its execution result is wrong, the cause is checked manually; there are various possible causes, such as a data exception, a program bug, resource load, or a network problem. Once the cause of the failure is resolved, the subtask is executed in the post-queue during an idle period of the resources, which guarantees the integrity of the transaction. For example, if the number of retries exceeds the upper limit, the execution failure of the transaction is fed back for manual inspection; the operator checks the problem, and if it is a current resource-load problem, the upper limit on the number of retries can be raised and the post-queue restarted (re-processed) during an idle period.
The beneficial effects obtained by the application are as follows:
the application realizes the hierarchical processing of the transaction (one transaction comprises a plurality of subtasks) mainly by establishing the front queue and the rear queue, and realizes the integrity of the execution of the transaction by a configurable task list. The post-queue is also compatible with the functions of retry, compensation, fault tolerance, inspection and degradation of the transaction, and a complex transaction can degrade a plurality of tasks to be flexibly configured and executed, so that the success rate and service availability are improved.
1. Hierarchical processing: in asynchronous transaction processing, a transaction is decomposed into a plurality of subtask modules, which may have parallel, dependency, and sequential relationships with one another. Under high load, the processing priority of the subtask modules is adjusted during execution: subtasks of high importance are executed first, while subtasks that can tolerate delay are handled later by asynchronous delayed processing. The processing period of the whole transaction is thereby shortened, and normal processing of the transaction queue is ensured so that accumulation is avoided. This avoids the problem in the prior art that all subtasks are completed in a single execution flow, making the whole transaction processing period long, delaying the processing of subsequent incoming transactions, and thereby causing queue accumulation and business anomalies; it also overcomes the defect that, when the processing volume is large, the subtask modules cannot be executed hierarchically and real-time performance is poor.
Moreover, depending on the traffic situation (number of transactions), subtasks can be designated for pre-processing or post-processing through their task attributes, and this change likewise requires no code change.
2. Flexibility: the complex logic of a business transaction is defined in a configuration file, so that after the transaction is decomposed into several tasks, their parallel, sequential, and dependency relationships can all be realized according to the configuration file. Adding or deleting tasks can be controlled flexibly through the configuration file without changing code.
3. Stability and reliability: during execution, the failure of a single subtask does not affect the execution of the subsequent subtasks. Where other subtasks have dependency relationships, all executed subtasks of the transaction are checked automatically in the pre-queue after execution finishes, and the execution result of each subtask in the post-queue is checked automatically after it finishes; any subtask that was not executed, or that executed incorrectly, is put back into the queue for retry. An upper limit on the number of retries can be set according to the business characteristics, and when the upper limit is reached the transaction is considered to have failed. A data consistency and completeness check is also required after the whole transaction finishes; if a subtask is found to have failed, compensation processing is needed. This avoids the prior-art defect that, when a subtask fails partway through, it cannot be determined whether the transaction can safely be executed again.
4. Self-repair and compensation: when compensation is needed after a problem has been checked manually, the transaction information is pushed directly into the post compensation queue so that the failed subtasks are executed again, until the transaction finally terminates. The success rate of transactions is thus greatly improved through the self-repair and compensation mechanism.
It should be understood that the specific order or hierarchy of steps in the processes disclosed are examples of exemplary approaches. Based on design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged without departing from the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, the inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate preferred embodiment of this application.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art will recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, as used in the specification or claims, the term "including" is intended to be inclusive in a manner similar to the term "comprising" as interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification or claims is intended to mean a "non-exclusive or".
Those of skill in the art will further appreciate that the various illustrative logical blocks (illustrative logical block), units, and steps described in connection with the embodiments of the application may be implemented by electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components (illustrative components), elements, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design requirements of the overall system. Those skilled in the art may implement the described functionality in varying ways for each particular application, but such implementation is not to be understood as beyond the scope of the embodiments of the present application.
The various illustrative logical blocks or units described in the embodiments of the application may be implemented or performed with a general purpose processor, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described. A general purpose processor may be a microprocessor, but in the alternative, the general purpose processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may be stored in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. In an example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may reside in a user terminal. In the alternative, the processor and the storage medium may reside as distinct components in a user terminal.
In one or more exemplary designs, the above-described functions of embodiments of the present application may be implemented in hardware, software, firmware, or any combination of the three. If implemented in software, the functions may be stored on a computer-readable medium or transmitted as one or more instructions or code on the computer-readable medium. Computer-readable media include both computer storage media and communication media that facilitate transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer. For example, such computer-readable media may include, but are not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store program code in the form of instructions or data structures and that can be read by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Further, any connection is properly termed a computer-readable medium; for example, if the software is transmitted from a website, server, or other remote source via a coaxial cable, fiber-optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, it is likewise included in the definition of computer-readable medium. Disk and disc, as used herein, include compact disc, laser disc, optical disc, DVD, floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above may also be included within the scope of computer-readable media.
The foregoing description of the embodiments has been provided for the purpose of illustrating the general principles of the application and is not meant to limit the scope of the application or to limit the application to the particular embodiments shown; any modifications, equivalents, improvements, etc. that fall within the spirit and principles of the application are intended to be included within the scope of the application.

Claims (10)

1. A method of hierarchical asynchronous processing of transactions, comprising:
dividing a transaction into a plurality of subtasks according to the internal business logic relationship of the transaction, marking, in a pre-queue, the subtasks that carry a priority-processing designation and that, according to the internal business logic relationship, need to be processed first, and marking the other subtasks of the transaction in a post-queue;
traversing the subtasks of the transaction according to a task list and executing the subtasks in the pre-queue of the transaction, the subtasks in the pre-queue being executed once; passing, according to the internal business logic relationship of the transaction, the parameters generated by executing the subtasks in the pre-queue to the other associated subtasks;
after the subtasks in the pre-queue of the transaction have been processed, executing the subtasks in the post-queue of the transaction in combination with the parameters generated by executing the subtasks in the pre-queue;
feeding back and displaying the execution results of all subtasks of the transaction;
wherein, after the subtasks in the pre-queue of the transaction have been processed, the method further comprises:
checking whether any subtask request in the pre-queue of the transaction has failed, and re-marking the subtasks whose requests failed into the post-queue of the transaction;
and/or,
checking whether the execution results of the subtasks in the pre-queue of the transaction are correct, and re-marking the subtasks whose execution results are wrong into the post-queue of the transaction.
2. The method of hierarchical asynchronous processing of transactions according to claim 1, wherein the dividing of the transaction into a plurality of subtasks according to the internal business logic relationship of the transaction, the marking, in a pre-queue, of the subtasks that carry a priority-processing designation and that, according to the internal business logic relationship, need to be processed first, and the marking of the other subtasks of the transaction in a post-queue specifically comprises:
distinguishing the internal business logic relationships of the subtasks of the transaction through a configuration file of the transaction, and marking the subtasks into the pre-queue or the post-queue through the subtask attributes in the configuration file; the subtask attributes in the configuration file include: the processing-level designation of each subtask, and the processing-order relationship of each subtask with its associated subtasks according to the internal business logic relationship.
3. The method of hierarchical asynchronous processing of transactions according to claim 1, wherein, when executing the subtasks in the post-queue of the transaction, the method further comprises:
cyclically checking whether any subtask request in the post-queue of the transaction has failed, and executing the subtasks whose requests failed again, until all subtasks in the post-queue of the transaction have been executed or a set termination condition is met;
and/or,
cyclically checking whether the execution results of the subtasks in the post-queue of the transaction are correct, and executing the subtasks whose execution results are wrong again, until all subtasks in the post-queue of the transaction have been executed or the set termination condition is met.
4. The method of hierarchical asynchronous processing of transactions according to claim 3, wherein, after all subtasks in the post-queue of the transaction have been executed or the set termination condition is met, the method further comprises:
checking that all subtasks in the post-queue of the transaction have been executed or that the set termination condition is met, and feeding back the subtasks whose execution results are wrong, or whose execution failed, for manual inspection;
and, when the condition for re-executing a subtask is satisfied, manually putting the subtask whose execution result is wrong, or whose execution failed, into the post-queue and executing it again.
5. The method of hierarchical asynchronous processing of transactions according to claim 1, wherein the dividing of the transaction into a plurality of subtasks according to the internal business logic relationship of the transaction, the marking, in a pre-queue, of the subtasks that carry a priority-processing designation and that, according to the internal business logic relationship, need to be processed first, and the marking of the other subtasks of the transaction in a post-queue specifically comprises:
when the number of transactions is lower than a preset threshold value, marking all subtasks of the transaction in the pre-queue.
6. An apparatus for hierarchical asynchronous processing of transactions, comprising:
the subtask marking module is used for dividing a transaction into a plurality of subtasks according to the internal business logic relationship of the transaction, marking, in a pre-queue, the subtasks that carry a priority-processing designation and that, according to the internal business logic relationship, need to be processed first, and marking the other subtasks of the transaction in a post-queue;
the subtask priority processing module is used for traversing the subtasks of the transaction according to a task list and executing the subtasks in the pre-queue of the transaction, the subtasks in the pre-queue being executed once, and for passing, according to the internal business logic relationship of the transaction, the parameters generated by executing the subtasks in the pre-queue to the other associated subtasks;
the subtask delay processing module is used for executing the subtasks in the post-queue of the transaction, in combination with the parameters generated by executing the subtasks in the pre-queue, after the subtasks in the pre-queue of the transaction have been processed;
the transaction result feedback module is used for feeding back and displaying the execution results of all subtasks of the transaction;
the subtask priority processing module comprises:
a first checking sub-module, configured to check whether any subtask request in the pre-queue of the transaction has failed and/or to check whether the execution results of the subtasks in the pre-queue of the transaction are correct;
and a subtask degradation sub-module, configured to, when the first checking sub-module detects that a subtask request in the pre-queue of the transaction has failed, re-mark the subtask whose request failed into the post-queue of the transaction; and/or, when the first checking sub-module detects that the execution result of a subtask in the pre-queue of the transaction is wrong, re-mark the subtask whose execution result is wrong into the post-queue of the transaction.
7. The apparatus for hierarchical asynchronous processing of transactions according to claim 6, wherein the subtask marking module is specifically configured to:
distinguish the internal business logic relationships of the subtasks of the transaction through a configuration file of the transaction, and set which subtasks go into the pre-queue and which go into the post-queue through the subtask attributes in the configuration file; the subtask attributes in the configuration file include: the processing-level designation of each subtask, and the processing-order relationship of each subtask with its associated subtasks according to the internal business logic relationship.
8. The apparatus for hierarchical asynchronous transaction processing according to claim 6, wherein the subtask delay processing module comprises:
a second checking sub-module, configured to cyclically check whether a subtask request in the post-queue of the transaction fails and/or whether the execution result of a subtask in the post-queue of the transaction is correct;
a retry sub-module, configured to execute a subtask again when the second checking sub-module finds, during the cyclic check, that the subtask request in the post-queue of the transaction has failed, until all the subtasks in the post-queue of the transaction have been executed or a set termination condition is met;
and a compensation sub-module, configured to execute again a subtask whose execution result is wrong when the second checking sub-module finds, during the cyclic check, that the execution result of the subtask in the post-queue of the transaction is wrong, until all the subtasks in the post-queue of the transaction have been executed or the set termination condition is met.
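A minimal sketch of the retry and compensation loop of claim 8, assuming a per-subtask attempt limit as the termination condition (execute, check_result, max_attempts and retry_delay are assumptions made for the example):

import time
from collections import deque

def process_post_queue(post_queue: deque, context: dict, execute, check_result,
                       max_attempts: int = 3, retry_delay: float = 1.0):
    # Cyclically execute the post-queue subtasks: retry subtasks whose request failed
    # and re-execute (compensate) subtasks whose result is wrong, until every subtask
    # succeeds or the attempt limit (termination condition) is reached. Returns the
    # subtasks that still need manual handling.
    attempts = {}                                  # per-subtask attempt counter
    needs_manual_check = []
    while post_queue:
        task = post_queue.popleft()
        attempts[id(task)] = attempts.get(id(task), 0) + 1
        ok = False
        try:
            output = execute(task, context)        # may raise when the request fails
            ok = check_result(task, output)
        except Exception:
            ok = False
        if ok:
            context.update(output or {})           # pass generated parameters onward
        elif attempts[id(task)] < max_attempts:
            time.sleep(retry_delay)
            post_queue.append(task)                # retry / compensation pass
        else:
            needs_manual_check.append(task)        # feed back for manual checking
    return needs_manual_check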
9. The apparatus for hierarchical asynchronous transaction processing according to claim 8, wherein the subtask delay processing module further comprises:
a third checking sub-module, configured to check, after all the subtasks in the post-queue of the transaction have been executed or the set termination condition is met, whether any subtask has a wrong execution result or has failed to execute, and to feed back the subtask whose execution result is wrong or whose execution failed for manual handling;
and a manual-check retry sub-module, configured to manually place the subtask whose execution result is wrong or whose execution failed back into the post-queue when the condition for re-executing the subtask is met, and to execute again the subtask whose execution result is wrong or whose execution failed.
10. The apparatus for hierarchical asynchronous transaction processing according to claim 6, wherein the subtask marking module is specifically configured to:
mark all the subtasks of the transaction in the pre-queue when the total number of subtasks of the transaction is lower than a preset threshold.
CN202010249946.7A 2020-04-01 2020-04-01 Method and device for processing transactions in hierarchical and asynchronous mode Active CN111580939B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010249946.7A CN111580939B (en) 2020-04-01 2020-04-01 Method and device for processing transactions in hierarchical and asynchronous mode

Publications (2)

Publication Number Publication Date
CN111580939A CN111580939A (en) 2020-08-25
CN111580939B true CN111580939B (en) 2023-09-01

Family

ID=72126100

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010249946.7A Active CN111580939B (en) 2020-04-01 2020-04-01 Method and device for processing transactions in hierarchical and asynchronous mode

Country Status (1)

Country Link
CN (1) CN111580939B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112988428A (en) * 2021-04-26 2021-06-18 南京蜂泰互联网科技有限公司 Distributed message asynchronous notification middleware implementation method and system
CN113592228A (en) * 2021-06-29 2021-11-02 中国红十字基金会 Red balloon race management system
CN115601195B (en) * 2022-10-17 2023-09-08 桂林电子科技大学 Transaction bidirectional recommendation system and method based on real-time label of power user

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6058389A (en) * 1997-10-31 2000-05-02 Oracle Corporation Apparatus and method for message queuing in a database system
CN101216783A (en) * 2007-12-29 2008-07-09 中国建设银行股份有限公司 Process for optimizing ordering processing for multiple affairs
CN101882161A (en) * 2010-06-23 2010-11-10 中国工商银行股份有限公司 Application level asynchronous task scheduling system and method
CN102508716A (en) * 2011-09-29 2012-06-20 用友软件股份有限公司 Task control device and task control method
CN102981904A (en) * 2011-09-02 2013-03-20 阿里巴巴集团控股有限公司 Task scheduling method and system
CN104158699A (en) * 2014-08-08 2014-11-19 广州新科佳都科技有限公司 Data acquisition method based on priority and segmentation
CN105068864A (en) * 2015-07-24 2015-11-18 北京京东尚科信息技术有限公司 Method and system for processing asynchronous message queue
WO2018015965A1 (en) * 2016-07-19 2018-01-25 Minacs Private Limited System and method for efficiently processing transactions by automating resource allocation
CN109558237A (en) * 2017-09-27 2019-04-02 北京国双科技有限公司 A kind of task status management method and device
CN109660612A (en) * 2018-12-11 2019-04-19 北京潘达互娱科技有限公司 A kind of request processing method and server
CN109885382A (en) * 2019-01-16 2019-06-14 深圳壹账通智能科技有限公司 The system of cross-system distributed transaction processing method and distributing real time system
CN109933611A (en) * 2019-02-22 2019-06-25 深圳达普信科技有限公司 A kind of adaptive collecting method and system
CN110046041A (en) * 2019-04-15 2019-07-23 北京中安智达科技有限公司 A kind of collecting method based on celery Scheduling Framework
CN110221927A (en) * 2019-06-03 2019-09-10 中国工商银行股份有限公司 Asynchronous message processing method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10387081B2 (en) * 2017-03-24 2019-08-20 Western Digital Technologies, Inc. System and method for processing and arbitrating submission and completion queues

Also Published As

Publication number Publication date
CN111580939A (en) 2020-08-25

Similar Documents

Publication Publication Date Title
CN111580939B (en) Method and device for processing transactions in hierarchical and asynchronous mode
US20190340166A1 (en) Conflict resolution for multi-master distributed databases
US20110179398A1 (en) Systems and methods for per-action compiling in contact handling systems
WO2020181810A1 (en) Data processing method and apparatus applied to multi-level caching in cluster
US20110179304A1 (en) Systems and methods for multi-tenancy in contact handling systems
CN107038645B (en) Service processing method, device and system and server
CN109634989B (en) HIVE task execution engine selection method and system
CN113157710B (en) Block chain data parallel writing method and device, computer equipment and storage medium
CN110599341A (en) Transaction calling method and system
CN111400011A (en) Real-time task scheduling method, system, equipment and readable storage medium
WO2020253045A1 (en) Configured supplementary processing method and device for data of which forwarding has abnormality, and readable storage medium
CN114138838A (en) Data processing method and device, equipment and medium
CN112035230B (en) Task scheduling file generation method, device and storage medium
CN106776153B (en) Job control method and server
US10761940B2 (en) Method, device and program product for reducing data recovery time of storage system
US20230230097A1 (en) Consensus key locking with fast local storage for idempotent transactions
CN115687491A (en) Data analysis task scheduling system based on relational database
CN115629920A (en) Data request exception handling method and device and computer readable storage medium
CN111741080B (en) Network file distribution method and device
CN110716972A (en) Method and device for processing error of high-frequency calling external interface
WO2019134238A1 (en) Method for executing auxiliary function, device, storage medium, and terminal
CN110716798A (en) PHP (hypertext preprocessor) timing task management method and system
CN116643733B (en) Service processing system and method
CN113157411B (en) Celery-based reliable configurable task system and device
CN116471213B (en) Link tracking method, link tracking system and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant