CN110457123B - Control method and device for block processing task - Google Patents

Control method and device for block processing task

Info

Publication number
CN110457123B
Authority
CN
China
Prior art keywords
block
processing
processed
stage
subtask
Prior art date
Legal status
Active
Application number
CN201910711607.3A
Other languages
Chinese (zh)
Other versions
CN110457123A (en)
Inventor
刘长辉
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201910711607.3A
Publication of CN110457123A
Application granted
Publication of CN110457123B

Classifications

    • G06F16/219 Managing data history or versioning
    • G06F16/2365 Ensuring data consistency and integrity
    • G06F16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • H04L47/215 Flow control; Congestion control using token-bucket
    • H04L67/10 Protocols in which an application is distributed across nodes in the network

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Computing Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a method and a device for controlling block processing tasks, belonging to the technical field of block chains. In the method and device provided by the invention, a block processing task is divided into consecutive multi-stage processing subtasks, and at least one execution module of the corresponding stage is configured for each stage of processing subtask. After executing the current-stage processing subtask on the current block to be processed, each non-last-stage execution module adds the block processing task of the current block to be processed to the intermediate task queue configured between itself and the next-stage execution module; after executing the current-stage processing subtask on the current block to be processed, each non-first-stage execution module takes the block corresponding to the task extracted from that intermediate task queue as the next block to be processed; and the last-stage execution module synchronizes the processing results into a database. The block processing tasks of a plurality of blocks to be processed are thereby processed in parallel, which increases the processing speed of block processing tasks across blocks.

Description

Control method and device for block processing task
This application is based on the invention patent application entitled "Control method and device for block processing task", filed on September 13, 2018 with application number 201811069813.0. The invention discloses a method and a device for controlling a block processing task.
Technical Field
The present invention relates to the field of block chain technologies, and in particular, to a method and an apparatus for controlling a block processing task.
Background
Current block chain technology generally suffers from poor transaction performance, and such low performance cannot meet the requirements of applications in real scenarios. The performance problem is mainly reflected in two aspects: (1) the consensus process is complicated and lengthy; (2) serial processing of chained data is inefficient.
In an alliance chain, such as Hyperledger Fabric, only authorized users can join the block chain, and the consensus process is simpler and more efficient than that of public chains such as Bitcoin and Ethereum, so the problem of inefficient serial processing of on-chain data is particularly prominent in alliance chains such as Fabric. In block chain technology, because of the chain dependency between blocks, the prior art uses a single-threaded serial execution mode in the process of submitting blocks, namely: the blocks on a chain must be submitted in a strict order, a later block must wait for the submission of the previous block to complete before it can be submitted, and the later block must link to the previous block to form an ordered chain, the whole process being executed serially. However, this serial processing mode obviously cannot fully exploit the multi-core performance of a modern Central Processing Unit (CPU); the block submission process is time-consuming, the transaction throughput is low, and the requirements of applications in real scenarios cannot be met.
Therefore, how to speed up the block submission process and improve the processing efficiency of block processing tasks is a problem that needs to be considered.
Disclosure of Invention
The embodiment of the invention provides a control method and a control device for block processing tasks, which are used for improving the processing efficiency of the block processing tasks.
In a first aspect, an embodiment of the present invention provides a method for controlling a block processing task, where the block processing task is divided into consecutive multi-level processing subtasks, and at least one execution module corresponding to each level is configured for each level of the processing subtasks; and the method comprises:
at least one non-last-stage execution module executes the current-stage processing subtask on the current block to be processed and then continues to execute the current-stage processing subtask on the next block to be processed, wherein:
and the last-stage execution module executes the last-stage processing subtask on the current block to be processed and synchronizes the processing result into the database.
By adopting the method, the parallel processing of the blocks is effectively realized, the processing efficiency of the blocks is improved, the transaction throughput rate of a block chain system is also improved, and the performance requirements of more applications on the block chain are met.
In a second aspect, an embodiment of the present invention provides a device for controlling a block processing task, including:
the splitting module is used for dividing the block processing task into continuous multi-stage processing subtasks, and at least one execution module corresponding to each stage of processing subtask is configured;
at least one non-last-stage execution module, configured to execute the current-stage processing sub-task on the current block to be processed and then continue to execute the current-stage processing sub-task on the next block to be processed;
and the last-stage execution module is used for executing the last-stage processing subtask on the current block to be processed and synchronizing the processing result into the database.
In one aspect, an embodiment of the invention provides a control method for block processing tasks. The block processing task is divided into consecutive multi-stage processing subtasks, at least one execution module of the corresponding stage is configured for each stage of processing subtask, an intermediate task queue is configured between adjacent execution modules, the first-stage execution module is configured with a block task queue, and tasks in the block task queue are written, on a first-in first-out basis, in the order in which the blocks to be processed are received; and the method comprises:
after the first-stage execution module executes the current-stage processing subtask on the current block to be processed, taking a block corresponding to the task extracted from the block task queue according to a first-in first-out principle as a next block to be processed;
after each non-last-stage execution module executes the current-stage processing subtask on the current block to be processed, the block processing task of the current block to be processed is added, according to a first-in first-out principle, into the intermediate task queue configured between that execution module and the next-stage execution module; and
after each non-first-stage execution module executes the current-stage processing subtask on the current block to be processed, the block corresponding to a task extracted according to a first-in first-out principle from the intermediate task queue configured between that execution module and the previous-stage execution module is taken as the next block to be processed; and the last-stage execution module executes the last-stage processing subtask on the current block to be processed and synchronizes the processing result to the database.
In one aspect, an embodiment of the present invention provides a device for controlling a block processing task, including:
the splitting module is used for dividing the block processing task into continuous multi-stage processing subtasks;
the configuration module is used for configuring at least one execution module of the corresponding stage for each stage of processing subtask, configuring an intermediate task queue between each pair of adjacent execution modules, and configuring a block task queue for the first-stage execution module, wherein tasks in the block task queue are written, on a first-in first-out basis, in the order in which the blocks to be processed are received;
the first-stage execution module is used for executing the current-stage processing subtask on the current block to be processed, and then taking a block corresponding to the task extracted from the block task queue according to a first-in first-out principle as a next block to be processed;
each non-last-stage execution module is used for, after executing the current-stage processing subtask on the current block to be processed, adding the block processing task of the current block to be processed, according to the first-in first-out principle, into the intermediate task queue configured between itself and the next-stage execution module; and
each non-first-stage execution module is used for, after executing the current-stage processing subtask on the current block to be processed, taking the block corresponding to a task extracted according to the first-in first-out principle from the intermediate task queue configured between itself and the previous-stage execution module as the next block to be processed; and the last-stage execution module is used for executing the last-stage processing subtask on the current block to be processed and synchronizing the processing result into the database.
In a third aspect, an embodiment of the present invention provides a computer-readable medium, in which computer-executable instructions are stored, where the computer-executable instructions are used to execute the control method for block processing tasks provided in this application.
In a fourth aspect, an embodiment of the present invention provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein:
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the control method for block processing task provided by the application.
The invention has the beneficial effects that:
the method and the device for controlling the block processing task provided by the embodiment of the invention divide the block processing task into the continuous multi-stage processing subtasks, configure at least one execution module corresponding to each stage of the processing subtasks, and continuously execute the current-stage processing subtask on the next block to be processed after each-stage execution module executes the current-stage processing subtask on the current block to be processed. In this way, on one hand, each level of processing subtasks can be executed by each level of execution module in sequence; on the other hand, each level of execution module continues to execute the current-level processing subtask of the next block to be processed after the current-level processing subtask of the current block to be processed is executed, so that the block processing tasks of a plurality of blocks to be processed are processed in parallel, and the processing speed of the inter-block processing tasks is increased.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a schematic structural diagram of a computing device implementing a control method for block processing tasks according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating a method for controlling a block processing task according to an embodiment of the present invention;
fig. 3a is a schematic diagram illustrating an effect that each level of processing subtask corresponds to one execution module according to an embodiment of the present invention;
FIG. 3b is a schematic diagram illustrating the effect of each stage of execution modules processing each block to be processed according to the embodiment of the present invention;
FIG. 4 is a diagram illustrating the effect of writing or reading block processing tasks in a block task queue according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating the effect of task writing and reading in the intermediate task queue according to an embodiment of the present invention;
FIG. 6a is a timing diagram of a possible execution sequence for the version number of data D1 processed across blocks;
FIG. 6b is a timing diagram of the execution sequence for the version number of data D1 processed across blocks according to an embodiment of the present invention;
fig. 7 is a second flowchart illustrating a control method for block processing tasks according to an embodiment of the present invention;
fig. 8a is a schematic diagram illustrating an effect of performing account version verification after configuring a pre-commit cache according to an embodiment of the present invention;
fig. 8b is a second schematic diagram illustrating an effect of performing account version verification after configuring a pre-commit cache according to the embodiment of the present invention;
fig. 8c is a third schematic view illustrating an effect of performing account version verification after configuring a pre-commit cache according to the embodiment of the present invention;
fig. 9a is a flowchart illustrating a method for controlling a task submitting block according to an embodiment of the present invention;
fig. 9b is a schematic diagram illustrating an effect of executing the five subtasks by each stage of the execution module according to the embodiment of the present invention;
fig. 10 is a schematic diagram illustrating an effect of setting a pre-commit cache in a commit process of a block according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of a control device for block processing tasks according to an embodiment of the present invention.
Detailed Description
The control method and the control device for the block processing task are used for improving the processing efficiency of the block processing task.
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings of the specification, it being understood that the preferred embodiments described herein are merely for illustrating and explaining the present invention, and are not intended to limit the present invention, and that the embodiments and features of the embodiments in the present invention may be combined with each other without conflict.
To facilitate understanding of the invention, the present invention relates to technical terms in which:
1. Block submission: block submission in the Hyperledger (Hyperledger Fabric) includes the following stages: block validity verification, Multi-Version Concurrency Control (MVCC) verification of account data, writing of the block to the ledger file, updating of the state data of the transactions in the ledger file, and so on, wherein: block validity verification is used to check the signature information of each transaction in the block, the access control of the transaction, whether the transaction conforms to the endorsement policy, and so on; MVCC verification of account data is used to verify whether the version number of the read set of each transaction in the block is consistent with the version number in the local ledger; the ledger file contains all the read-write sets involved in the transactions; and updating of the state data of the transactions in the ledger file refers to writing the block index data and the account history change records into the database and updating the account data (the world state) to the database.
2. Token bucket: the token bucket is a common flow control technique. In the token bucket method referred to in the invention, when a data packet is to be sent, a number of tokens must be applied for according to attributes such as the size of the packet, and that number of tokens is removed from the token bucket; if there are not enough tokens in the token bucket, that is, not enough tokens to send the packet, the packet must wait until there are enough tokens in the bucket before it can be sent, thereby controlling the sending rate of data packets. Applying the token bucket method in the invention makes it possible to control the speed at which the execution modules process the subtasks, and thereby to satisfy the consistency of the data in consecutive blocks of the block chain.
3. Multi-stage processing subtasks: dividing the block processing task into multi-stage processing subtasks does not mean that the subtasks form a hierarchy; the processing subtasks of the different stages are coordinate rather than dependent, and the stages are only used to constrain the execution order, namely, the first-stage processing subtask is executed first, and then the second-stage, third-stage, fourth-stage processing subtasks and so on are executed in sequence.
4. Write set: the write set refers to the set of write data formed by all the modified data involved in the transactions of a block; because this modified data needs to be written into the database, the set is referred to as the write set in the invention. The modified data in the write set may be the data modified, and the contents of those modifications, by the transactions in the block through the simulated execution of the smart contracts they invoke. For example, if A transfers 50 yuan to B, the modified data may be the balance of the A account, the balance of the B account, and so on.
5. Read set: in the process of verifying the account versions of a block, data needs to be read from the database or from the pre-submission cache in order to verify the account versions of the transactions involved in the block; the data that is read forms a read data set, referred to as the read set for short. The data in the read set may be the state data, and their version numbers, read by the transactions in the block through the simulated execution of the smart contracts they invoke.
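As an illustration only, the following Go sketch shows one way a transaction's read set and write set might be represented; the type and field names are assumptions for this sketch, not the ledger's actual data structures:

```go
package main

import "fmt"

// Version identifies the block and transaction in which a key was last written.
type Version struct {
	BlockNum uint64
	TxNum    uint64
}

// ReadItem records a key read during simulated execution together with the
// version observed at that time; it is compared against the ledger (or the
// pre-submission cache) during MVCC verification.
type ReadItem struct {
	Key     string
	Version Version
}

// WriteItem records a key modified by a transaction and the new value that
// must eventually be written to the database.
type WriteItem struct {
	Key   string
	Value []byte
}

// RWSet is the read-write set of a single transaction.
type RWSet struct {
	Reads  []ReadItem
	Writes []WriteItem
}

func main() {
	// Example from the text: A transfers 50 yuan to B, so the balances of A and B
	// appear in the write set, and the versions read during simulation in the read set.
	tx := RWSet{
		Reads: []ReadItem{
			{Key: "balance_A", Version: Version{BlockNum: 7, TxNum: 0}},
			{Key: "balance_B", Version: Version{BlockNum: 5, TxNum: 2}},
		},
		Writes: []WriteItem{
			{Key: "balance_A", Value: []byte("50")},
			{Key: "balance_B", Value: []byte("150")},
		},
	}
	fmt.Printf("%+v\n", tx)
}
```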
Because of the chain dependence among blocks, the prior art often uses a single-threaded serial execution mode in the block submission process, so the multi-core performance of a modern CPU cannot be fully exploited, the block submission process is time-consuming, the transaction throughput is low, and the application requirements in real scenarios cannot be met.
In order to solve the problems in the prior art that the block submission process is time-consuming and block processing efficiency is low, the embodiment of the invention provides a solution in the form of a computing device 10. The computing device 10 is used to implement the control method for block processing tasks provided by the invention and may take the form of a general-purpose computing device, such as a terminal or a server. A computing device 10 according to the present invention is described below with reference to fig. 1. The computing device 10 shown in FIG. 1 is only one example and should not impose any limitation on the scope of use or functionality of embodiments of the present invention.
As shown in FIG. 1, computing device 10 is embodied in the form of a general purpose computing device. Components of computing device 10 may include, but are not limited to: the at least one processing unit 11, the at least one memory unit 12, and a bus 13 connecting the various system components (including the memory unit 12 and the processing unit 11).
Bus 13 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures.
The storage unit 12 may include readable media in the form of volatile memory, such as Random Access Memory (RAM)121 and/or cache memory 122, and may further include Read Only Memory (ROM) 123.
The storage unit 12 may also include a program/utility 125 having a set (at least one) of program modules 124, such program modules 124 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Computing device 10 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, etc.), with one or more devices that enable a user to interact with computing device 10, and/or with any devices (e.g., router, modem, etc.) that enable computing device 10 to communicate with one or more other computing devices. Such communication may be via an input/output (I/O) interface 15. Moreover, computing device 10 may also communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via network adapter 16. As shown, network adapter 16 communicates with other modules for computing device 10 over bus 13. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with computing device 10, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
An application scenario of the control method for block processing tasks provided in the embodiment of the invention is as follows. The computing device 10 implementing the control method may be a node in a block chain. When the computing device 10 receives a block sent by another node in broadcast form, it divides the block processing task of the block into consecutive multi-stage processing subtasks and creates at least one execution module of the corresponding stage for each stage of processing subtask; each stage's execution module then executes the current-stage processing subtask on the current block to be processed and continues with the current-stage processing subtask on the next block to be processed, wherein the last-stage execution module executes the last-stage processing subtask on the current block to be processed and synchronizes the processing result into the database. With the method provided by the invention, execution modules are configured for the multi-stage processing subtasks: on the one hand, the execution modules of each stage can execute the processing subtasks of each stage of every block to be processed in turn; on the other hand, each execution module continues with the current-stage processing subtask of the next block to be processed once the current-stage processing subtask of the current block to be processed is finished, so the block processing tasks of a plurality of blocks to be processed are processed in parallel and the processing speed of block processing tasks across blocks is increased. Depending on the execution time of the processing subtasks, several execution modules of the corresponding stage can be created for a subtask that takes longer to execute; these execution modules can process in parallel, which further increases the processing speed.
A control method for a block processing task provided according to an exemplary embodiment of the present invention is described below with reference to fig. 2 to 11, in conjunction with fig. 1 and the application scenario described above. It should be noted that the above application scenario is merely illustrated for the convenience of understanding the spirit and principles of the present invention, and the embodiments of the present invention are not limited in this respect. Rather, embodiments of the present invention may be applied to any applicable scenario.
As shown in fig. 2, taking an example that each level of processing subtask is configured with an execution module at a corresponding level, the flow of the control method for block processing tasks according to the embodiment of the present invention may include the following steps:
and S11, dividing the block processing task into continuous multi-stage processing subtasks.
Specifically, the blockchain has a plurality of nodes therein, each of which has a computing device 10 disposed therein. After the computing device 10 of each node generates a block, the block is sent to each of the other nodes in a broadcast manner, so that after the computing device 10 of each node receives the block, on one hand, transaction data in the block is stored, on the other hand, a block processing task for processing the block needs to be established to complete verification and submission of the block, and only after the block is verified, the block can be linked to the block chain on its own node. The control method of the block processing task provided by the invention only processes the block processing task established by the received block so as to complete the processing process of submitting the block.
In this step, the execution module in the present invention may be, but is not limited to, a thread or a coroutine. Specifically, a plurality of execution modules for processing the block processing task may be established in advance, and then different processing functions may be assigned to the execution modules according to the actual processing procedures of the block processing task. Thus, when the computing device 10 receives the block processing task, the block processing task is first divided into multiple levels of processing subtasks, and then an execution module corresponding to each level is configured for each level of processing subtask. Therefore, for each stage of processing subtask, the corresponding execution module executes the stage of processing subtask, thereby ensuring that each stage of processing subtask can be processed.
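As a sketch of this configuration idea only, the snippet below uses a goroutine as the execution module and assigns it a processing function; the names and the single-stage setup are illustrative assumptions, not the patent's mandated implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// Block stands in for a received block awaiting processing.
type Block struct{ Height uint64 }

// StageFunc is the processing subtask assigned to an execution module.
type StageFunc func(b *Block)

// runModule starts one execution module (here a goroutine) that applies its
// assigned stage function to every block handed to it, in order.
func runModule(fn StageFunc, in <-chan *Block, wg *sync.WaitGroup) {
	wg.Add(1)
	go func() {
		defer wg.Done()
		for b := range in {
			fn(b)
		}
	}()
}

func main() {
	var wg sync.WaitGroup
	in := make(chan *Block, 8)

	// Assign the first-stage processing function (e.g. block validity verification)
	// to one pre-created execution module.
	runModule(func(b *Block) { fmt.Println("validated block", b.Height) }, in, &wg)

	for h := uint64(1); h <= 3; h++ {
		in <- &Block{Height: h}
	}
	close(in)
	wg.Wait()
}
```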
In the present invention, when the block processing task is divided, the processing subtasks may be added, deleted, or replaced according to actual situations. And the present invention does not limit the number of divided processing subtasks.
S12, at least one non-last-stage execution module executes the current-stage processing subtask on the current block to be processed and then continues to execute the current-stage processing subtask on the next block to be processed.
In this step, in order to improve the processing efficiency of block processing tasks, each execution module provided by the invention continues to execute the current-stage processing subtask on the next block to be processed after executing the current-stage processing subtask on the current block to be processed, which ensures that the blocks to be processed are handled in parallel and thereby speeds up, to a certain extent, the processing of block processing tasks across blocks. For example, the block processing task of each block to be processed is divided into n stages of processing subtasks, and each stage of processing subtask corresponds to at least one execution module. Fig. 3a shows the effect of each stage of processing subtask corresponding to one execution module: each stage's execution module is used to process the current-stage processing subtask, so once the execution modules of every stage have finished executing their corresponding processing subtasks, the block processing task of that block to be processed is complete. Fig. 3b shows the effect of the execution modules of each stage processing each block to be processed; it can be seen from the figure that, by configuring the execution modules, parallel processing of multiple blocks is achieved and the overall block processing efficiency is improved.
Preferably, the last-stage execution module executes the last-stage processing sub-task on the current block to be processed and synchronizes the processing result to the database.
Specifically, in the present invention, after the last-stage execution module executes the last-stage processing sub-task on the current block to be processed, the processing results of the processing sub-tasks of each stage of the current block to be processed by each execution module are synchronized into the database.
It should be noted that, when processing the block processing task of the same block to be processed, each execution module in the present invention processes the block processing task according to the actual processing procedure sequence of the block processing task, that is, the divided multi-level processing subtasks are ordered, and after the first-level execution module finishes processing the first-level processing subtask, the second-level execution module can process the second-level processing subtask, and so on. Thus, the execution modules at all levels need to communicate to ensure that all the processing subtasks at all levels are executed.
Therefore, the invention provides that the execution modules communicate with each other in a task queue mode.
Optionally, after the first-stage execution module executes the current-stage processing subtask on the current block to be processed, the block corresponding to the task extracted from the block task queue according to the first-in first-out principle is taken as the next block to be processed; the tasks in the block task queue are written, on a first-in first-out basis, in the order in which the blocks to be processed are received.
Specifically, the present invention may also maintain a block task queue: after receiving a block broadcast by another node, the computing device 10 stores the block, establishes a block processing task for the block, and writes the block processing task into the block task queue. In practical applications, the blocks in a block chain generally store transaction records, and the transaction records are ordered, so the block processing tasks are also ordered. Based on this, after receiving blocks to be processed, the present invention writes the block processing tasks established for them into the block task queue in first-in first-out order, as shown in fig. 4. In this way, the first-stage execution module extracts the earliest-written block processing task from the block task queue according to the first-in first-out principle and executes the first-stage processing subtask of that block processing task. After that processing is finished, it extracts the block processing task of the next block to be processed from the block task queue and executes the first-stage processing subtask of that task.
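A minimal sketch of this block task queue, using a buffered Go channel as the first-in first-out queue; this is an illustration under that assumption, not a mandated data structure:

```go
package main

import "fmt"

// BlockTask represents the block processing task created for one received block.
type BlockTask struct{ Height uint64 }

func main() {
	// A buffered channel behaves as a first-in first-out queue: tasks are
	// written in the order the blocks are received and read in the same order.
	blockTaskQueue := make(chan BlockTask, 100)

	// On receiving broadcast blocks, enqueue their processing tasks in arrival order.
	for _, h := range []uint64{100, 101, 102} {
		blockTaskQueue <- BlockTask{Height: h}
	}
	close(blockTaskQueue)

	// The first-stage execution module dequeues the earliest-written task,
	// runs the first-stage subtask on it, then takes the next task, and so on.
	for task := range blockTaskQueue {
		fmt.Println("first stage processing block", task.Height)
	}
}
```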
Optionally, the execution modules at each level communicate with each other according to the following method:
after each non-last-stage execution module executes the current-stage processing subtask on the current block to be processed, the method further comprises: adding the block processing task of the current block to be processed, according to a first-in first-out principle, into the intermediate task queue configured between that execution module and the next-stage execution module, wherein an intermediate task queue is configured between each pair of adjacent execution modules; and
after each non-first-stage execution module executes the current-stage processing subtask on the current block to be processed, the method further comprises: taking the block corresponding to the task extracted according to the first-in first-out principle from the intermediate task queue configured between that execution module and the previous-stage execution module as the next block to be processed. If a certain stage is served by a plurality of execution modules, each execution module of that stage separately acquires the next task from the intermediate task queue after it has finished processing its current block to be processed.
Specifically, referring to fig. 5, an intermediate task queue is maintained between each stage of execution modules, and each intermediate task queue is written into a task according to a first-in first-out principle.
Specifically, the non-last-stage execution module is taken as a second-stage execution module for illustration, and after the second-stage execution module finishes executing the second-stage processing sub-task of the current block to be processed, the second-stage execution module writes the block processing task of the current block to be processed into an intermediate task queue configured between the second-stage execution module and the third-stage execution module according to a first-in first-out principle. For example, the middle task queue in fig. 5 is arranged from left to right, block processing tasks are written from the left, and block processing tasks are read from the right, so that the second-stage execution module writes the block processing tasks of the current block to be processed into the middle task queue from the left.
In addition, referring to fig. 5, the second-stage execution module is taken as an example of a non-first-stage execution module: after the second-stage execution module finishes executing the second-stage processing subtask on the current block to be processed, it further extracts a task, according to the first-in first-out principle, from the intermediate task queue configured between the second-stage execution module and the first-stage execution module, takes the extracted task as the block processing task of the next block to be processed, and executes the second-stage processing subtask of that block processing task.
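Putting the queues together, the following sketch wires several stages with a block task queue and intermediate task queues, using goroutines as execution modules and buffered channels as queues. All names are illustrative assumptions, and the sketch only shows the queue wiring; it does not by itself enforce the cross-block ordering and consistency mechanisms described below:

```go
package main

import "fmt"

type Block struct{ Height uint64 }

// startStage launches `workers` execution modules for one stage. Each module
// takes a block from `in`, runs the stage's subtask, and appends the block's
// task to the queue `out` feeding the next stage. When every module of the
// stage has finished, `out` is closed so the next stage can terminate.
func startStage(name string, workers int, in <-chan *Block, out chan<- *Block) {
	done := make(chan struct{})
	for i := 0; i < workers; i++ {
		go func() {
			for b := range in {
				fmt.Printf("%s: block %d\n", name, b.Height)
				out <- b
			}
			done <- struct{}{}
		}()
	}
	go func() {
		for i := 0; i < workers; i++ {
			<-done
		}
		close(out)
	}()
}

func main() {
	blockTasks := make(chan *Block, 8) // block task queue in front of the first stage
	q1 := make(chan *Block, 8)         // intermediate queue: stage 1 -> stage 2
	q2 := make(chan *Block, 8)         // intermediate queue: stage 2 -> stage 3 (last)
	finished := make(chan *Block, 8)

	startStage("stage 1: validate block", 1, blockTasks, q1)
	startStage("stage 2: verify account versions", 2, q1, q2) // a slower stage may get several modules
	startStage("stage 3: sync results to database", 1, q2, finished)

	for h := uint64(1); h <= 4; h++ {
		blockTasks <- &Block{Height: h}
	}
	close(blockTasks)
	for range finished { // wait until the last stage has handled every block
	}
}
```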
The block chain technology has the following characteristic: the blocks in a chain must be submitted in a strict order, a later block must wait for the submission of the previous block to complete before it can be submitted, and the later block must be linked to the previous block to form an ordered chain. Although the invention improves block processing efficiency by changing the serial processing mode into a parallel processing mode for the received blocks to be processed, the execution time of a later block is advanced to a certain extent relative to the previous block, and the execution timing between blocks can differ unpredictably on different nodes, which may finally cause data inconsistency between different nodes. As shown in fig. 6a, the version of data D1 is modified from v1 to v2 in block 1, and the version of data D1 needs to be read in block 2; but at this time the operation of modifying the version of data D1 in block 1 has not yet been synchronized into the database, and block 2 has already read the version of data D1 from the database, so the version of D1 read by block 2 is v1, which causes data inconsistency. This violates the principle of data consistency in a block chain system. To solve this problem, the invention provides a solution that, by using the method provided by the invention, makes the execution timing between blocks achieve the effect shown in fig. 6b; specifically, the invention provides a token bucket method to solve the above data inconsistency problem, namely:
before the first-stage execution module executes the current-stage processing subtask on the current block to be processed, the first-stage execution module further includes a flow shown in fig. 7, and includes the following steps:
and S21, the first-stage execution module applies for the current block to be processed for the token required by the block processing task according to the level of the data in the current block to be processed.
In this step, the data levels of the blocks in the block chain are different: some data may affect the execution of all subsequent blocks, and some may not. For example, if the transactions in a block involve metadata modification or configuration modification, which affects the execution of all subsequent blocks, the level is relatively high and the corresponding block is a special block; data in other blocks generally has less influence on subsequent blocks and has a lower level, and such blocks are common blocks.
It should be noted that, in the present invention, the level of data in a block may be preset, and a higher level indicates that the block has a higher influence on subsequent blocks.
In order to ensure the consistency among data, the invention provides that the first execution module applies for each block to be processed for a token required for executing the block processing task of the block to be processed. Specifically, the computing device 10 allocates a token bucket for the block chain on the node to which it belongs, where the token bucket has a plurality of tokens, so that when the first-stage execution module extracts the block processing task of the block to be processed from the block task queue, the first-stage execution module applies for the block to be processed for the tokens required for executing the block processing task of the block to be processed according to the level of data in the block to be processed.
S22, the first-stage execution module determines whether the number of tokens applied for is greater than the number of tokens remaining in the token bucket; if not, go to step S23; if yes, go to step S24.
Wherein the total number of tokens is the total number of configured execution modules.
Specifically, in the initialization stage, a token bucket with N tokens is initialized according to the number N of configured execution modules. The processing speed of each execution module can be controlled by setting the token bucket, so that the processing time among the blocks can be controlled to a certain extent, and the current block to be processed can be processed by utilizing the latest data processed by the previous block, so that the consistency of the data is also ensured.
And S23, the first execution module executes the first-level processing subtask on the current block to be processed.
S24, the first execution module suspends the execution of the current-level processing subtask and continues to execute step S22.
In steps S22 to S24, after applying for tokens for the block to be processed, the first execution module determines whether the number of tokens applied for is greater than the number of tokens remaining in the token bucket. If so, it indicates that the tokens in the token bucket have already been taken by blocks being processed, so the first execution module temporarily does not process the first-stage processing subtask of the current block to be processed, and the second-stage to nth-stage execution modules likewise do not process the second-stage to nth-stage processing subtasks of that block. If the first execution module determines that the number of tokens applied for by the current block to be processed is not greater than the number of tokens remaining in the token bucket, it starts to execute the first-stage processing subtask on the current block to be processed, and the tokens applied for by the current block to be processed are removed from the token bucket. For example, if the first execution module needs to apply for 3 tokens for the current block to be processed and the number of tokens remaining in the token bucket is 4, it easily determines that the 3 tokens applied for are fewer than the 4 tokens remaining, so it executes the first-stage processing subtask on the current block to be processed, 3 tokens are deducted from the token bucket, and 1 token remains in the bucket.
In practical applications, when the first execution module determines that the current block to be processed is a common block, it generally applies for 1 token for the block; when it determines that the current block to be processed is a special block, it applies for the total number N of tokens in the token bucket for the block. That is, when each common block applies for 1 token, at most N common blocks can be executed concurrently; for a special block, N tokens may be applied for so as to avoid data conflicts with other, common blocks.
By adopting the token bucket method, the number of blocks executed concurrently can be flexibly controlled, and the requirement that special blocks execute exclusively can be met. For example, when the transactions in a block involve metadata modification, the data has a large influence on subsequent blocks, so all the tokens in the token bucket can be applied for; subsequent blocks then have no available tokens and their processing is suspended, which realizes exclusive execution of the special block.
In order to ensure the continuity of the parallel processing process of the blocks, the control method of the block processing task provided by the invention further comprises the following steps:
and after the last-stage execution module executes the last-stage processing subtask on the current block to be processed, releasing the token required by the block processing task applied by the current block to be processed and supplementing the token into a token bucket.
Specifically, after the last-stage execution module finishes processing the last-stage processing subtask of the current block to be processed, the token applied for the current block to be processed is released, and the released token is added into the token bucket, so that the first-stage execution module can apply for the token of the subsequent block to be processed, and the continuity of parallel processing is ensured.
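A minimal sketch of the token bucket described above, built as a counting semaphore on a channel. The rule of 1 token for a common block and N tokens for a special block follows the description here, while the types and names are illustrative assumptions:

```go
package main

import "fmt"

// TokenBucket is a simple counting semaphore: the bucket is initialized with
// one token per configured execution module, as described above.
type TokenBucket struct {
	tokens chan struct{}
	size   int
}

func NewTokenBucket(n int) *TokenBucket {
	tb := &TokenBucket{tokens: make(chan struct{}, n), size: n}
	for i := 0; i < n; i++ {
		tb.tokens <- struct{}{}
	}
	return tb
}

// TokensNeeded applies the rule from the text: a common block applies for
// 1 token; a special block (e.g. one modifying metadata or configuration)
// applies for all N tokens so that it executes exclusively.
func (tb *TokenBucket) TokensNeeded(special bool) int {
	if special {
		return tb.size
	}
	return 1
}

// Acquire removes n tokens from the bucket, blocking (i.e. suspending the
// first-stage execution module) while tokens are unavailable. Note: a real
// implementation would grant all n tokens atomically so that two blocks do
// not each hold a partial set; this single-caller sketch ignores that.
func (tb *TokenBucket) Acquire(n int) {
	for i := 0; i < n; i++ {
		<-tb.tokens
	}
}

// Release returns the tokens applied for by a block once the last-stage
// execution module has finished that block's final subtask.
func (tb *TokenBucket) Release(n int) {
	for i := 0; i < n; i++ {
		tb.tokens <- struct{}{}
	}
}

func main() {
	tb := NewTokenBucket(4) // e.g. 4 execution modules configured

	n := tb.TokensNeeded(false) // common block
	tb.Acquire(n)
	fmt.Println("common block running, tokens held:", n)
	tb.Release(n)

	n = tb.TokensNeeded(true) // special block takes all tokens: exclusive execution
	tb.Acquire(n)
	fmt.Println("special block running exclusively, tokens held:", n)
	tb.Release(n)
}
```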
It should be noted that, with the token bucket method, a common block applies for only a small number of tokens. If there are N tokens in the token bucket and the first execution module determines that m (m less than N) consecutive blocks to be processed are all common blocks, applying for 1 token for each of them, then execution is never suspended while the first execution module processes those m blocks. In that case there may be data conflicts between the common blocks because they touch the same data items, and if adjacent conflicting blocks are forced to execute serially, the whole system degrades to the original serial execution mode whenever hot account data is present. To avoid this, the invention provides a further solution, namely a pre-submission cache: for the current block to be processed, after the data involved in its transactions is modified, the execution module of any stage stores the modified data into the pre-submission cache, so that when the next block to be processed is processed, the modified data is first extracted from the pre-submission cache and used to process that block; otherwise, data is extracted from the database to process the next block to be processed. It should be noted that the pre-submission cache proposed by the invention can also be implemented on its own; that is, the invention can guarantee data consistency using only the pre-submission cache, without implementing the token bucket method shown in fig. 7.
To better understand how the pre-submission caching method provided by the invention solves the problem of data inconsistency, the case in which the processing subtasks include an account version verification subtask is taken as an example.
Preferably, the execution module that executes the account version verification subtask executes the current-stage processing subtask on the current block to be processed and then continues to execute the current-stage processing subtask on the next block to be processed, which specifically includes:
after the account version of the transaction related to the current block to be processed is verified, storing a write set consisting of all data modified in the transaction in the current block to be processed and the version number of the write set into a pre-submission cache; and
when verifying the account versions of the next block to be processed, if it is determined that the version number of an account is stored in the pre-submission cache, verifying the account versions of the transactions involved in the next block to be processed using the version number of the account read from the pre-submission cache; otherwise, reading the version number of the account from the database to verify the account versions of the transactions involved in the next block to be processed.
Specifically, referring to fig. 8a, the invention proposes configuring a pre-commit cache in the computing device 10, so that the execution module of the account version verification subtask, after the account version verification of the transactions involved in the current block to be processed passes, stores the write set formed by all the modified data involved in those transactions, together with its version numbers, into the pre-commit cache. In this way, in combination with fig. 8b, when the execution module of the account version verification subtask performs account version verification on the transactions in the next block to be processed, it only needs to extract the version number of the account from the pre-commit cache and verify the account versions of those transactions using the extracted version number. Because the data stored in the pre-commit cache is the newest, the newest data is also used when performing account version verification on the next block to be processed, which ensures the consistency of the data. In addition, if no data is stored in the pre-commit cache, it indicates that no data modification was made by the transactions in the current block to be processed, and data only needs to be extracted from the database to perform account version verification on the transactions in the next block to be processed. In this way, the pre-commit caching method ensures the consistency of data between blocks.
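A sketch of this lookup order during account version verification: the version in the pre-commit cache, written when the previous block's verification passed, takes precedence over the version in the database. The stores and names below are hypothetical stand-ins used only for illustration:

```go
package main

import "fmt"

// Version of a key as recorded in the ledger (block / transaction position).
type Version struct{ BlockNum, TxNum uint64 }

// precommitCache and database are hypothetical stores for illustration only.
var (
	precommitCache = map[string]Version{}
	database       = map[string]Version{"D1": {BlockNum: 1, TxNum: 0}} // v1
)

// currentVersion returns the version used for MVCC verification: the value in
// the pre-commit cache (written by the previous block's verification) takes
// precedence over the value in the database.
func currentVersion(key string) (Version, bool) {
	if v, ok := precommitCache[key]; ok {
		return v, true
	}
	v, ok := database[key]
	return v, ok
}

// verifyRead checks that the version a transaction read during simulation
// still matches the latest committed (or pre-committed) version.
func verifyRead(key string, readVersion Version) bool {
	v, ok := currentVersion(key)
	return ok && v == readVersion
}

func main() {
	// Block n modifies D1; after its account version check passes, the new
	// version from its write set is stored in the pre-commit cache (not yet in the DB).
	precommitCache["D1"] = Version{BlockNum: 2, TxNum: 0} // v2

	// Block n+1 read D1 at v2 during simulation; verification succeeds because
	// the pre-commit cache already holds v2, even though the database still has v1.
	fmt.Println("block n+1 valid:", verifyRead("D1", Version{BlockNum: 2, TxNum: 0}))
}
```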
Further, the processing subtask also includes an account update subtask; and the execution module for executing the account updating subtask executes the current-level processing subtask on the current block to be processed, and specifically comprises:
and after updating the write set and the version number thereof related to the current block to be processed stored in the pre-submission cache into the database, clearing the write set and the version number thereof related to the current block to be processed stored in the pre-submission cache.
Specifically, in practical applications, the write set and version numbers involved in the transactions of a block need to be synchronized into the database. Because the write set and its version numbers are first stored in the pre-submission cache, the invention also needs to execute an account update subtask: the execution module corresponding to the account update subtask synchronously updates the write set and version numbers involved in the transactions of the current block to be processed, cached in the pre-submission cache, into the database, as shown in fig. 8c. In addition, because the storage space of the pre-submission cache is limited, the execution module corresponding to the account update subtask also needs to clear the write set and version numbers of the current block to be processed from the pre-submission cache, so as to provide storage space for the modified data involved in account version verification when processing the transactions of the next block to be processed.
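Continuing with the same kind of hypothetical stores, a sketch of the account update subtask: synchronize the block's write set and version numbers from the pre-submission cache into the database, then clear those entries from the cache. All names are illustrative assumptions:

```go
package main

import "fmt"

// Hypothetical stores, for illustration only.
type Version struct{ BlockNum, TxNum uint64 }

type entry struct {
	Value   []byte
	Version Version
}

var (
	precommitCache = map[string]entry{}
	database       = map[string]entry{}
)

// applyWriteSet records a block's verified write set in the pre-submission
// cache (done by the account version verification subtask).
func applyWriteSet(writes map[string]entry) {
	for k, e := range writes {
		precommitCache[k] = e
	}
}

// updateAccounts is the account update subtask: it synchronizes the block's
// entries from the pre-submission cache into the database, then removes them
// from the cache to free space for the next block to be processed.
func updateAccounts(keys []string) {
	for _, k := range keys {
		if e, ok := precommitCache[k]; ok {
			database[k] = e
			delete(precommitCache, k)
		}
	}
}

func main() {
	applyWriteSet(map[string]entry{
		"balance_A": {Value: []byte("50"), Version: Version{BlockNum: 2, TxNum: 0}},
		"balance_B": {Value: []byte("150"), Version: Version{BlockNum: 2, TxNum: 0}},
	})
	updateAccounts([]string{"balance_A", "balance_B"})
	fmt.Println("cache entries left:", len(precommitCache), "database entries:", len(database))
}
```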
For a better understanding of the invention, the case in which the block processing task is a block submission task is taken as an example. The multi-stage processing subtasks into which the block submission task is divided include a block verification subtask, an account version verification subtask, a block submission subtask, an account update subtask, and a synchronization-and-termination subtask, and the subtasks are processed in the following order: the block verification subtask is processed first, then the account version verification subtask, the block submission subtask and the account update subtask are executed in sequence, and finally the synchronization-and-termination subtask is executed. Accordingly, the first-stage execution module is used to process the block verification subtask, the second-stage execution module is used to execute the account version verification subtask, the third-stage execution module is used to execute the block submission subtask, the fourth-stage execution module is used to execute the account update subtask, and the last-stage execution module is used to execute the synchronization-and-termination subtask. With reference to fig. 1 to 8c, parallel processing of two blocks (block n and block n+1) is taken as an example; fig. 9a is a flowchart of the method for controlling the block submission task according to an embodiment of the invention. Before introducing fig. 9a, for convenience of description, the intermediate task queue configured between the first-stage and second-stage execution modules is denoted the first intermediate task queue; the intermediate task queue configured between the second-stage and third-stage execution modules is denoted the second intermediate task queue; the intermediate task queue configured between the third-stage and fourth-stage execution modules is denoted the third intermediate task queue; and the intermediate task queue configured between the fourth-stage and fifth-stage execution modules is denoted the fourth intermediate task queue. Fig. 9b shows the effect of the execution modules of each stage executing the above five subtasks. Based on the above description, the method for controlling the block submission task provided by the invention may include the following steps:
S31, the first-level execution module extracts the block processing task of block n from the block task queue according to the first-in first-out principle, and applies, for block n, for the tokens required by the block processing task according to the level of the data in block n.
S32, if the first-level execution module determines that the number of applied tokens is not larger than the number of remaining tokens in the token bucket, the validity of the block n is verified.
In this step, the first-stage execution module verifies the validity of block n by, for example, checking the signature information of each transaction, the access permissions of the transaction, and whether the transaction satisfies the endorsement policy.
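As an illustration only, such per-transaction checks could be sketched as follows; the Transaction and Block classes and the three placeholder checks are invented stand-ins, since the actual verification routines depend on the blockchain platform:

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Transaction:
    signature_ok: bool = True   # stand-in for real signature verification
    permitted: bool = True      # stand-in for authority control
    endorsed: bool = True       # stand-in for the endorsement policy check

@dataclass
class Block:
    number: int
    transactions: List[Transaction] = field(default_factory=list)

def block_validation_subtask(block: Block,
                             checks: List[Callable[[Transaction], bool]]) -> bool:
    """First-stage subtask (sketch): every transaction must pass every check."""
    return all(check(tx) for tx in block.transactions for check in checks)

# placeholder checks standing in for signature, permission and endorsement checks
checks = [lambda tx: tx.signature_ok, lambda tx: tx.permitted, lambda tx: tx.endorsed]
print(block_validation_subtask(Block(1, [Transaction()]), checks))   # -> True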
In addition, if the first-stage execution module determines that the number of tokens applied for is greater than the number of tokens remaining in the token bucket, it suspends the current-stage processing subtask until the number of tokens remaining in the token bucket is sufficient to satisfy the application.
In a specific implementation, when the first-stage execution module determines that the number of tokens applied for exceeds the number of tokens currently available in the token bucket, it suspends (i.e., stops processing) the current-stage processing subtask, and resumes executing it only after determining that the number of tokens remaining in the token bucket is no less than the number applied for.
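The following sketch illustrates this behaviour; the TokenBucket class and the mapping from the "level of the data in the block" to a token count are assumptions made for illustration, since the text does not specify the exact rule here:

import threading

class TokenBucket:
    """Sketch of the token bucket behaviour described above: a fixed pool of
    tokens; a stage that cannot obtain enough tokens for a block waits (the
    current-stage subtask is suspended) until enough tokens are returned."""
    def __init__(self, total_tokens: int):
        self._available = total_tokens
        self._cond = threading.Condition()

    def acquire(self, needed: int) -> None:
        with self._cond:
            while self._available < needed:   # suspend until enough tokens remain
                self._cond.wait()
            self._available -= needed

    def release(self, count: int) -> None:
        with self._cond:
            self._available += count          # tokens come back after the last subtask
            self._cond.notify_all()

def tokens_for_block(block: dict) -> int:
    """Hypothetical mapping from the level of the data in the block to a token count."""
    return 1 if block.get("level", "normal") == "normal" else 2

bucket = TokenBucket(total_tokens=5)                      # e.g. one per execution module
bucket.acquire(tokens_for_block({"level": "normal"}))     # block n enters the pipeline
bucket.release(1)                                         # returned by the last stage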
S33, after the first-stage execution module completes the validity verification of block n, it adds the block processing task of block n to the first intermediate task queue.
S34, the first-stage execution module continues by extracting the block processing task of block n+1 from the block task queue and then processes it according to steps S31 to S33.
S35, after the second-stage execution module extracts the block processing task of block n from the first intermediate task queue, it verifies the account versions of the transactions related to block n and, after the verification passes, stores the write set formed by all data modified by the transactions in block n, together with its version numbers, in the pre-commit cache.
S36, the second-stage execution module then adds the block processing task of block n to the second intermediate task queue.
S37, the second-stage execution module continues by extracting the block processing task of block n+1 from the first intermediate task queue. If it determines that the version numbers of the relevant accounts are stored in the pre-commit cache, it verifies the account versions of the transactions related to block n+1 using the account version numbers read from the pre-commit cache, stores the write set formed by all data modified by the transactions in block n+1, together with its version numbers, in the pre-commit cache, and then continues according to step S36.
Referring to fig. 10, because the pre-commit cache is provided, all nodes read the same account data when performing account version verification on block n+1, so consistency of data between nodes is ensured without giving up the concurrent processing mode.
It should be noted that steps S36 and S37 may be executed simultaneously, or step S37 may be executed before step S36, depending on the actual situation.
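The cache-first lookup used during account version verification could be sketched as follows; the read_set representation and the dictionary-based cache and database are assumptions made only for this example:

def verify_account_versions(transactions, precommit_cache, db):
    """Second-stage subtask (sketch): for each account read by a transaction,
    prefer the version recorded in the pre-commit cache (written there when
    the previous block was verified); fall back to the database only when the
    cache has no entry for that account."""
    for tx in transactions:
        for account, expected_version in tx["read_set"].items():
            current = precommit_cache.get(account)   # version produced by block n
            if current is None:
                current = db.get(account)            # last committed version
            if current != expected_version:
                return False                         # stale read: verification fails
    return True

# illustrative data: block n updated account "A" to version 7 in the cache
cache, db = {"A": 7}, {"A": 6, "B": 3}
print(verify_account_versions([{"read_set": {"A": 7, "B": 3}}], cache, db))   # -> True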
S38, after the third-stage execution module extracts the block processing task of block n from the second intermediate task queue, it submits the transactions in block n and adds the block processing task of block n to the third intermediate task queue.
S39, the third-stage execution module continues by extracting the block processing task of block n+1 from the second intermediate task queue and then, as in step S38, executes the block submission subtask for block n+1.
S310, the fourth-stage execution module extracts the block processing task of block n from the third intermediate task queue, and updates the write set and version numbers of block n stored in the pre-commit cache into the database.
In step S310, referring to fig. 10, the fourth-stage execution module updates the data in the pre-commit cache into the database.
S311, the fourth-stage execution module clears the write set and version numbers of block n from the pre-commit cache, and adds the block processing task of block n to the fourth intermediate task queue.
S312, the fourth-stage execution module continues by extracting the block processing task of block n+1 from the third intermediate task queue and executes the account update subtask on block n+1 according to steps S310 and S311.
S313, the fifth-stage execution module extracts the block processing task of block n from the fourth intermediate task queue and, after executing the synchronization and termination subtask on block n, releases the tokens that were applied for to execute the block processing task of block n and returns them to the token bucket.
Specifically, for each block, the fifth-stage execution module is generally configured to monitor whether the preceding subtasks for that block have been completed and, once they have, to release the tokens applied for the block.
In practical applications, the block submission process may include subtasks other than the five described above; the five subtasks are simply the most important ones. For the other subtasks, the present invention may likewise configure execution modules to process them in parallel, so that after the fifth-stage execution module determines that the execution modules handling the other subtasks have finished, and the preceding four subtasks have also finished, the execution results of those modules are synchronized and the tokens applied for the block are released. It should be noted that such other subtasks are typically ones that are not time-critical, such as creating a block index or updating the account history.
It should be noted that, since the submission task of a block mainly comprises five subtasks processed in parallel by five execution modules, at most five blocks are allowed to be processed in parallel at the same time in this case.
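The overall arrangement described in steps S31 to S313 can be illustrated with the following minimal sketch, which wires five placeholder stages together with FIFO queues and a five-token bucket. It is only a toy model under stated assumptions (the stage names, the one-token-per-block rule, and the printed "subtask" are invented), not the claimed implementation:

import queue
import threading

SUBTASKS = ["block verification", "account version verification",
            "block submission", "account update", "synchronization and termination"]
NUM_STAGES = len(SUBTASKS)

def make_stage(name, in_q, out_q, tokens=None):
    """One execution module: repeatedly take the next block's task from the queue
    feeding this stage, run this stage's subtask, then hand the task on (FIFO)."""
    def run():
        while True:
            block = in_q.get()
            if block is None:                 # shutdown marker propagates downstream
                if out_q is not None:
                    out_q.put(None)
                break
            print(f"{name}: processed block {block}")   # placeholder for the real subtask
            if out_q is not None:
                out_q.put(block)              # FIFO hand-off to the next stage
            else:
                tokens.release()              # last stage returns the block's token
    return threading.Thread(target=run)

block_task_queue = queue.Queue()                                # the block task queue
intermediate = [queue.Queue() for _ in range(NUM_STAGES - 1)]   # four intermediate queues
token_bucket = threading.Semaphore(NUM_STAGES)                  # at most five blocks in flight

in_queues = [block_task_queue] + intermediate
stages = [make_stage(SUBTASKS[i], in_queues[i],
                     intermediate[i] if i < NUM_STAGES - 1 else None,
                     tokens=token_bucket if i == NUM_STAGES - 1 else None)
          for i in range(NUM_STAGES)]
for stage in stages:
    stage.start()

for n in range(3):                 # submit blocks n, n+1, n+2
    token_bucket.acquire()         # simplified: one token per block
    block_task_queue.put(n)
block_task_queue.put(None)         # drain and stop the pipeline
for stage in stages:
    stage.join()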
Preferably, to further ensure data consistency, the present invention may also use other synchronization mechanisms, such as system locks and semaphores, to maintain data synchronization.
By adopting the control method for block processing tasks provided by the present invention, a block processing task is divided into consecutive multi-stage processing subtasks, at least one execution module of the corresponding stage is configured for each stage of processing subtask, and each stage's execution module, after executing the current-stage processing subtask on the current block to be processed, goes on to execute the same subtask on the next block to be processed. Parallel processing of blocks is thereby achieved, the processing efficiency of blocks is improved, the transaction throughput of the blockchain system is increased, and the performance requirements that more applications place on the blockchain are met.
Based on the same inventive concept, the embodiment of the present invention further provides a control device for block processing tasks, and since the principle of the device for solving the problem is similar to the control method for the block processing tasks, the implementation of the device can refer to the implementation of the method, and repeated details are omitted.
As shown in fig. 11, a schematic structural diagram of a control device for block processing tasks according to an embodiment of the present invention includes:
a splitting module 41, configured to split the block processing task into consecutive multi-level processing subtasks;
a configuration module 42, configured to configure at least one execution module of a corresponding level for each level of processing subtask;
at least one non-last-stage execution module 43i (i is not equal to n) for executing the current-stage processing sub-task on the current block to be processed and then continuing to execute the current-stage processing sub-task on the next block to be processed;
and a last-stage execution module 43n, configured to execute a last-stage processing sub-task on the current block to be processed, and synchronize the processing result into the database.
Preferably, the at least one non-last-stage execution module comprises the first-stage execution module 431;
the first-stage execution module 431 is specifically configured to, after executing the current-stage processing subtask on the current block to be processed, take the block corresponding to the task extracted from the block task queue according to the first-in first-out principle as the next block to be processed, where the tasks in the block task queue are written in according to the receiving sequence of the blocks to be processed on a first-in first-out basis.
Preferably, each non-last-stage execution module 43i (i is not equal to n) is specifically configured to, after executing the current-stage processing subtask on the current block to be processed, add the block processing task of the current block to be processed, according to the first-in first-out principle, to the intermediate task queue configured between itself and the next-stage execution module, an intermediate task queue being configured between each pair of adjacent execution modules;
and each non-first-stage execution module 43i (i is not equal to 1) is specifically configured to, after executing the current-stage processing subtask on the current block to be processed, take the block corresponding to the task extracted, according to the first-in first-out principle, from the intermediate task queue configured between itself and the previous-stage execution module as the next block to be processed.
In a possible implementation, the first-stage execution module 431 is specifically configured to, before executing the current-stage processing subtask on the current block to be processed, apply for the tokens required to execute the block processing task for the current block to be processed according to the level of the data in the current block to be processed, and to determine that the number of tokens applied for is not greater than the number of tokens remaining in the token bucket, where the total number of tokens is the total number of configured execution modules;
and the first-stage execution module 431 is further configured to, if it determines that the number of tokens applied for is greater than the number of tokens remaining in the token bucket, suspend the current-stage processing subtask until it determines that the number of tokens remaining in the token bucket satisfies the number applied for.
Preferably, the last-stage execution module 43n is specifically configured to, after executing the last-stage processing subtask on the current block to be processed, release the tokens applied for to execute the block processing task of the current block to be processed and return them to the token bucket.
In a possible implementation of the control device for block processing tasks provided by the present invention, the processing subtasks include an account version verification subtask;
the execution module 43i for executing the account version verification subtask is specifically configured to: after the account version verification of the transactions related to the current block to be processed passes, store the write set formed by all data modified by the transactions in the current block to be processed, together with its version numbers, in the pre-commit cache; and, when verifying the account versions for the next block to be processed, if it is determined that the version numbers of the relevant accounts are stored in the pre-commit cache, verify the account versions of the transactions related to the next block to be processed using the account version numbers read from the pre-commit cache, and otherwise read the account version numbers from the database to perform that verification.
In a possible implementation of the control device for block processing tasks provided by the present invention, the processing subtasks further include an account update subtask; and
the execution module 43i for executing the account update subtask is specifically configured to, after updating the write set and version numbers of the current block to be processed stored in the pre-commit cache into the database, clear that write set and those version numbers from the pre-commit cache.
Preferably, the processing subtasks further include a block verification subtask, a block submission subtask, and a synchronization and termination subtask, and the subtasks are processed in the following order: the block verification subtask, the account version verification subtask, the block submission subtask, the account update subtask, and the synchronization and termination subtask.
For convenience of description, the above parts are described separately as modules (or units) according to their functions. Of course, when implementing the present invention, the functions of the various modules (or units) may be implemented in one or more pieces of software or hardware.
In some possible embodiments, various aspects of the control method for block processing tasks provided by the present invention may also be implemented in the form of a program product comprising program code; when the program product runs on a computer device, the program code causes the computer device to execute the steps of the control method for block processing tasks according to the various exemplary embodiments of the present invention described above in this specification, for example the control flow of the block processing task in steps S11 to S12 shown in fig. 2.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product of the control method for a block processing task of the embodiment of the present invention may employ a portable compact disc read only memory (CD-ROM) and include program codes, and may be run on a computing device. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device over any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., over the internet using an internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, the features and functions of two or more of the units described above may be embodied in one unit, according to embodiments of the invention. Conversely, the features and functions of one unit described above may be further divided into embodiments by a plurality of units.
Moreover, while the operations of the method of the invention are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A control method of block processing tasks is characterized in that the block processing tasks related to block establishment are divided into continuous multi-stage processing subtasks, the processing subtasks of all stages in the multi-stage processing subtasks are in parallel relation, at least one execution module corresponding to each stage is configured corresponding to each stage of the processing subtasks, intermediate task queues are configured between adjacent execution modules, a block task queue is configured on a first-stage execution module, and the tasks in the block task queue are written in according to the receiving sequence of blocks to be processed in a first-in first-out principle; and the method comprises:
after the first-stage execution module executes the current-stage processing subtask on the current block to be processed, taking a block corresponding to the task extracted from the block task queue according to a first-in first-out principle as a next block to be processed;
after each non-last-stage execution module executes the current-stage processing subtask on the current block to be processed, the block processing task of the current block to be processed is added into an intermediate task queue configured between the non-last-stage execution module and a next-stage execution module according to a first-in first-out principle; and
after each non-first-stage execution module executes the current-stage processing subtask on the current block to be processed, taking a block corresponding to a task extracted according to a first-in first-out principle from an intermediate task queue configured between the non-first-stage execution module and its upper-stage execution module as a next block to be processed; and the last-stage execution module executes the last-stage processing subtask on the current block to be processed and synchronizes the processing result to the database.
2. The method of claim 1, wherein:
before the first-stage execution module executes the current-stage processing subtask on the current block to be processed, the method further includes:
the first-stage execution module applies for a token required by executing a block processing task for the current block to be processed according to the grade of data in the current block to be processed;
the first-stage execution module determines that the number of applied tokens is not greater than the number of remaining tokens in the token bucket, and the total number of the tokens is the total number of the configured execution modules; and
and if the first-stage execution module determines that the number of the applied tokens is greater than the number of the remaining tokens in the token bucket, suspending the current-stage processing subtask until the number of the remaining tokens in the token bucket is determined to meet the number of the applied tokens.
3. The method of claim 2, further comprising:
and after the last-stage execution module executes the last-stage processing subtask on the current block to be processed, releasing the token required by the block processing task applied by the current block to be processed and supplementing the token into a token bucket.
4. The method of any of claims 1-3, wherein the processing subtasks include an account version verification subtask; the execution module for executing the account version verification subtask executes the current-level processing subtask on the current block to be processed, which specifically includes:
and after the account version of the transaction related to the current block to be processed passes verification, storing a write set consisting of all data modified in the transaction in the current block to be processed and the version number of the write set into a pre-submission cache.
5. The method of claim 4, wherein after saving the write set of all data modified in the transaction in the current pending block and its version number into the pre-commit cache, further comprising:
when the account version of the next block to be processed is verified, if it is determined that the version number of the account is stored in the pre-submission cache, verifying the account version of the transaction related to the next block to be processed by using the version number of the account read from the pre-submission cache; otherwise, reading the version number of the account from the database to verify the account version of the transaction related to the next block to be processed.
6. The method of claim 5, wherein the processing subtasks further include an account update subtask; and the execution module for executing the account updating subtask executes the current-level processing subtask on the current block to be processed, and specifically comprises:
and after updating the write set and the version number thereof related to the current block to be processed stored in the pre-submission cache into the database, clearing the write set and the version number thereof related to the current block to be processed stored in the pre-submission cache.
7. The method of claim 6, wherein the processing subtasks further comprise a block verification subtask, a block submission subtask, and a synchronization and termination subtask, and the subtasks are processed in the following order: the block verification subtask, the account version verification subtask, the block submission subtask, the account update subtask, and the synchronization and termination subtask.
8. A control apparatus for a block processing task, comprising:
the system comprises a splitting module, a processing module and a processing module, wherein the splitting module is used for dividing block processing tasks related to block establishment into continuous multi-stage processing subtasks, and all stages of processing subtasks in the multi-stage processing subtasks are in parallel relation;
the configuration module is used for configuring at least one execution module of a corresponding level for each level of processing subtask, configuring an intermediate task queue between adjacent execution modules, and configuring a block task queue for a first level of execution module, wherein tasks in the block task queue are written in according to the receiving sequence of the blocks to be processed and the first-in first-out principle;
the first-stage execution module is used for executing the current-stage processing subtask on the current block to be processed, and then taking a block corresponding to the task extracted from the block task queue according to a first-in first-out principle as a next block to be processed;
each non-last-stage execution module is used for adding the block processing task of the current block to be processed into an intermediate task queue configured between the non-last-stage execution module and the next-stage execution module according to the first-in first-out principle after executing the current-stage processing subtask on the current block to be processed; and
each non-first-stage execution module is used for executing the current-stage processing subtask on the current block to be processed, and then taking a block corresponding to a task extracted according to a first-in first-out principle from an intermediate task queue configured between the non-first-stage execution module and the previous-stage execution module as a next block to be processed; and the last-stage execution module is used for executing the last-stage processing subtask on the current block to be processed and synchronizing the processing result into the database.
9. A computer-readable medium, comprising:
computer-executable instructions stored thereon for performing the method of any one of claims 1 to 7.
10. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 7.
CN201910711607.3A 2018-09-13 2018-09-13 Control method and device for block processing task Active CN110457123B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910711607.3A CN110457123B (en) 2018-09-13 2018-09-13 Control method and device for block processing task

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910711607.3A CN110457123B (en) 2018-09-13 2018-09-13 Control method and device for block processing task
CN201811069813.0A CN109271245B (en) 2018-09-13 2018-09-13 Control method and device for block processing task

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201811069813.0A Division CN109271245B (en) 2018-09-13 2018-09-13 Control method and device for block processing task

Publications (2)

Publication Number Publication Date
CN110457123A CN110457123A (en) 2019-11-15
CN110457123B true CN110457123B (en) 2021-06-15

Family

ID=65189376

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201811069813.0A Active CN109271245B (en) 2018-09-13 2018-09-13 Control method and device for block processing task
CN201910711607.3A Active CN110457123B (en) 2018-09-13 2018-09-13 Control method and device for block processing task

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201811069813.0A Active CN109271245B (en) 2018-09-13 2018-09-13 Control method and device for block processing task

Country Status (1)

Country Link
CN (2) CN109271245B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109769032A (en) * 2019-02-20 2019-05-17 西安电子科技大学 A kind of distributed computing method, system and computer equipment
CN109995613B (en) * 2019-03-29 2021-02-05 北京乐蜜科技有限责任公司 Flow calculation method and device
CN110189121B (en) * 2019-04-15 2021-04-09 创新先进技术有限公司 Data processing method and device, block chain client and block chain link point
US10999283B2 (en) 2019-04-15 2021-05-04 Advanced New Technologies Co., Ltd. Addressing transaction conflict in blockchain systems
CN110046166B (en) * 2019-04-22 2021-06-18 网易(杭州)网络有限公司 Timing task scheduling method and device based on block chain
SG11201910069YA (en) 2019-04-30 2019-11-28 Alibaba Group Holding Ltd Method and device for avoiding double-spending problem in read-write set-model-based blockchain technology
CN110245006B (en) * 2019-05-07 2023-05-02 深圳壹账通智能科技有限公司 Method, device, equipment and storage medium for processing block chain transaction
CN110704112B (en) * 2019-08-30 2021-04-02 创新先进技术有限公司 Method and apparatus for concurrently executing transactions in a blockchain
CN110781196A (en) * 2019-09-06 2020-02-11 深圳壹账通智能科技有限公司 Block chain transaction processing method and device, computer equipment and storage medium
CN110990157A (en) * 2019-12-09 2020-04-10 云南电网有限责任公司保山供电局 Wave recording master station communication transmission system and method adapting to micro-thread mechanism
CN111221639A (en) * 2020-01-09 2020-06-02 杭州趣链科技有限公司 Block pipeline execution method of block chain platform
CN112019350B (en) * 2020-08-31 2024-02-02 光大科技有限公司 Block verification method and device for block chain
CN112734338B (en) * 2021-01-15 2022-07-12 苏州浪潮智能科技有限公司 First-in first-out warehouse-out control method, system and medium
CN113157710B (en) * 2021-02-01 2022-08-09 苏宁金融科技(南京)有限公司 Block chain data parallel writing method and device, computer equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101046724A (en) * 2006-05-10 2007-10-03 华为技术有限公司 Dish interface processor and method of processing disk operation command
CN105528196A (en) * 2015-12-25 2016-04-27 大连陆海科技股份有限公司 Sea chart data processing and displaying system and method with multi-core assembly line work mode
CN106358003A (en) * 2016-08-31 2017-01-25 华中科技大学 Video analysis and accelerating method based on thread level flow line
US20170115976A1 (en) * 2015-10-23 2017-04-27 Oracle International Corporation Managing highly scalable continuous delivery pipelines
CN107402805A (en) * 2016-05-18 2017-11-28 中国科学院微电子研究所 A kind of way to play for time and system of multi-stage pipeline parallel computation
CN108011840A (en) * 2017-12-07 2018-05-08 中国银行股份有限公司 Control method, server and the system of transaction request

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107888514A (en) * 2017-11-17 2018-04-06 北京东土军悦科技有限公司 Message transmission method, message transfer device and electronic equipment in a kind of equipment

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101046724A (en) * 2006-05-10 2007-10-03 华为技术有限公司 Dish interface processor and method of processing disk operation command
US20170115976A1 (en) * 2015-10-23 2017-04-27 Oracle International Corporation Managing highly scalable continuous delivery pipelines
CN105528196A (en) * 2015-12-25 2016-04-27 大连陆海科技股份有限公司 Sea chart data processing and displaying system and method with multi-core assembly line work mode
CN107402805A (en) * 2016-05-18 2017-11-28 中国科学院微电子研究所 A kind of way to play for time and system of multi-stage pipeline parallel computation
CN106358003A (en) * 2016-08-31 2017-01-25 华中科技大学 Video analysis and accelerating method based on thread level flow line
CN108011840A (en) * 2017-12-07 2018-05-08 中国银行股份有限公司 Control method, server and the system of transaction request

Also Published As

Publication number Publication date
CN110457123A (en) 2019-11-15
CN109271245B (en) 2021-04-27
CN109271245A (en) 2019-01-25

Similar Documents

Publication Publication Date Title
CN110457123B (en) Control method and device for block processing task
US11973869B2 (en) Maintaining blocks of a blockchain in a partitioned blockchain network
US11281644B2 (en) Blockchain logging of data from multiple systems
CN109274754B (en) Method, apparatus, and storage medium for synchronizing data in a blockchain network
CN107577427B (en) data migration method, device and storage medium for blockchain system
CN109493223B (en) Accounting method and device
US20230100223A1 (en) Transaction processing method and apparatus, computer device, and storage medium
CN102122289B (en) Dispatching conflicting data changes
US20180189373A1 (en) Log-based distributed transaction management
CN113168652B (en) Block chain transaction processing system and method
CN113743950B (en) Method, node and blockchain system for performing transactions in blockchain system
CN105022656A (en) Management method and device of virtual machine snapshot
CN104573064A (en) Data processing method under big-data environment
CN113157710B (en) Block chain data parallel writing method and device, computer equipment and storage medium
WO2022048358A1 (en) Data processing method and device, and storage medium
CN110599166A (en) Method and device for acquiring transaction dependency relationship in block chain
Memishi et al. Fault tolerance in MapReduce: A survey
CN114942847A (en) Method for executing transaction and block link point
CN110490742B (en) Transaction execution method and device in blockchain
Aksoy et al. Aegean: replication beyond the client-server model
CN113454597A (en) Block chain transaction processing system and method
CN113744062B (en) Method for performing transactions in a blockchain, blockchain node and blockchain
CN111507694A (en) Block chain cross-chain interaction method and system
CN112667593B (en) Method and device for ETL (extract transform and load) process to execute hbase fast loading
CN111143463B (en) Construction method and device of bank data warehouse based on topic model

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40015608

Country of ref document: HK

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant