CN111324613B - Intra-shard data organization and management method for a consortium blockchain - Google Patents

Intra-shard data organization and management method for a consortium blockchain

Info

Publication number
CN111324613B
Authority
CN
China
Prior art keywords
data
tree
merkle
shard
layer
Prior art date
Legal status
Active
Application number
CN202010176568.4A
Other languages
Chinese (zh)
Other versions
CN111324613A (en)
Inventor
佟兴
戚晓冬
张召
金澈清
Current Assignee
East China Normal University
Original Assignee
East China Normal University
Priority date
Filing date
Publication date
Application filed by East China Normal University filed Critical East China Normal University
Priority to CN202010176568.4A priority Critical patent/CN111324613B/en
Publication of CN111324613A publication Critical patent/CN111324613A/en
Application granted granted Critical
Publication of CN111324613B publication Critical patent/CN111324613B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval of structured data, e.g. relational data
    • G06F16/2246 Indexing structures: trees, e.g. B+ trees
    • G06F16/214 Database migration support
    • G06F16/219 Managing data history or versioning
    • G06F16/2272 Management of indexing structures
    • G06F16/23 Updating
    • G06F16/2458 Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; distributed database system architectures therefor

Abstract

The invention provides an intra-shard data organization and management method for consortium blockchains, comprising the following steps: the data inside a shard are managed in three layers, where an aggregation layer generates a digest of all account state data in the shard, an index layer indexes the account state data in the shard, and a data layer manages the historical account state data. The resulting aggregated Merkle B+ tree supports the generation of integrity proofs and multi-version tracing of data, and reduces the read amplification present in the prior art. Each time a block is generated, a snapshot of the newly produced aggregated Merkle B+ tree is taken, and while the aggregated Merkle B+ tree is being updated, requests are served from the previously generated snapshot.

Description

Intra-shard data organization and management method for a consortium blockchain
Technical Field
The invention belongs to the technical field of blockchains, relates to the organization and management of intra-shard data, and in particular relates to a method for organizing and managing data within a shard and a shard-addition method for consortium-blockchain-oriented data organization.
Background
A blockchain is a tamper-proof distributed ledger maintained by mutually distrustful parties; besides being distributed, it also supports Byzantine fault tolerance. However, blockchain systems scale poorly in both computation and storage, which prevents them from meeting enterprise-level requirements and greatly limits the development of blockchains.
Sharding is regarded as a solution for improving blockchain scalability. In an account-model blockchain, sharding raises system throughput mainly by dividing the account data into several parts; each part is called a shard, and each shard executes independently. After the shards are formed, the state data assigned to each shard is currently managed mostly by a Merkle tree or one of its variants on top of a key-value database, for example an MPT combined with an LSM-tree-based key-value store. Because these structures are implemented on key-value storage, they exhibit significant read amplification.
When a shard is added, the data must be repartitioned, and during the data migration and synchronization the system cannot serve requests, so service is temporarily interrupted and the availability of the system decreases.
A data structure for organizing the account state data inside a shard should serve as an index of the in-shard account states, generate a digest of those states, and support tracing historical versions of the account state data, while also satisfying the following: it should ensure load balance among shards, allow fast data migration when shards are split, and provide uninterrupted service during data migration. In the prior art the account state data is usually organized as a Merkle tree or one of its variants, such as the MPT, the MBT, or the Merkle B+ tree. None of these prior-art structures satisfies all of these requirements.
Merkle Bucket Tree (MBT). An MBT consists of two parts: a hash table and a Merkle tree. The hash table consists of a series of buckets, each containing a number of account states (accounts are ordered within each bucket). The hash value of each bucket becomes a leaf node of the Merkle tree above it, and the Merkle root is the digest of all states. Since the MBT is a tree of fixed size, its height is fixed, and the nodes on the path from the root to a bucket can serve as an integrity proof for the account states in that bucket. However, an MBT cannot provide an integrity proof for an individual account state, because each bucket holds a batch of account state data.
The Merkle Patricia Tree (MPT) is a combination of a Patricia trie and a Merkle tree. Besides computing the digest, this structure can also serve as an index of the account states and provide integrity proofs. In addition, the MPT stores every version of the account state data by building a snapshot of the global account state at each block. However, the MPT is not a balanced tree; its height can grow rapidly as the number of accounts increases, which degrades overall performance. Moreover, the MPT is implemented on key-value storage and therefore also exhibits obvious read amplification.
The Merkle B+ tree, a balanced structure with excellent I/O performance, can handle verifiable queries that return integrity proofs. However, the Merkle B+ tree does not support multi-version tracing of data. Thus, each of the above structures has shortcomings.
Disclosure of Invention
To address the shortcomings of the above structures, the invention designs an aggregated Merkle B+ tree (AMB-tree) for intra-shard data management, as shown in FIG. 12. The aggregated Merkle B+ tree has good read and write performance, generates a digest of the global account state data, and provides integrity proofs for queries. Each shard consists of several sub-shards whose account address ranges are non-contiguous; when a shard is added, a shard that must split its aggregated Merkle B+ tree only needs to split one sub-shard, which reduces the amount of data migrated when shards are added and keeps the load balanced across shards during the split. The aggregated Merkle B+ tree also supports fast rebuilding, and every shard can keep serving requests, without downtime, while data is being synchronized (migrated).
The invention further improves the aggregated Merkle B+ tree (the AMB*-tree) by attaching a fixed-size buffer to every node of the aggregation-layer Merkle tree; update operations are first written into a buffer, and each buffer is managed by a Merkle B+ tree. In detail: when state data is updated, the update is written into the buffer of the root node; once the root buffer is full, its contents are partitioned and written into the buffers of the corresponding child nodes, and so on, until the buffered data reaches the appropriate leaf node, from which it is finally written into the index layer. The buffer of each node is organized and managed by a Merkle B+ tree, and all of these Merkle B+ trees are kept in main memory; lookups first consult the buffers. When a shard must be repartitioned and the buffers still hold updates to accounts in that shard, those updates are written back first and only then is the split executed. This optimization greatly improves write efficiency and, thanks to the locality of data access, also improves read efficiency, as shown in FIG. 1.
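For illustration only, the following Go sketch shows the buffered-write idea in a highly simplified form. It is not the patent's implementation: the patent manages each buffer with an in-memory Merkle B+ tree, whereas a plain slice stands in for it here, and all type and function names are assumptions made for the example.

```go
package main

import "sort"

// Update is one pending write to an account's state.
type Update struct {
	Key   string // account address
	Value []byte // new state value
}

// BufferedNode is an aggregation-layer node with a fixed-size buffer of
// pending updates. In the patent each buffer is itself a small in-memory
// Merkle B+ tree; a plain slice stands in for it in this sketch.
type BufferedNode struct {
	Buffer            []Update
	Capacity          int
	SplitKeys         []string        // routing keys; Children[i] covers keys <= SplitKeys[i]
	Children          []*BufferedNode // nil for aggregation-layer leaves
	FlushToIndexLayer func(batch []Update) // at a leaf: hand the batch to the index layer
}

// Put appends an update to this node's buffer; a full buffer is partitioned
// among the children (or flushed into the index layer at a leaf), so updates
// trickle down the aggregation layer level by level.
func (n *BufferedNode) Put(u Update) {
	n.Buffer = append(n.Buffer, u)
	if len(n.Buffer) < n.Capacity {
		return
	}
	pending := n.Buffer
	n.Buffer = nil
	if n.Children == nil {
		n.FlushToIndexLayer(pending)
		return
	}
	for _, upd := range pending {
		i := sort.SearchStrings(n.SplitKeys, upd.Key) // child whose range covers the key
		n.Children[i].Put(upd)
	}
}
```

A lookup would consult the buffers on the root-to-leaf path before falling through to the index layer, which is why recently written keys benefit from the locality of access mentioned above.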
Based on the above, the invention provides an intra-shard data organization and management method for consortium blockchains, which comprises the following steps: in the management of intra-shard data organization, the data inside a shard are managed in three layers, where the aggregation layer generates a digest of all account state data in the shard, the index layer indexes the account state data in the shard, and the data layer manages the historical account state data. The aggregated Merkle B+ tree supports the generation of integrity proofs and multi-version tracing of data, and reduces the read amplification present in the prior art; each time a block is generated, a snapshot of the newly produced aggregated Merkle B+ tree is taken, and while the aggregated Merkle B+ tree is being updated, requests are served from the previously generated snapshot. The method specifically comprises the following steps:
step 1: organize and manage the account address states within a shard with an aggregated Merkle B+ tree, which comprises an index layer, an aggregation layer and a data layer; specifically, the following sub-steps:
step 1-1: each shard is responsible for a portion of the overall address space. To keep the load balanced across shards and to minimize data migration when shards are added, each shard is divided into several sub-shards; the address space inside a sub-shard is contiguous while the address spaces of different sub-shards are not. The account states in each sub-shard are organized and managed by a Merkle B+ tree, which serves as the index of the account states in that sub-shard; these trees form the index layer of the aggregated Merkle B+ tree;
step 1-2: the root nodes of the Merkle B+ trees of all sub-shards in the shard are used as leaf nodes of a Merkle tree that produces the digest of all account states in the shard; this Merkle tree is the aggregation layer of the aggregated Merkle B+ tree;
step 1-3: all historical data of a single account state is managed with a verifiable append-only skip list, and the block number of the block containing the transaction that changed the account state is used as the version number of that state. Several levels of links are maintained for each historical version of an account state, each link pointing to an earlier version through a hash pointer: level 0 links every version, and level n links versions that are 2^n apart, so that the versions form a linked list at each level. This skip list is the data layer of the aggregated Merkle B+ tree (a code sketch of this version linking is given after these sub-steps).
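As an illustration of the data layer described in step 1-3, the following Go sketch links the historical versions of one account with hash pointers at exponentially spaced levels; the concrete encoding (struct layout, how a version is hashed) is an assumption for the example, not the patent's storage format.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
)

// StateVersion is one historical version of an account's state in the data
// layer. BlockNo (the block that changed the state) serves as the version
// number. Links[n] is a hash pointer to the version 2^n positions back, so
// level 0 chains every version and higher levels skip exponentially.
type StateVersion struct {
	BlockNo uint64
	Value   []byte
	Links   []Link
}

// Link is a hash pointer: the block number of the target version together
// with the hash of that version, so each hop can be verified.
type Link struct {
	BlockNo uint64
	Hash    [32]byte
}

func hashVersion(v *StateVersion) [32]byte {
	h := sha256.New()
	var b [8]byte
	binary.BigEndian.PutUint64(b[:], v.BlockNo)
	h.Write(b[:])
	h.Write(v.Value)
	for _, l := range v.Links {
		h.Write(l.Hash[:])
	}
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

// Append adds a new version at the head of an account's history; history is
// ordered newest first, so history[i] is i+1 versions back from the new head.
func Append(history []*StateVersion, blockNo uint64, value []byte) []*StateVersion {
	v := &StateVersion{BlockNo: blockNo, Value: value}
	for step := 1; step <= len(history); step *= 2 { // link 1, 2, 4, ... versions back
		target := history[step-1]
		v.Links = append(v.Links, Link{BlockNo: target.BlockNo, Hash: hashVersion(target)})
	}
	return append([]*StateVersion{v}, history...)
}
```

Because every link carries the hash of the version it points to, a verifier can recompute the hashes along the hops visited by a query and use them as the data-layer integrity proof.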
Step 2: perform state queries with integrity proofs and multi-version tracing against the aggregated Merkle B+ tree; specifically, the following sub-steps:
step 2-1: in the index layer, the hash pointers stored in the nodes on the path from the top of the Merkle B+ tree to the leaf node satisfying the search condition serve as the integrity proof of the index layer. Concretely, the search starts at the root node of the sub-shard containing the search key, finds the child node covering the search key, and proceeds level by level until a leaf node is reached; the hash values of all nodes on the search path constitute the integrity proof of the index layer;
step 2-2: in the aggregation layer, the siblings of all nodes on the path from the aggregation-layer root to the leaf node that points to the sub-shard satisfying the search condition serve as the integrity proof of the aggregation layer. The search starts from the root node of the sub-shard containing the search key (that root acts as a leaf node of the aggregation layer), finds its sibling, then the sibling of its parent, and so on up to the root; all siblings on the path from the leaf to the root constitute the integrity proof of the aggregation layer (see the verification sketch after these sub-steps);
step 2-3: in the data layer, when a specific account state version is requested, given an account and a block number, the data of the largest state version not exceeding the block number should be returned. The search proceeds as follows: for the specified account, starting from the current state version, examine all linked versions whose version numbers are greater than or equal to the given block number; if the smallest such version equals the given block number, it is the requested version; if the smallest such version is greater than the given block number, recurse from that version following the same steps. The recursion ends when the given block number lies between two adjacent version numbers. All state versions visited during the recursion constitute the integrity proof of the data layer.
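To make the aggregation-layer proof of step 2-2 concrete, the Go sketch below shows how a verifier could recompute the shard digest from a sub-shard root and the returned sibling hashes. The ProofStep encoding and the use of SHA-256 are assumptions for the example (the patent's prototype uses Keccak-256).

```go
package main

import (
	"bytes"
	"crypto/sha256"
)

// ProofStep is one sibling hash on the path from an index-layer root (a leaf
// of the aggregation layer) up to the aggregation-layer root.
type ProofStep struct {
	Sibling []byte
	Left    bool // true if the sibling is the left child at this level
}

// VerifyAggregationProof recomputes the shard digest from the root hash of a
// sub-shard's Merkle B+ tree and the sibling hashes returned as the
// aggregation-layer integrity proof, and compares it with the published digest.
func VerifyAggregationProof(subShardRoot []byte, proof []ProofStep, shardDigest []byte) bool {
	cur := subShardRoot
	for _, step := range proof {
		h := sha256.New()
		if step.Left {
			h.Write(step.Sibling)
			h.Write(cur)
		} else {
			h.Write(cur)
			h.Write(step.Sibling)
		}
		cur = h.Sum(nil)
	}
	return bytes.Equal(cur, shardDigest)
}
```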
Step 3: update operations on the aggregated Merkle B+ tree; specifically, the following sub-steps:
step 3-1: each time a block is generated, a snapshot of the new aggregated Merkle B+ tree is taken; while the transactions in the next block are being executed, the state data visible to clients is not updated and service is provided according to the snapshot generated for the previous block. All dirty nodes of the aggregated Merkle B+ tree that need to be updated are copied, and the index layer and the aggregation layer are updated on the copies;
step 3-2: when the transactions within a block are executed after a new block arrives, the state data, and therefore the aggregated Merkle B+ tree, must be updated; the nodes that need updating are copied and modified, and once the nodes within the shard reach consensus on the block, the updated nodes are written back into the aggregated Merkle B+ tree (a sketch of this copy-on-write update follows).
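The copy-on-write snapshotting of steps 3-1 and 3-2 can be sketched as follows; this is a minimal Go illustration with an assumed, simplified node layout, and hashing and persistence are stubbed out.

```go
package main

// Node is a simplified node of the aggregated Merkle B+ tree (index or
// aggregation layer). Snapshots are copy-on-write: nodes are never modified
// in place, so the previous root keeps serving reads while a new block is
// being applied.
type Node struct {
	Keys     []string
	Children []*Node // nil at leaves
	Hash     []byte
}

// applyUpdate returns a new root in which every node on the path to key has
// been copied ("dirty") and updated; all untouched subtrees are shared with
// the old snapshot. recomputeHash stands in for rehashing a copied node.
func applyUpdate(old *Node, key string, update func(leaf *Node), recomputeHash func(*Node)) *Node {
	copied := *old // shallow copy: children still shared with the old snapshot
	if copied.Children == nil {
		update(&copied)
	} else {
		i := route(copied.Keys, key) // pick the child covering key
		children := append([]*Node(nil), copied.Children...)
		children[i] = applyUpdate(copied.Children[i], key, update, recomputeHash)
		copied.Children = children
	}
	recomputeHash(&copied)
	return &copied
}

func route(keys []string, key string) int {
	for i, k := range keys {
		if key < k {
			return i
		}
	}
	return len(keys)
}
```

Once the nodes in the shard reach consensus on the new block, the new root becomes the serving snapshot; otherwise the old root is kept and used for rollback, as described above.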
Based on the above, the invention further provides a shard-addition method that requires no service downtime; it covers two parts, serial addition and parallel addition. In the serial shard-addition scheme, information about the existing shards is first obtained, several hash keys are generated at random, the shards containing those hash keys are split, the newly split-off data is synchronized into the new shard, and the new shard organizes the data and builds the corresponding data structures. If, before the new shard finishes data synchronization, its nodes reach consensus on a transaction whose execution must access or update data that has not yet been synchronized, the nodes of the new shard request the relevant data from the shard that still holds the original data, execute the transaction, and fold the returned data into the locally managed data structure. In the parallel-addition process, a newly created shard a may still be synchronizing data when another new shard b needs to split shard a; in this case shard a first requests the relevant data from the shard holding the original data in order to perform the split and sends the split result to shard b, and both shard a and shard b then answer data queries and data update requests by requesting data from the shard holding the original data according to that split result. The specific technical scheme comprises the following steps:
step 1: serial shard addition:
step 1-1: obtain the information of all existing shards (the portion of the account space each shard is responsible for, etc.);
step 1-2: randomly generate several hash keys (as many as the number of sub-regions), locate the shards containing those hash keys, and split the sub-regions of those shards at the hash keys;
step 1-3: after the data is partitioned it is not synchronized into the new shard immediately; synchronization starts when the nodes of the shard are idle;
step 1-4: after data synchronization completes, the data of each sub-region is managed by a Merkle B+ tree and the digest of the shard's data is produced by a Merkle tree (all sub-region data are aggregated by the Merkle tree to generate the digest); the new shard then serves requests from its own data.
The method for splitting the Merkle B+ tree of a shard sub-region is as follows (a code sketch is given after this list):
step 1-2-1: given a Merkle B+ tree and a particular hash key, traverse downward from the root node to the leaf, splitting the child of the current node that contains the hash key, which yields two subtrees.
Step 1-2-2: adjust the two subtrees, removing invalid nodes, until both are valid trees that satisfy the requirements;
step 1-2-3: after the adjustment, according to the corresponding mapping rule, one of the two subtrees replaces the data of the original shard and the other is synchronized into the new shard.
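The sketch below illustrates step 1-2-1 in Go on a simplified node type: the tree is cut along the path to the split hash key, yielding a left part and a right part. The rebalancing of steps 1-2-2 and 1-2-3 is omitted, and all names are illustrative rather than taken from the patent's implementation.

```go
package main

// splitNode is a simplified node of a sub-region's Merkle B+ tree; hashes and
// leaf values are omitted because only the structural split is illustrated.
type splitNode struct {
	Keys     []string
	Children []*splitNode // nil at leaves; len(Children) == len(Keys)+1 otherwise
}

// routeKey returns the index of the child whose key range contains key.
func routeKey(keys []string, key string) int {
	for i, k := range keys {
		if key < k {
			return i
		}
	}
	return len(keys)
}

// splitByKey walks from the root down the single child whose range contains
// splitKey and cuts the tree into a left part (keys <= splitKey) and a right
// part (keys > splitKey). The post-split adjustment of the two subtrees
// (removing keys that lost a subtree, merging underfull nodes) is left out.
func splitByKey(n *splitNode, splitKey string) (left, right *splitNode) {
	if n == nil {
		return nil, nil
	}
	if n.Children == nil { // leaf: partition the keys directly
		i := routeKey(n.Keys, splitKey)
		return &splitNode{Keys: append([]string(nil), n.Keys[:i]...)},
			&splitNode{Keys: append([]string(nil), n.Keys[i:]...)}
	}
	i := routeKey(n.Keys, splitKey)
	childL, childR := splitByKey(n.Children[i], splitKey)
	left = &splitNode{
		Keys:     append([]string(nil), n.Keys[:i]...),
		Children: append(append([]*splitNode(nil), n.Children[:i]...), childL),
	}
	right = &splitNode{
		Keys:     append([]string(nil), n.Keys[i:]...),
		Children: append([]*splitNode{childR}, n.Children[i+1:]...),
	}
	return left, right
}
```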
Step 2: parallel shard addition:
step 2-1: when shards are added in parallel, a newly created shard S1 may not yet have synchronized its data back from the other shards when another new shard S2 needs to split S1. This case is handled as follows: when the newest shard S2 asks the previously created but not-yet-synchronized shard S1 to split, S1 first requests the necessary data from the shard where the data currently resides, partitions the data according to S2's request, and informs S2 of the relevant information after the split; afterwards S1 and S2 each synchronize their data from the shard where the data resides.
Step 3: query processing without service interruption during shard addition (data synchronization):
step 3-1: if, before the new shard finishes data synchronization, its nodes reach consensus on a transaction whose execution needs to access data that has not yet been synchronized, the nodes of the new shard request the relevant data from the shard holding the original data, execute the transaction, and build the locally managed data structure from the returned data;
step 3-2: if a data update request must be executed before the new shard finishes data synchronization, the nodes of the new shard request the relevant data from the shard holding the original data and perform the update locally;
step 3-3: whenever the new shard has no pending data query or data update request, its nodes synchronize data (a sketch of this on-demand processing follows).
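A minimal Go sketch of the on-demand behaviour of steps 3-1 to 3-3, assuming a hypothetical RemoteFetcher that asks the shard still holding the original data:

```go
package main

// RemoteFetcher asks the shard that still holds the original data for an
// account's state (in the patent the reply also carries an integrity proof,
// omitted here). The function type is an assumption for this sketch.
type RemoteFetcher func(account string) ([]byte, error)

// MigratingStore serves requests while a new shard is still synchronizing:
// accounts already copied locally are answered directly, everything else is
// fetched on demand from the source shard and folded into the local store.
type MigratingStore struct {
	local map[string][]byte
	fetch RemoteFetcher
}

func NewMigratingStore(fetch RemoteFetcher) *MigratingStore {
	return &MigratingStore{local: map[string][]byte{}, fetch: fetch}
}

func (s *MigratingStore) Get(account string) ([]byte, error) {
	if v, ok := s.local[account]; ok {
		return v, nil // already synchronized (or previously fetched)
	}
	v, err := s.fetch(account) // steps 3-1/3-2: request from the original shard
	if err != nil {
		return nil, err
	}
	s.local[account] = v // build up the local data structure from the reply
	return v, nil
}
```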
The beneficial effects of the invention include:
A new data structure, the aggregated Merkle B+ tree, is designed for intra-shard data management. It supports integrity proofs and multi-version data tracing while reducing the read amplification present in the prior art. A snapshot is generated for the newly produced aggregated Merkle B+ tree each time a block is generated, and while the tree is being updated, requests are served from the previously generated snapshot, so service can be provided continuously. During shard addition the invention uses lazy synchronization so that system service is not blocked, overcoming the problem that the data service is briefly unavailable.
Drawings
FIG. 1 is a schematic diagram of the AMB*-tree.
FIG. 2 shows the latency of the Get() function on state data as the number of accounts grows from 0.1M (1M = 1,000,000) to 1.5M.
FIG. 3 shows the latency of the Put() function on state data for account numbers from 0.1M to 1.5M.
FIG. 4 compares the latency of the Hist() operation of the 4 data organization and management structures on a data set with about 0.1M historical account state versions.
FIG. 5 shows the system throughput of all data organization and management structures for different numbers of accounts.
FIG. 6 shows the total latency of all data organization and management structures under read/write ratios of 1:1 and 1:3.
FIG. 7 shows the synchronization time of the different data organization and management structures for different amounts of synchronized data.
FIG. 8 shows the latency of the split implemented with the Opr() function for 3 data organization and management structures as the number of executed blocks varies from 1 to 8000.
FIG. 9 compares the time overhead of adding one shard under the different data organization and management structures.
FIG. 10 shows the system throughput when a single shard is added under all data organization and management structures, with a data volume of about 2 GB.
FIG. 11 shows the system throughput when two shards are added under all data organization and management structures, with a data volume of about 2 GB.
FIG. 12 is a schematic diagram of the consortium-chain-oriented intra-shard data organization and management method of the present invention.
FIG. 13 illustrates the generation of the integrity proof for the aggregation layer.
FIG. 14 illustrates the generation of the integrity proof for the index layer.
FIG. 15 shows the data layer linking the historical state data of an account via a skip list.
FIG. 16 shows the snapshot generation and update operation of the aggregated Merkle B+ tree.
FIG. 17 is a pseudo-code illustration of the serial shard-addition approach.
FIGS. 18-1 to 18-4 illustrate splitting the Merkle B+ tree of a shard sub-region with a hash key.
FIG. 19 shows the request-and-execute process when shard-addition data has not finished synchronizing.
FIG. 20 shows the remote state data request process when shards are added in parallel.
Detailed Description
The method is described in detail in connection with the following specific embodiments and the accompanying drawings. Except for what is specifically mentioned below, the overall procedures, conditions, experimental methods and the like used to carry out the method are common knowledge in the art, and the invention places no particular limitation on them.
The invention aims to design a scalable intra-shard account state data storage and management method and, addressing the shortcomings of the prior art, provides a consortium-chain-oriented intra-shard data organization and management method. Intra-shard data is organized and managed by the proposed aggregated Merkle B+ tree in three layers: an aggregation layer, an index layer and a data layer. The aggregation layer generates the digest of all account state data in the shard, the index layer indexes the account state data in the shard, and the data layer manages the historical account state data. The aggregated Merkle B+ tree supports the generation of integrity proofs and multi-version tracing of data, and reduces the read amplification of the prior art; each time a block is generated, a snapshot of the newly produced aggregated Merkle B+ tree is taken, and while the tree is being updated, requests are served from the previously generated snapshot.
The consortium-chain-oriented intra-shard data organization and management method of the invention specifically comprises the following steps:
step 1: organize and manage the account address states within a shard with an aggregated Merkle B+ tree:
step 1-1: each shard is responsible for managing several different sub-shards; the account states in each sub-shard are organized and managed by a Merkle B+ tree, which serves as the index of the account states in that sub-shard; these trees form the index layer of the aggregated Merkle B+ tree;
step 1-2: the root nodes of the Merkle B+ trees of the sub-shards are used as leaf nodes of a Merkle tree that produces the digest of all account states in the shard; this Merkle tree is the aggregation layer of the aggregated Merkle B+ tree;
step 1-3: the entire history of a single account state is managed with a verifiable append-only skip list. The block number of the block containing the transaction that changed the account state is used as the version number of that state. Several levels of links are maintained for each historical version, each link pointing to an earlier version through a hash pointer: level 0 links every version, and level n links versions that are 2^n apart, so that the versions form a linked list at each level; this skip list is the data layer of the aggregated Merkle B+ tree;
step 2: perform state queries with integrity proofs against the aggregated Merkle B+ tree:
step 2-1: in the index layer, the hash pointers stored in the nodes on the path from the top of the Merkle B+ tree to the leaf node satisfying the search condition serve as the integrity proof of the index layer. Concretely, the search starts at the root node of the sub-shard containing the search key, finds the child node covering the search key, and proceeds level by level until a leaf node is reached; the hash values of all nodes on the search path constitute the integrity proof of the index layer.
Step 2-2: in the aggregation layer, the siblings of all nodes on the path from the aggregation-layer root to the leaf node pointing to the sub-shard satisfying the search condition serve as the integrity proof of the aggregation layer. The search starts from the root node of the sub-shard containing the search key; this root node is the root of the index layer and acts as a leaf node of the aggregation layer. Its sibling is located, then the sibling of its parent, and so on up to the root; all siblings on the path from the leaf to the root constitute the integrity proof of the aggregation layer.
Step 2-3: in the data layer, when a specific account state version is requested, given an account and a block number, the data of the largest state version not exceeding the block number should be returned. The search proceeds as follows: for the specified account, starting from the current state version, examine all linked versions whose version numbers are greater than or equal to the given block number; if the smallest such version equals the given block number, it is the requested version; if it is greater than the given block number, recurse from that version following the same steps. The recursion ends when the given block number lies between two adjacent version numbers. All state versions visited during the recursion constitute the integrity proof of the data layer.
Step 3: update operations on the aggregated Merkle B+ tree:
step 3-1: each time a block is generated, a snapshot of the newly produced aggregated Merkle B+ tree is taken; while the aggregated Merkle B+ tree is being updated, requests are served from the previously generated snapshot.
In step 3-1, each time a block is generated, a snapshot is taken of the new aggregated Merkle B+ tree; while the transactions in the next block are executed, the state data visible to clients is not updated and service is provided from the snapshot generated for the previous block; all dirty nodes of the aggregated Merkle B+ tree that need updating are copied, and the index layer and the aggregation layer are updated on the copies.
Step 3-2: when the aggregated Merkle B+ tree data is updated, the write-back is not performed immediately but only after the validators inside the shard have reached consensus on the newly generated aggregated Merkle B+ tree.
In step 3-2, when the transactions within a block are executed after a new block arrives, the state data and therefore the aggregated Merkle B+ tree must be updated; the nodes that need updating are copied and modified, and once the nodes within the shard reach consensus on the block, the updated nodes are written back into the aggregated Merkle B+ tree.
The invention also aims to design a shard-addition method without service downtime and, addressing the shortcomings of the prior art, provides a shard-addition method that keeps serving while shards are added; it involves two parts, serial addition and parallel addition. In serial shard addition, information about the existing shards is first obtained, several hash keys are generated at random, the shards containing those hash keys are split, the newly split-off data is synchronized into the new shard, and the new shard organizes the data and builds the corresponding data structures. If, before the new shard finishes data synchronization, its nodes reach consensus on a transaction whose execution must access or update data that has not yet been synchronized, the nodes of the new shard request the relevant data from the shard that still holds the original data, execute the transaction, and build the locally managed data structure from the returned data. In the parallel-addition process, a newly created shard a may still be synchronizing data when another new shard b needs to split shard a; in this case shard a first requests the relevant data from the shard holding the original data in order to perform the split and sends the split result to shard b, and both shard a and shard b then answer data queries and data update requests by requesting data from the shard holding the original data according to that split result.
The shard-addition method without service downtime of the invention comprises the following specific steps:
step 1: serial shard addition;
step 2: parallel shard addition;
step 3: query processing without service interruption during shard addition (data synchronization).
the step 1 specifically comprises:
step 1-1: acquiring all existing fragment information (account address space and the like responsible for each fragment);
step 1-2: randomly generating a plurality of (the number of) hash keys which is the same as that of the sub-regions, searching the fragments containing the hash keys, and segmenting the sub-regions of the fragments containing the hash keys by using the hash keys;
step 1-3: after data division, data is not immediately synchronized into a new fragment, but when nodes in the fragment are idle, the data starts to be synchronized;
step 1-4: after the data synchronization is completed, data of each sub-region is managed by using a Mercker B + tree, an abstract of fragment data is generated by using the Mercker tree (all sub-region data are aggregated by the Mercker tree to generate the abstract), and a new fragment provides services to the outside according to the data of the new fragment.
Step 1-2 comprises:
step 1-2-1: given the Merkle B+ tree of a sub-region and the corresponding generated hash key, traverse downward from the root node to the leaf, splitting the child of the current node that contains the hash key, which yields two subtrees;
step 1-2-2: adjust the two subtrees, removing invalid nodes, until both are valid trees that satisfy the requirements;
step 1-2-3: after the adjustment, according to the mapping rule, the subtree covering the larger hash values replaces the original tree, and the subtree covering the smaller hash values is split off into the newly created shard.
Step 1-4 comprises:
step 1-4-1: obtain the root node of the Merkle B+ tree of each sub-region as a data item;
step 1-4-2: generate a hash value for each data item with a hash function;
step 1-4-3: concatenate the generated hash values pairwise into longer strings, hash each concatenation again with the hash function, and so on, until a single hash value remains; this hash value is the digest of all data items (see the sketch below).
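Steps 1-4-1 to 1-4-3 amount to building a Merkle root over the sub-region roots. A small Go sketch follows, with SHA-256 standing in for the prototype's Keccak-256 and an assumed carry-up rule for an odd number of items (the patent does not spell that detail out).

```go
package main

import "crypto/sha256"

// shardDigest aggregates the Merkle B+ tree root hashes of a shard's
// sub-regions into a single digest by repeated pairwise hashing. When a
// level has an odd number of items, the last one is carried up unchanged
// (an assumption made for this sketch).
func shardDigest(subRegionRoots [][]byte) []byte {
	level := subRegionRoots
	for len(level) > 1 {
		var next [][]byte
		for i := 0; i+1 < len(level); i += 2 {
			h := sha256.Sum256(append(append([]byte{}, level[i]...), level[i+1]...))
			next = append(next, h[:])
		}
		if len(level)%2 == 1 {
			next = append(next, level[len(level)-1])
		}
		level = next
	}
	if len(level) == 0 {
		return nil
	}
	return level[0] // digest of all sub-region data items
}
```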
In step 2, compared with serial shard addition, parallel addition may produce the situation where a newly created shard S1 has not yet synchronized its data back from the other shards when another new shard S2 needs to split S1. This case is handled as follows: when the newest shard S2 asks the previously created but not-yet-synchronized shard S1 to split, S1 first requests, according to S2's request, the necessary data from the shard where the data currently resides and partitions the data; after the split it informs S2 of the relevant information, and S1 and S2 each synchronize their data from the shard where the data resides.
Step 3 specifically comprises the following steps:
step 3-1: if, before the new shard finishes data synchronization, its nodes reach consensus on a transaction whose execution needs to access data that has not yet been synchronized, the nodes of the new shard request the relevant data from the shard holding the original data, execute the transaction, and build the locally managed data structure from the returned data;
step 3-2: if a data update request must be executed before the new shard finishes data synchronization, the nodes of the new shard request the relevant data from the shard holding the original data and perform the update locally;
step 3-3: whenever the new shard has no pending data query or data update request, its nodes synchronize data.
Example 1
This embodiment is an intra-shard data organization and management method implemented in a consortium blockchain system.
FIG. 12 shows the data structure, an aggregated Merkle B+ tree, used for organizing and managing accounts within a shard. A Merkle B+ tree manages the state within each sub-shard, and the multiple sub-shards (the account state data of each sub-shard organized as a Merkle B+ tree) are then aggregated, from which the digest of all states in the shard is computed. Meanwhile, the verifiable append-only skip list supports multi-version state data tracing and the generation of integrity proofs. The aggregated Merkle B+ tree thus consists of three layers: an aggregation layer made up of a Merkle tree, an index layer made up of Merkle B+ trees, and a data layer made up of verifiable skip lists.
FIG. 13 shows the generation of the integrity proof for the aggregation layer: starting from the root node of the aggregation layer down to the leaf node pointing to the sub-shard that satisfies the search condition, the siblings of all nodes on the path serve as the integrity proof of the aggregation layer; node 3 and node 4 are the integrity proof of data 2.
FIG. 14 shows the generation of the integrity proof for the index layer: starting from the top of the Merkle B+ tree down to the leaf node satisfying the search condition, the hash pointers stored in the nodes on the path serve as the integrity proof of the index layer; as shown in FIG. 14, the hash values of node 1, node 2 and node 3 are the integrity proof of the requested account state data.
FIG. 15 shows the skip list that manages the historical state data of an account. As shown in the figure, the current state data has version number 8. When the state data with version number 5 is requested, the versions greater than 5 that version 8 links to are version 6 and version 7; the smallest of them, version 6, is selected, and version 6 links directly to version 5, which is the requested version. Versions 8 and 6 serve as the integrity proof of version 5.
FIG. 16 shows the snapshot generation and update operation of the aggregated Merkle B+ tree. At run time the system must serve requests based on the most recently committed aggregated Merkle B+ tree. To avoid blocking transaction execution or client queries, the invention generates a snapshot of the aggregated Merkle B+ tree at every block. While the aggregated Merkle B+ tree is being updated, requests are answered from the previous aggregated Merkle B+ tree. When a snapshot is taken of the aggregated Merkle B+ tree, nodes are not overwritten in place; instead, all dirty nodes that need updating are copied and updated in the index and aggregation layers, as shown in FIG. 16. Because the skip list is verifiable and append-only, a new version of a state can simply be appended to the aggregated Merkle B+ tree without modifying the snapshot at the data layer; the new version contains forward links to its previous versions. The new digest ra of the aggregated Merkle B+ tree is then recalculated to check state consistency among the nodes in the shard. Once the nodes in the shard reach consensus on the new digest ra, they start serving from the new aggregated Merkle B+ tree; if the nodes in the shard fail to reach consensus on the new block, the snapshot is used for rollback.
Example 2
This embodiment is a shard-addition method without service downtime implemented in a consortium blockchain system.
FIG. 17 is the pseudo code of the serial shard-addition process. First the new shard obtains the information of all existing shards through the Sync() function (lines 2 to 4), looping over the shards and synchronizing the information of each shard Ci to the new shard Cnew. The new shard then selects several sub-regions (as many as the number of sub-shards per shard) based on the information of the existing shards. In each selection, the new shard first randomly generates a hash key HKs with the SplitKey() function (line 6) and finds, within shard Csi (the shard containing hash key HKs), the sub-region (sub-shard) Zi that contains HKs (line 7). The new shard then deploys, through the Opr() function, a split operation at HKs on region Zi, generating two sub-regions; the Opr() function returns the digest of the data of the sub-region that will be split off into the new shard (the nodes in the original shard sign the digest with BLS signatures, ensuring that a majority of nodes in the original shard endorse it) (line 8). Afterwards, the new shard starts synchronizing data back from the original shards (line 10), then aggregates the data of the c synchronized sub-regions (line 11), and finally starts processing transactions and providing service (line 12).
FIGS. 18-1 to 18-4 illustrate splitting a Merkle B+ tree with a hash key. A Merkle B+ tree and a specific hash key are given. Traversing from the root node down to the leaf, the child of the current node that contains the given hash key is split, producing two subtrees, as shown in FIG. 18-2. The two subtrees are then adjusted to remove invalid nodes and become valid trees, as shown in FIGS. 18-3 and 18-4. In detail: the given Merkle B+ tree is split into two partial subtrees; as shown in FIG. 18-2, node <70> is the split child of the root <30,90>. The two partial subtrees are then adjusted into valid Merkle B+ trees, as shown in FIGS. 18-3 and 18-4 respectively. To adjust the left tree, key 90, which has no right subtree, is first removed from the root <30,90>, and the root becomes the valid node <30>. Key 70 is then removed from node <70>, which is merged with its left sibling <10> into node <10,30>. The left partial subtree is then a valid Merkle B+ tree. The right partial subtree is adjusted similarly. Finally, the right partial subtree replaces the original Merkle B+ tree, the digest of this shard is recalculated, and the left partial tree is transferred to the new shard.
FIG. 19 shows the case where, during shard addition, transaction execution needs to access data that has not yet been synchronized. When the new shard has not finished data synchronization but its nodes have reached consensus on a transaction, and executing that transaction requires access to data not yet synchronized, the nodes of the new shard request the relevant data from the shard holding the original data, execute the transaction, and build the locally managed data structure from the returned data. When the new shard has not finished data synchronization and a data update request must be executed, the nodes of the new shard request the relevant data from the shard holding the original data and perform the update locally. Whenever the new shard has no pending data query or update request, its nodes synchronize data.
When multiple shards join the system at the same time, the situation is more complicated than adding shards one by one. In particular, one region may be split for a new shard while another new shard has not yet completed data synchronization. For example, a new shard Cn2 selects a sub-region Zx belonging to another new shard Cn1 and splits it into two sub-regions Zxl and Zxr. However, Cn2 cannot synchronize the data of the selected region Zxl from Cn1, because Cn1 has not yet synchronized all the states of region Zx from yet another shard Cn0. In this case the scheme proceeds as follows:
1) the newly created shard Cn2 deploys, through the function Opr(), a split operation at hash key HKs on sub-region Zx of shard Cn1.
2) upon receiving the split request from Cn2, Cn1 splits sub-region Zx into Zxl and Zxr and sends Zxl back. To split Zx, shard Cn1 can request from shard Cn0 the necessary Merkle B+ tree nodes, namely the path from the root ra of the Merkle B+ tree of Zx to the leaf containing the split hash key HKs, as shown in FIGS. 18-1 to 18-4.
3) finally, Cn1 rebuilds its aggregated Merkle B+ tree according to region Zxr and stops synchronizing the data of Zxl. When shard Cn2 receives Zxl, it starts synchronizing the data of Zxl directly from shards Cn0 and Cn1.
The remote state data request process when shards are added in parallel is shown in FIG. 20.
Example 3 (Experimental validation)
This embodiment is a prototype of a sharded blockchain system built on an open-source PBFT system; it integrates the AMB-tree, the AMB*-tree and the shard-addition protocol to verify the effectiveness of the design. The intra-shard consensus algorithm is the PBFT implementation from the Hyperledger Fabric 0.6 consensus module, and intra-shard data organization and management is based on the aggregated Merkle B+ tree. Each shard runs the PBFT algorithm as a cluster to reach consensus, and cross-shard transactions are processed with a two-phase-commit-based approach. The smart contract code is written in Go. At the network layer, every validating node in a shard maintains a TCP connection to all validating nodes so that they can communicate with each other directly. The hash function is Keccak-256, with 256-bit hash values. The aggregated Merkle B+ tree is implemented in roughly 3000 lines of C++ code.
In the design, the invention provides three core APIs for applications (smart contracts): Put(key, val, blkNo), Get(key, [blkNo]) and Hist(key, s, t):
Put() writes the value val for the account address key into the system, tagged with block number blkNo as its version;
Get() returns the state value of address key at block blkNo, and by default returns the value of key in the latest block;
Hist() returns the list of (val, blkNo) tuples of the account with address key (a Go paraphrase of these calls follows).
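Paraphrased as a Go interface; the parameter types, the zero-value convention for "latest block", and the reading of s and t as a block-number range are assumptions made for the example, since the patent only gives the signatures listed above.

```go
package main

// Version is one (value, block number) pair in an account's history.
type Version struct {
	Val   []byte
	BlkNo uint64
}

// StateStore mirrors the three application-facing calls described above.
type StateStore interface {
	// Put writes val for account address key, tagged with block number blkNo
	// as its version.
	Put(key string, val []byte, blkNo uint64) error
	// Get returns the state of key as of block blkNo; blkNo == 0 stands in
	// for the "latest block" default of the original API.
	Get(key string, blkNo uint64) ([]byte, error)
	// Hist returns the (val, blkNo) version list of key, here assumed to be
	// bounded by the block numbers s and t.
	Hist(key string, s, t uint64) ([]Version, error)
}
```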
Three functions are involved in the shard-addition process:
Request(Rs, Cr, args): node Rs in shard Cs requests some data from shard Cr, where args are the request parameters; the returned data must be endorsed by 2f+1 nodes of shard Cr (f is the number of malicious nodes assumed by PBFT, at most 1/3 of the nodes), which is implemented with BLS signatures;
Sync(Cs, Cr, args): shard Cs synchronizes the data of another shard Cr through the function Sync(), while consensus ensures that a majority of the nodes in Cs obtain the same data; Sync() is implemented on top of Request();
Opr(Cs, Cr, args): shard Cs deploys a specific operation on shard Cr through the function Opr(). The Opr() function is invoked by a majority of the nodes in shard Cs. If Opr() needs to return values agreed upon by a majority of the nodes of shard Cr to shard Cs, it also ensures that a majority of the nodes in Cs receive the returned data (a Go paraphrase of these primitives follows).
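The three primitives can likewise be paraphrased as a Go interface; the identifiers and types are illustrative, and the BLS/2f+1 endorsement checks are only indicated in comments rather than implemented.

```go
package main

// ShardID and NodeID are illustrative identifiers, not the patent's types.
type ShardID string
type NodeID string

// CrossShardOps paraphrases the three shard-addition primitives; nothing
// here is the patent's concrete wire protocol.
type CrossShardOps interface {
	// Request: node rs of the calling shard asks shard cr for data; the
	// reply is accepted only with signatures from 2f+1 of cr's nodes.
	Request(rs NodeID, cr ShardID, args []byte) ([]byte, error)
	// Sync: the calling shard pulls another shard's data and reaches
	// internal consensus so that a majority of its nodes hold the same copy;
	// built on top of Request.
	Sync(cr ShardID, args []byte) error
	// Opr: the calling shard deploys an operation (e.g. a split) on shard cr
	// on behalf of a majority of its own nodes, optionally returning values
	// agreed by a majority of cr's nodes.
	Opr(cr ShardID, args []byte) ([]byte, error)
}
```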
Efficiency of basic read/write operations: FIGS. 2 to 4 show the data access latency of the three storage interfaces for the 4 structures: AMB-tree, AMB*-tree, MPT and MBT.
FIGS. 2 and 3 show the latency of the Get() and Put() functions on state data as the number of accounts grows from 0.1M (1M = 1,000,000) to 1.5M. The results show that the AMB-tree and AMB*-tree clearly outperform the MPT and MBT. For example, with 0.8M account addresses the Get() and Put() latency of the MBT exceeds 4 and 8 μs respectively, an order of magnitude slower than the AMB-tree, because it uses LSM-tree-based key-value storage (RocksDB in the MBT, LevelDB in the MPT). The AMB-tree and AMB*-tree perform well for Get() and Put() because their disk-oriented design and balanced structure (the index layer is a Merkle B+ tree) avoid secondary reads and writes. Furthermore, the AMB*-tree performs better than the AMB-tree because of its buffer optimization.
FIG. 4 compares the latency of the Hist() operation of the 4 data organization and management structures on a data set with about 0.1M historical account state versions. To make the MBT support multiple versions, the version number of a state is appended to its account address to distinguish versions; multiple versions of an account state in the MPT are traced by looking up each version on all relevant snapshots. As shown in FIG. 4, the AMB-tree and AMB*-tree achieve lower latency for Hist() queries because they can load consecutive versions in bulk (the data layer is designed as an append-only skip list). In contrast, the Hist() latency of the MPT and MBT is proportional to the number of versions; when tracing 30 versions, the latencies of the MPT and MBT are 42 μs and 155 μs respectively.
FIG. 5 shows the system throughput of all data organization and management structures for different numbers of accounts. For the MPT and MBT, as the number of accounts increases, the transaction execution overhead gradually becomes the bottleneck; the throughput of the MBT-based system is about 2100, less than one third of the throughput with the AMB-tree and AMB*-tree.
FIG. 6 shows the total latency of all data organization and management structures under read/write ratios of 1:1 and 1:3. Each transaction may write a new account into the system, so the number of accounts grows as transactions are executed. As the number of transactions grows, the latency of the AMB-tree and AMB*-tree increases linearly at a small rate, and the MBT performs worst because every state update requires loading all states in a bucket and re-inserting them.
Fragment increase protocol performance analysis: as shown in fig. 7-11.
The invention divides each slice into 32 sub-slices, the new slice uses the function Sync () to synchronize the data of each sub-slice (the merkel B + tree is newly divided from the other slices), the function Sync () involves multiple rounds of Request (), each round requests the merkel B + tree of one sub-slice (serialized into 2M network packets). For clarity, when the fragment data is managed by using the MPT and MBT methods, the account address space is divided by using the same fragment division method (consistent hash algorithm) as that used for aggregating the merkel B + tree, and each fragment also includes multiple sub-fragments, and multiple sub-fragments are also aggregated by using the merkel tree (the same aggregation layer as that used for aggregating the merkel B + tree), and the difference is that each sub-fragment is managed by using the MPT or MBT.
Fig. 7 shows synchronization times of different data organization management structures in case of different data synchronization amounts. The performance of the AMB-tree and AMB-tree far exceeds that of the MPT and MBT because the merkel B + tree that aggregates the merkel B + tree index layer supports batch reading, and the MPT and MBT store based on key values, which have a large number of random reading operations, and have severe read-write amplification.
Fig. 8 shows the delay of the slicing implemented based on the Opr () function in the case of 3 data organization management structures where the number of blocks executed varies from 1 to 8000. Because the MPT generates a snapshot of the global state in each block, splitting the MPT requires splitting all snapshots, the splitting delay of the MPT also increases with the number of blocks, and when 8000 blocks, the delay of the MPT exceeds 40 milliseconds; when the number of accounts is 1.5M, the waiting time of the MBT exceeds 30s (the time is too long and is not shown in the figure). The delays of AMB-tree and AMB x-tree are almost constant, e.g. 0.4 μ s is needed for splitting within 27 milliseconds.
Fig. 9 compares the time cost of adding one fragment to different data organization management structures, where an MPT-line and an MBT-line represent the result of dividing the MPT and the MBT in the account address space by a modular allocation method, because the modular allocation method needs to re-divide the global state when the fragments are increased or decreased, a large amount of data migration occurs, and the time cost of adding fragments is large. In the AMB-tree and the AMB-tree, because each fragment is divided into a plurality of sub-fragments with discontinuous account addresses, when the fragment increases, only one of the sub-fragments of the fragment to be split is split, and the amount of data to be migrated is not large, so that the delay of the fragment increase of the AMB-tree and the AMB-tree is small (about 1.7s), and the new fragment can be immediately provided after being added, and the data does not need to be waited to complete the synchronization. MPT and MBT take more time than AMB-tree because the newly added pieces of MPT and MBT need to synchronize account state data first. As the data set grows in size, the latency of MPT or MBT can grow rapidly because almost all account states remap to another slice, involving migration of large amounts of data. When the data size is 2GB, the new slice duration of the MPT exceeds 600 s.
Fig. 10 shows the throughput of the system when a single fragment is added under each data organization and management structure, with a data volume of about 2 GB. In each test, a new fragment is added to the system at second 3. Under the AMB-tree and AMB*-tree the new fragment can start processing transactions very quickly, whereas under MPT- and MBT- the service of the whole system stops for more than 500 seconds because the global state must be redistributed. Under MPT and MBT, only the service of the newly added fragment stops, for 150 s and 300 s respectively, while the state data is synchronized. The throughput of the AMB-tree and AMB*-tree increases over time because more and more data is synchronized to the new fragment and can be processed locally without remote requests. In addition, the AMB*-tree outperforms the AMB-tree because its index layer reduces write overhead and improves read performance, each node caching recently updated data of the aggregation layer. Finally, MPT- has higher throughput than MPT because modular assignment spreads accounts more uniformly than consistent-hash space partitioning, so the load per fragment is more balanced in the experiments.
Fig. 11 shows the result of adding two fragments simultaneously, i.e., splitting two regions at the same time. Adding multiple fragments has no significant side effect on throughput compared with adding fragments one at a time; moreover, since there are two additional fragments, the amount of data each new fragment must synchronize is smaller than when adding a single fragment, so the synchronization time shortens to about 6 s.
The protection scope of the present invention is not limited to the above embodiments. Any variations and advantages conceivable to those skilled in the art without departing from the spirit and scope of the inventive concept are encompassed by the present invention, and the protection scope is defined by the appended claims.

Claims (7)

1. A federation chain-oriented intra-fragment data organization and management method, characterized in that it specifically comprises the following steps:
step 1: organizing and managing the account address states within a fragment by means of an aggregated Merkle B+ tree, wherein the aggregated Merkle B+ tree comprises an aggregation layer, an index layer and a data layer; step 1 specifically comprises the following sub-steps:
step 1-1: each fragment is responsible for a part of the whole address space; to ensure load balance among fragments and as little data migration as possible when fragments are added, each fragment is divided into a plurality of sub-fragments, where the address space within a sub-fragment is contiguous and the address spaces between sub-fragments are non-contiguous; the account states within each sub-fragment are organized and managed by a Merkle B+ tree, which serves as the index of the account states in that sub-fragment, and this part constitutes the index layer of the aggregated Merkle B+ tree;
step 1-2: taking the root nodes of the Merkle B+ trees of all sub-fragments in the fragment as leaf nodes, a Merkle tree generates a digest of all account states in the fragment, and this digest tree constitutes the aggregation layer of the aggregated Merkle B+ tree;
step 1-3: managing all historical data of a single account state with an append-only verifiable skip list, taking the number of the block containing the transaction that caused the state change as one version of the account state; maintaining multiple layers of links for each historical version of the account state, each link pointing to an earlier version through a hash pointer, where layer 0 links all version data and versions at intervals of 2^n in the layer-0 linked list are linked in order into the layer-n linked list; these linked lists constitute the data layer of the aggregated Merkle B+ tree;
step 2: performing state query operations with integrity proofs and multi-version data tracing over the aggregation layer, index layer and data layer of the aggregated Merkle B+ tree;
and step 3: when the aggregated Merkle B+ tree is being updated, providing services externally according to the snapshot generated before the update.
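For illustration only, the following Python sketch gives one possible reading of the three-layer structure recited in claim 1: an aggregation layer formed by a Merkle tree over the sub-fragment roots, a greatly simplified stand-in for the per-sub-fragment Merkle B+ tree index layer, and a data layer of append-only versions whose level-k link jumps 2^k versions back. All class and function names (AggregatedMerkleBPlusTree, SubFragmentIndex, merkle_root, and so on) are invented for the sketch and do not appear in the patent.

```python
import hashlib
from bisect import insort

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def merkle_root(leaves):
    """Aggregation layer: a Merkle tree whose leaves are the index-layer roots."""
    nodes = list(leaves) or [h(b"")]
    while len(nodes) > 1:
        if len(nodes) % 2:
            nodes.append(nodes[-1])
        nodes = [h(nodes[i], nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

class VersionNode:
    """Data layer: one historical version; links[k] points 2**k versions back."""
    def __init__(self, block_no, value, links):
        self.block_no, self.value, self.links = block_no, value, links

class SubFragmentIndex:
    """Index layer (greatly simplified): sorted keys plus a recomputed root hash,
    standing in for one sub-fragment's Merkle B+ tree."""
    def __init__(self):
        self.keys, self.heads = [], {}   # sorted addresses; address -> newest version

    def put(self, address, block_no, value):
        head = self.heads.get(address)
        links, prev, step = [], head, 1
        while prev is not None:          # derive 2**k-back links by walking level 0
            links.append(prev)
            for _ in range(step):
                prev = prev.links[0] if prev and prev.links else None
            step *= 2
        if head is None:
            insort(self.keys, address)
            links = [None]
        self.heads[address] = VersionNode(block_no, value, links)

    def root_hash(self):
        return merkle_root([h(k.encode(), self.heads[k].value) for k in self.keys])

class AggregatedMerkleBPlusTree:
    """One fragment: an aggregation layer over 32 sub-fragment index layers."""
    def __init__(self, num_sub=32):
        self.subs = [SubFragmentIndex() for _ in range(num_sub)]

    def put(self, address, block_no, value):
        sub = int.from_bytes(h(address.encode())[:8], "big") % len(self.subs)
        self.subs[sub].put(address, block_no, value)

    def root(self):
        return merkle_root([s.root_hash() for s in self.subs])

tree = AggregatedMerkleBPlusTree()
tree.put("acct-01", 5, b"balance=10")
tree.put("acct-01", 9, b"balance=7")
print(tree.root().hex())
```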
2. The federation chain-oriented intra-fragment data organization and management method according to claim 1, wherein in step 2, at the index layer, the hash pointers held by the nodes on the path from the top of the Merkle B+ tree to the leaf node satisfying the search condition serve as the integrity proof of the index layer; the search method is: starting from the root node of the sub-fragment containing the search key, finding the child node covering the search key and querying layer by layer in turn until a leaf node is reached, whereupon the hash values corresponding to all nodes on the search path constitute the integrity proof of the index layer.
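As a simplified illustration of the index-layer proof in claim 2 (a toy stand-in, not the patented Merkle B+ tree; a production proof would carry enough sibling information to recompute the root), the sketch below descends from a sub-fragment root to the leaf covering the search key and records the hash of every node on the path. All names are invented for the example.

```python
import hashlib
from bisect import bisect_right

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class BPlusNode:
    """Simplified node: a leaf stores (key, value) entries, an internal node
    stores routing keys and children; each node carries a hash of its contents."""
    def __init__(self, keys, children=None, values=None):
        self.keys, self.children, self.values = keys, children, values
        if children is None:
            self.hash = h(b"".join(k.encode() + v for k, v in zip(keys, values)))
        else:
            self.hash = h(b"".join(c.hash for c in children))

def search_with_proof(node, key):
    """Descend from the sub-fragment root to the leaf that may hold `key`;
    the hashes of all nodes on the path form the index-layer integrity proof."""
    proof = [node.hash]
    while node.children is not None:
        node = node.children[bisect_right(node.keys, key)]
        proof.append(node.hash)
    value = dict(zip(node.keys, node.values)).get(key)
    return value, proof

leaf_a = BPlusNode(["a1", "a3"], values=[b"v1", b"v3"])
leaf_b = BPlusNode(["a5", "a7"], values=[b"v5", b"v7"])
root = BPlusNode(["a5"], children=[leaf_a, leaf_b])
value, proof = search_with_proof(root, "a5")
print(value, [p.hex()[:8] for p in proof])   # b'v5' plus the root-to-leaf hashes
```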
3. The federation chain-oriented intra-fragment data organization and management method according to claim 1, wherein in step 2, at the aggregation layer, the sibling nodes of all nodes on the path from the aggregation-layer root node to the leaf node corresponding to the sub-fragment satisfying the lookup condition serve as the integrity proof of the aggregation layer; the search method is: starting from the root node of the sub-fragment where the search key is located, which is the root node of the index layer and serves as a leaf node of the aggregation layer, finding its sibling node, then finding the sibling node of its parent, and so on up to the root node, whereupon all sibling nodes on the path from the leaf node to the root node constitute the integrity proof of the aggregation layer.
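The aggregation-layer proof described in claim 3 is, in essence, a standard Merkle membership proof: one sibling hash per level on the path from the sub-fragment root (a leaf of the aggregation layer) up to the aggregation-layer root. The following generic Python sketch, with invented helper names and a duplicate-last-node rule for odd levels, illustrates the idea; it is not the patent's implementation.

```python
import hashlib

def h(left: bytes, right: bytes = b"") -> bytes:
    return hashlib.sha256(left + right).digest()

def build_levels(leaves):
    """All levels of the aggregation-layer Merkle tree, bottom up."""
    levels = [list(leaves)]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:
            cur = cur + [cur[-1]]           # duplicate the last node on odd levels
        levels.append([h(cur[i], cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels

def prove(leaves, index):
    """Integrity proof: one sibling hash per level on the leaf-to-root path."""
    proof = []
    for level in build_levels(leaves)[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index + 1 if index % 2 == 0 else index - 1
        proof.append((level[sibling], index % 2 == 0))   # (hash, sibling is right?)
        index //= 2
    return proof

def verify(leaf_hash, proof, root_hash):
    acc = leaf_hash
    for sibling, sibling_is_right in proof:
        acc = h(acc, sibling) if sibling_is_right else h(sibling, acc)
    return acc == root_hash

# leaves: the index-layer (sub-fragment Merkle B+ tree) root hashes
leaves = [h(bytes([i])) for i in range(32)]
root = build_levels(leaves)[-1][0]
assert verify(leaves[5], prove(leaves, 5), root)
```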
4. The federation chain-oriented intra-fragment data organization and management method according to claim 1, wherein in step 2, when a specified account state version is searched at the data layer, given an account and a block number, the data corresponding to the largest state version smaller than the block number is returned; the search method is: for the specified account, starting from the current state version, searching among all of its links for versions whose version numbers are greater than or equal to the given block number; if the smallest such version equals the given block number, that version is the requested version; if the smallest such version is greater than the given block number, the query recurses from that version according to the above steps, the recursion terminating when the block number lies between two adjacent version numbers; all state versions visited during the recursion constitute the integrity proof of the data layer.
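Claim 4's data-layer lookup can be read as a backward walk over the append-only skip list: starting from the newest version, follow the farthest-back link that does not pass the queried block number, and record every visited version as the proof. The Python sketch below is one such reading, with invented names (Version, append_version, find_version) and a non-strict bound (the latest version not exceeding the block number); it is not the patented procedure.

```python
class Version:
    """One data-layer version; links[k] points 2**k versions back (or None)."""
    def __init__(self, block_no, links):
        self.block_no, self.links = block_no, links

def append_version(head, block_no):
    """Append a new version; its 2**k-back links are derived by walking level 0."""
    links, prev, step = [], head, 1
    while prev is not None:
        links.append(prev)
        for _ in range(step):
            prev = prev.links[0] if prev and prev.links else None
        step *= 2
    return Version(block_no, links or [None])

def find_version(head, block_no):
    """Walk backward from the newest version to the latest version whose block
    number does not exceed block_no; every visited version number is the proof."""
    proof, node = [], head
    while node is not None:
        proof.append(node.block_no)
        if node.block_no <= block_no:
            return node, proof
        nxt = None
        for link in node.links:              # links go 1, 2, 4, ... versions back
            if link is not None and link.block_no >= block_no:
                nxt = link                   # farthest back that is still >= target
        node = nxt if nxt is not None else (node.links[0] if node.links else None)
    return None, proof

head = None
for b in (1, 4, 9, 16, 25, 36):              # block numbers of successive versions
    head = append_version(head, b)
version, proof = find_version(head, 20)
print(version.block_no, proof)               # 16 [36, 25, 16]
```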
5. The federation chain-oriented intra-fragment data organization and management method according to claim 1, wherein step 3 specifically comprises:
step 3-1: generating a snapshot of the newly produced aggregated Merkle B+ tree for every block generated; while the aggregated Merkle B+ tree is being updated, providing services externally according to the previously generated snapshot;
step 3-2: when updating the data of the aggregated Merkle B+ tree, performing the write-back operation once the verifiers inside the fragment reach consensus on the newly generated aggregated Merkle B+ tree.
6. The federation chain-oriented intra-fragment data organization and management method according to claim 5, wherein in step 3-1, each time a block is generated, a snapshot is generated for the new aggregated Merkle B+ tree; when transactions in the next block are executed, the state data of that snapshot is not updated in place, and services are provided externally based on the snapshot generated for the previous block; in the index layer and the aggregation layer, all dirty nodes of the aggregated Merkle B+ tree that need updating are copied and then updated.
7. The federation chain-oriented intra-fragment data organization and management method according to claim 5, wherein in step 3-2, when a new block arrives and the transactions within the block are executed, updates to the state data are involved, which in turn involve updates to the aggregated Merkle B+ tree; the nodes of the aggregated Merkle B+ tree that need updating are copied and then updated, and when the nodes within the fragment reach consensus on the block, the updated nodes are written back to the aggregated Merkle B+ tree.
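Claims 5 to 7 describe snapshot-based reads with copy-on-write updates that are written back only after intra-fragment consensus. The Python sketch below illustrates the general copy-on-write and write-back pattern under assumed, simplified node and consensus interfaces; every name in it is invented and it is not the patent's implementation.

```python
import copy

class TreeNode:
    """A simplified aggregated-tree node: routing keys, children, and a value."""
    def __init__(self, keys=None, children=None, value=None):
        self.keys = keys or []
        self.children = children or []
        self.value = value

def cow_update(root, child_path, new_value):
    """Copy-on-write: copy only the 'dirty' nodes on the update path; every
    untouched subtree is shared with the previous snapshot, so the old root
    keeps serving reads while the new root is being built."""
    new_root = copy.copy(root)
    old_node, new_node = root, new_root
    for idx in child_path:                    # indices of children along the path
        child_copy = copy.copy(old_node.children[idx])
        new_node.children = list(old_node.children)
        new_node.children[idx] = child_copy
        old_node, new_node = old_node.children[idx], child_copy
    new_node.value = new_value
    return new_root                           # candidate snapshot for the new block

def commit_block(snapshots, candidate, consensus_reached):
    """Write back only after the fragment's verifiers agree on the new tree;
    until then, external services read from the last agreed snapshot."""
    if consensus_reached:
        snapshots.append(candidate)
    return snapshots[-1]

leaf = TreeNode(value=b"old")
root = TreeNode(keys=["k"], children=[leaf])
snapshots = [root]
candidate = cow_update(root, [0], b"new")
serving = commit_block(snapshots, candidate, consensus_reached=True)
print(serving.children[0].value, root.children[0].value)   # b'new' b'old'
```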
CN202010176568.4A 2020-03-13 2020-03-13 Intra-fragment data organization and management method for alliance chain Active CN111324613B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010176568.4A CN111324613B (en) 2020-03-13 2020-03-13 Intra-fragment data organization and management method for alliance chain

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010176568.4A CN111324613B (en) 2020-03-13 2020-03-13 Intra-fragment data organization and management method for alliance chain

Publications (2)

Publication Number Publication Date
CN111324613A CN111324613A (en) 2020-06-23
CN111324613B true CN111324613B (en) 2021-03-26

Family

ID=71163797

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010176568.4A Active CN111324613B (en) 2020-03-13 2020-03-13 Intra-fragment data organization and management method for alliance chain

Country Status (1)

Country Link
CN (1) CN111324613B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111428275B (en) * 2020-03-13 2021-03-26 华东师范大学 Alliance chain-oriented service non-stop fragment increasing method
CN111782615B (en) * 2020-07-08 2021-05-18 杭州云链趣链数字科技有限公司 Block chain-based large file storage method and system and computer equipment
CN114175011A (en) * 2020-10-27 2022-03-11 支付宝(杭州)信息技术有限公司 Block chain system with efficient world state data structure
CN112100185B (en) * 2020-11-03 2021-04-30 深圳市穗彩科技开发有限公司 Indexing system and method for block chain data balance load
CN112269791B (en) * 2020-11-30 2024-04-05 上海特高信息技术有限公司 Block chain account book processing method
CN112579602B (en) * 2020-12-22 2023-06-09 杭州趣链科技有限公司 Multi-version data storage method, device, computer equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8799247B2 (en) * 2011-02-11 2014-08-05 Purdue Research Foundation System and methods for ensuring integrity, authenticity, indemnity, and assured provenance for untrusted, outsourced, or cloud databases
CN109450638A (en) * 2018-10-23 2019-03-08 国科赛思(北京)科技有限公司 Electronic component data management system and method based on block chain
CN110334154A (en) * 2019-06-28 2019-10-15 阿里巴巴集团控股有限公司 Based on the classification storage method and device of block chain, electronic equipment
CN110800005A (en) * 2017-05-25 2020-02-14 甲骨文国际公司 Split, license, distributed ledger
CN110808838A (en) * 2019-10-24 2020-02-18 华东师范大学 Alliance chain-oriented fragmentation method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107562775B (en) * 2017-07-14 2020-04-24 创新先进技术有限公司 Data processing method and device based on block chain
EP3759865B1 (en) * 2018-02-27 2024-04-03 Visa International Service Association High-throughput data integrity via trusted computing

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8799247B2 (en) * 2011-02-11 2014-08-05 Purdue Research Foundation System and methods for ensuring integrity, authenticity, indemnity, and assured provenance for untrusted, outsourced, or cloud databases
CN110800005A (en) * 2017-05-25 2020-02-14 甲骨文国际公司 Split, license, distributed ledger
CN109450638A (en) * 2018-10-23 2019-03-08 国科赛思(北京)科技有限公司 Electronic component data management system and method based on block chain
CN110334154A (en) * 2019-06-28 2019-10-15 阿里巴巴集团控股有限公司 Based on the classification storage method and device of block chain, electronic equipment
CN110808838A (en) * 2019-10-24 2020-02-18 华东师范大学 Alliance chain-oriented fragmentation method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SEBDB: Semantics Empowered BlockChain; Yanchao Zhu et al.; 2019 IEEE 35th International Conference on Data Engineering (ICDE); 20190606; full text *
Blockchain technology: architecture and progress; Shao Qifeng et al.; Chinese Journal of Computers; 20180531; full text *

Also Published As

Publication number Publication date
CN111324613A (en) 2020-06-23

Similar Documents

Publication Publication Date Title
CN111324613B (en) Intra-fragment data organization and management method for alliance chain
JP7410181B2 (en) Hybrid indexing methods, systems, and programs
CN111338766B (en) Transaction processing method and device, computer equipment and storage medium
CN111428275B (en) Alliance chain-oriented service non-stop fragment increasing method
US10338853B2 (en) Media aware distributed data layout
US7418544B2 (en) Method and system for log structured relational database objects
US10002148B2 (en) Memory-aware joins based in a database cluster
US9767131B2 (en) Hierarchical tablespace space management
AU2002312508B2 (en) Storage system having partitioned migratable metadata
US9922075B2 (en) Scalable distributed transaction processing system
US11093459B2 (en) Parallel and efficient technique for building and maintaining a main memory, CSR-based graph index in an RDBMS
US20160026684A1 (en) Framework for volatile memory query execution in a multi node cluster
CN109271343B (en) Data merging method and device applied to key value storage system
Tang et al. Deferred lightweight indexing for log-structured key-value stores
US10862736B2 (en) Object counts persistence for object stores
Qi S-store: A scalable data store towards permissioned blockchain sharding
CN117616411A (en) Method and system for processing database transactions in a distributed online transaction processing (OLTP) database
JPWO2004036432A1 (en) Database accelerator
CN116541427B (en) Data query method, device, equipment and storage medium
US20220043799A1 (en) Method, device, and computer program product for metadata comparison
CN115114294A (en) Self-adaption method and device of database storage mode and computer equipment
US11599516B1 (en) Scalable metadata index for a time-series database
US11514080B1 (en) Cross domain transactions
US11593306B1 (en) File defragmentation service
Luo et al. DynaHash: Efficient Data Rebalancing in Apache AsterixDB

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant