CN114239025A - Data processing method and device based on block chain

Info

Publication number
CN114239025A
Authority
CN
China
Prior art keywords
data
information
knowledge proof
zero knowledge
zero
Prior art date
Legal status
Pending
Application number
CN202111559912.9A
Other languages
Chinese (zh)
Inventor
冼祥斌
周禄
张开翔
范瑞彬
Current Assignee
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date
Filing date
Publication date
Application filed by WeBank Co Ltd filed Critical WeBank Co Ltd
Priority to CN202111559912.9A priority Critical patent/CN114239025A/en
Publication of CN114239025A publication Critical patent/CN114239025A/en
Priority to PCT/CN2022/101733 priority patent/WO2023115873A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/602 Providing cryptographic facilities or services
    • G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218 Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F 21/64 Protecting data integrity, e.g. using checksums, certificates or signatures
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/04 Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange

Abstract

The application provides a blockchain-based data processing method and device, comprising: receiving a data modification request sent by a client, where the data modification request comprises a first input data identifier and operation information; processing the first input data identifier and the operation information to obtain a log domain data block and a membership proof of the log domain data block; performing zero-knowledge proof processing on the membership proof of the log domain data block to obtain zero-knowledge proof information of the membership proof and a first public parameter; and generating a first transaction request according to the zero-knowledge proof information of the membership proof and the first public parameter and sending the first transaction request to the client, where the first transaction request is used to enable the client to verify the zero-knowledge proof information of the membership proof according to the first public parameter. When the replicated encrypted data is modified, the scheme does not need to repeat the replication encryption process; it only needs to pack the operation information into a log domain data block stored in the log domain, which reduces data processing time.

Description

Data processing method and device based on block chain
Technical Field
The present application relates to the field of financial technology (Fintech), and in particular, to a data processing method and apparatus based on a block chain.
Background
With the development of computer technology, more and more technologies are applied in the financial field, and the traditional financial industry is gradually changing to financial technology.
In addition to managing on-chain data through a consensus algorithm, each consensus node in the blockchain system also provides off-chain data replication storage. Generally, when a user needs to modify the replicated data stored on a consensus node, the user has to re-initiate a replication encryption request so that the modified data is replicated and encrypted again, which takes a long time and occupies a large amount of computing resources.
Disclosure of Invention
An embodiment of the present application provides a data processing method based on a block chain, where the method is applied to a consensus node in the block chain, and the method includes:
receiving a data modification request sent by a client, wherein the data modification request comprises a first input data identifier and operation information;
processing the first input data identifier and the operation information to obtain a log domain data block and a membership proof of the log domain data block;
performing zero-knowledge proof processing on the membership proof of the log domain data block to obtain zero-knowledge proof information of the membership proof and a first public parameter;
and generating a first transaction request according to the zero-knowledge proof information of the membership proof and the first public parameter, and sending the first transaction request to the client, wherein the first transaction request is used for enabling the client to verify the zero-knowledge proof information of the membership proof according to the first public parameter.
Another embodiment of the present application provides a data processing method based on a blockchain, where the method is used for a client, and the method includes:
acquiring a first input data identifier and operation information, and generating a data modification request according to the first input data identifier and the operation information;
sending the data modification request to a consensus node in the blockchain system, wherein the data modification request is used for enabling the consensus node to process the first input data identifier and the operation information to obtain a log domain data block and a membership proof of the log domain data block, perform zero-knowledge proof processing on the membership proof of the log domain data block to obtain zero-knowledge proof information of the membership proof and a first public parameter, and generate a first transaction request according to the zero-knowledge proof information of the membership proof and the first public parameter;
and receiving the first transaction request sent by the consensus node, and verifying the zero-knowledge proof information of the membership proof according to the first public parameter.
An embodiment of the present application provides a data processing method based on a block chain, where the method is applied to a proving node, and the method includes:
receiving the zero-knowledge proof information sent by each consensus node in a blockchain system and the state information of the consensus node at the time each piece of zero-knowledge proof information was generated;
performing recursive aggregation processing on each zero knowledge proof information and the state information of the consensus node when each zero knowledge proof information is generated to obtain root zero knowledge proof information and root state information;
generating a fourth transaction request according to the root zero knowledge proof information and the root state information;
sending a fourth transaction request to a consensus node in the blockchain system, wherein the fourth transaction request is used for enabling the blockchain system to store a root zero knowledge proof according to the root state information;
the zero-knowledge proof information comprises any one or more of: zero-knowledge proof information of membership proofs, zero-knowledge proof information of non-membership proofs, and zero-knowledge proof information of backup encrypted data.
An embodiment of the present application provides an electronic device, including: a processor, and a memory communicatively coupled to the processor; the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory to implement the blockchain-based data processing method described in the above embodiments.
An embodiment of the present application provides a computer-readable storage medium in which computer-executable instructions are stored; when executed by a processor, the computer-executable instructions implement the blockchain-based data processing method described in the foregoing embodiments.
The embodiments of the present application provide a blockchain-based data processing method and device. Two storage areas, a data domain and a log domain, are provided in the consensus node: the data domain stores the backup encrypted data, and the log domain stores the operation information applied to the backup encrypted data. When a user needs to modify the backed-up data, the operation information is stored in the log domain without executing the replication encryption algorithm again, which improves data processing efficiency and reduces resource consumption. The membership proof of the log domain data block is then verified with a zero-knowledge proof algorithm to generate zero-knowledge proof information; the client only needs to verify the zero-knowledge proof information, which simplifies the verification process. An accumulator is used to generate membership proofs for the log domain data blocks in the log domain, or non-membership proofs when log domain data blocks are deleted, and these proofs are processed into the corresponding zero-knowledge proof information, further simplifying verification. The accumulator also allows the user to modify data at any time: even if the user updates data frequently, membership or non-membership proofs can be produced efficiently in a short time, which provides data version control.
In addition, when the client stores data to the consensus node, the original input data is padded to a multiple of the boundary length and segmented, so that user data of any length can be stored. Zero-knowledge proof processing is performed on the backup encrypted data to obtain the zero-knowledge proof information of the backup encrypted data, so the client does not need to verify the Merkle proof information but only the zero-knowledge proof information, which simplifies verification. The generated zero-knowledge proofs are recursively aggregated with a recursive zero-knowledge proof algorithm, which greatly reduces the number of proofs, reduces the bandwidth pressure on the blockchain and the verification computation pressure, and increases the scalability of the system without reducing security.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 is a block diagram of a system according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the ZigZag reverse binary depth robust graph algorithm provided in an embodiment of the present application;
fig. 3 is a schematic flowchart of a data processing method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a storage area according to an embodiment of the present application;
fig. 5 is a schematic flowchart of a data processing method according to another embodiment of the present application;
FIG. 6 is a diagram illustrating the multi-layer binary depth robust graph algorithm according to an embodiment of the present application;
fig. 7 is a schematic flowchart of a data processing method according to another embodiment of the present application;
FIG. 8 is a schematic flow chart diagram illustrating a data processing method according to another embodiment of the present application;
FIG. 9 is a schematic flow chart illustrating a data processing method according to yet another embodiment of the present application;
FIG. 10 is a schematic diagram of a recursive aggregation algorithm according to yet another embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The parameters used in the examples of the present application will be described first.
λ: security parameter
t: each timing moment of the discrete timer
S_t: set of elements of the accumulator at time t
A_t: value of the accumulator at time t, i.e. the product of all elements in the set
w_x^t, u_x^t: membership proof and non-membership proof of an element x at time t
pp: public parameter
upmsg: information for updating proofs
Setup(λ) → pp, A_0: initialization, generating the public parameters
Add(A_t, x) → {A_{t+1}, upmsg}: adding an element and updating the accumulator
Del(A_t, x) → {A_{t+1}, upmsg}: deleting an element and updating the accumulator
MemWitCreate(A_t, S_t, x) → w_x^t: generating a membership proof for x
NonMemWitCreate(A_t, S_t, x) → u_x^t: generating a non-membership proof for x
MemWitUp(A_t, w_x^t, x, upmsg) → w_x^{t+1}: updating the membership proof of x
NonMemWitUp(A_t, u_x^t, x, upmsg) → u_x^{t+1}: updating the non-membership proof of x
VerMem(A_t, x, w_x^t) → {0, 1}: verifying the membership proof of x
VerNonMem(A_t, x, u_x^t) → {0, 1}: verifying the non-membership proof of x
pp: public parameter
d: in-degree of the nodes of the depth robust graph
D, D_i: the original data and the i-th data block of the original data, respectively
R, R_i: the replicated and encrypted data and its i-th data block
T: upper limit of the challenge response delay time
G, g: a group of unknown order and a generator of the group
H, H_t: all log data, and the data update log generated by the user at time t, respectively
FID, CID, BID, τ_D: identifiers of the data D, the consensus node and the encrypted backup data R, and the original label of a data block, respectively
l: length of the challenge sequence number set
l_d: number of layers of the multi-layer binary depth robust graph
l_m: depth of the Merkle tree
N: number of data blocks
aux: proof auxiliary data, comprising the Merkle hash tree of the data R
[l]: the set of integers {0, 1, ..., l-1} of length l
a || b: concatenation of the two strings a and b
Primes(λ): the set of primes less than 2^λ
H_ped(y) → x: the Pedersen hash function, with a 32-byte result
H_primes(y) → x ∈ Primes(λ): maps the input y to a prime x in the prime set
DSetup(λ, T) → pp, A_0: initialization of the dynamic replication proof mechanism
DReplicate(BID, τ_D, D) → R, aux: replication encryption of the original fixed-length data D
DExtract(BID, τ_D, R, aux) → D: decrypting the replicated and encrypted data R to restore the original data D
DPoll(N) → r: randomly choosing l challenge sequence numbers from an integer set of length N, where r = (r_1, ..., r_l)
DProve(pp, R, aux, BID, S_t, r) → π: generating a zero-knowledge proof
DVerify(pp, π, BID, r, A_t) → {0, 1}: verifying a zero-knowledge proof
Zero-knowledge proof algorithm: a zero-knowledge proof algorithm can be used to prove and verify the following statement: given a result verification function F and a public input x, there exists a secret input w satisfying F(x, w) = 1. The basic architecture of the zero-knowledge proof algorithm comprises three sub-algorithms:
A setup sub-algorithm, denoted ZK.Setup, which takes the function F and the security parameter λ as inputs and outputs a constant parameter set crs and a trapdoor td. The constant parameter set crs consists of two parts: the part used for proving is called the proving key crs_p, and the part used for verification is called the verification key crs_v.
A proving sub-algorithm, denoted ZK.Prover, which runs at the proving node; it takes the proving key crs_p, the public input x of the function F and the secret input w as inputs, and outputs the zero-knowledge proof information π.
A verification sub-algorithm, denoted ZK.Verifier, which runs at the verifying node; it takes the verification key crs_v, the public input x of the function F and the zero-knowledge proof information π as inputs, and outputs 0 to reject or 1 to accept. Any device may act as a verifying node and run the verification sub-algorithm.
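For orientation only, the following Python sketch mirrors this three-sub-algorithm call structure. All names (zk_setup, zk_prove, zk_verify, CRS) are illustrative, and the toy "proof" simply carries the witness, so it is not zero-knowledge; a real system would use an actual Plonk-style proof library.

```python
# Interface-only sketch of ZK.Setup / ZK.Prover / ZK.Verifier (NOT zero-knowledge).
from typing import Callable, NamedTuple, Tuple

class CRS(NamedTuple):
    crs_p: str   # proving key
    crs_v: str   # verification key

def zk_setup(F: Callable, lam: int) -> Tuple[CRS, str]:
    td = f"td:{lam}"                              # trapdoor (unused in this toy)
    return CRS(crs_p=f"crs_p:{lam}", crs_v=f"crs_v:{lam}"), td

def zk_prove(crs_p: str, F: Callable, x, w):
    assert F(x, w) == 1                           # the statement must hold
    return ("pi", w)                              # placeholder proof object

def zk_verify(crs_v: str, F: Callable, x, pi) -> int:
    tag, w = pi
    return 1 if tag == "pi" and F(x, w) == 1 else 0

# Example statement: "I know w such that w * w == x".
F = lambda x, w: 1 if w * w == x else 0
crs, _ = zk_setup(F, lam=128)
pi = zk_prove(crs.crs_p, F, x=49, w=7)
assert zk_verify(crs.crs_v, F, 49, pi) == 1
```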
As shown in fig. 1, a blockchain storage system 100 provided by an embodiment of the present application includes a plurality of consensus nodes 10 and a plurality of clients 20. In addition to managing on-chain data through a consensus algorithm, each consensus node 10 is further configured to provide off-chain data replication storage.
Data replication storage means that the client 20 requests d backup copies of an input data of size N bytes to be stored on the consensus node. The consensus node then has to provide storage space equal to the corresponding multiple of the data size, i.e. d × N bytes, and cannot cheat the client by keeping only N bytes of storage space. A Proof of Replication (PoRep) mechanism is one kind of data storage proof and is mainly applied to this data replication storage scenario. The PoRep mechanism combines proof of retrievability and proof of space, i.e. it provides both retrievability and space guarantees.
For data replication storage, the general data processing procedure is as follows: after the client and the consensus node conclude a storage transaction, the client transmits the input data to the consensus node off-chain, and the consensus node uses the ZigZag reverse binary Depth Robust Graph (ZigZag Depth-Robust-Graphs, ZigZag DRGs for short) algorithm to replicate and encrypt the input data according to the backup requirement sent by the client, obtaining encrypted data blocks and producing a replication proof.
A bipartite graph is a graph whose vertex set can be partitioned into two mutually disjoint subsets such that the two endpoints of every edge belong to the two different subsets, and vertices within the same subset are not adjacent. The ZigZag reverse binary Depth Robust Graph (ZigZag DRGs) is an extension based on the principles of the Depth-Robust Graph (DRG for short) and the bipartite graph, and its operation process is shown in FIG. 2.
The ZigZag reverse binary depth robust graph is a multi-layer graph structure in which each layer is an independent depth robust graph DRG. The consensus node encrypts the user's input data with the ZigZag DRGs algorithm to obtain encrypted data; if d backups need to be stored, the ZigZag DRGs algorithm is executed d times on the input data to obtain several different copies of encrypted data. The encrypted data blocks are organized and managed with a Merkle tree: each encrypted data block is a leaf of the Merkle tree, and the root of the Merkle tree is submitted to the blockchain as a storage commitment. A list of sequence numbers of encrypted data blocks to be checked is randomly generated each time, and the consensus node produces, one by one, the Merkle proofs of the encrypted data blocks with the corresponding sequence numbers. In addition, to ensure that the consensus nodes keep storing the users' data, each consensus node needs to make proofs of spacetime over all stored data at regular intervals. A proof of spacetime is similar to a replication proof except that it only needs to show continued storage of the encrypted data blocks, and therefore only includes the Merkle proofs for verifying the encrypted data blocks. However, the above data processing procedure has the following problems:
First, the data processing procedure cannot update data efficiently: even if the data changes by one bit, the consensus node needs to re-execute the replication encryption process, which consumes a lot of time and computation and causes frequent rewrites of the storage disk.
Secondly, the verifying node needs to verify the Merkle proofs directly; if the number of participating consensus nodes grows and the amount of stored data grows, verification becomes inefficient. The verifying node also needs to verify the relation between the input data and the backup encrypted data, i.e. the correctness of the encryption computation, as well as the proofs of spacetime over an enormous amount of data. That is, the data processing procedure scales poorly.
Thirdly, the data processing procedure requires the user to conduct storage transactions at a fixed data length: replication encryption can only start once the user data reaches a certain length, for example when a sector with a capacity of 32G is full, because the replication encryption algorithm splits the input data into a fixed number of fixed-length blocks. If a user adds a small amount of data (much less than 32G), that small amount cannot be separately replication-encrypted, so its correct storage cannot be guaranteed.
In order to solve the above problems, the present application first improves the data storage structure: two storage areas are provided in the consensus node, a data domain and a log domain. The data domain is denoted the R domain and the log domain the H domain. The R domain stores the backup encrypted data, i.e. the data obtained by replicating and encrypting the input data. The H domain stores the log domain data blocks that modify the backup encrypted data; after the newly added data stored in the H domain reaches the encryption boundary length, it is merged with the input data, and replication encryption is performed again to obtain new backup encrypted data. With this arrangement, the replication encryption algorithm does not need to be executed again every time the input data is modified, which improves data processing efficiency and reduces resource consumption. In addition, a cryptographic accumulator (hereinafter, accumulator) is used to generate a membership proof for each log domain data block in the log domain. Since the accumulator can efficiently produce membership proofs for dynamically changing log data, the scheme is suitable both for users who entrust unchanging data for long-term storage and for users who update their data frequently. A structural sketch of this two-domain layout is given below.
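The following Python sketch models the R-domain / H-domain layout on a consensus node; the class and field names (ConsensusNodeStorage, LogBlock, etc.) are illustrative and are not taken from the patent text.

```python
# Minimal sketch of the R-domain / H-domain storage layout described above.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class LogBlock:
    fid: str                 # identifier of the input data the operations apply to
    operations: List[dict]   # e.g. {"op": "modify", "index": k, "content": b"..."}
    commit_time: int         # time t at which the client submitted the operation set

@dataclass
class ConsensusNodeStorage:
    # R domain: one entry per backup, keyed by the backup identifier BID
    data_domain: Dict[str, List[bytes]] = field(default_factory=dict)
    # H domain: shared log blocks, keyed by the input data identifier FID
    log_domain: Dict[str, List[LogBlock]] = field(default_factory=dict)

    def append_log_block(self, block: LogBlock) -> None:
        """Record a modification without re-running replication encryption."""
        self.log_domain.setdefault(block.fid, []).append(block)
```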
Secondly, zero-knowledge proof information is adopted for the replication proof: the verification computation of the Merkle proofs and the verification computation of encryption correctness are built into a single circuit, and the consensus node generates one zero-knowledge proof. A verifier or the user can verify the zero-knowledge proof with the public parameters, thereby verifying both the encryption correctness of the input data and the Merkle proofs of the encrypted data blocks, which simplifies verification. The public input of the zero-knowledge proof algorithm only contains the public parameters and the identifier of the corresponding data, so no data is leaked and the reliability of the verification is guaranteed.
Accordingly, for the proof of spacetime, a zero-knowledge proof algorithm only needs to be applied to the verification computation of the Merkle proofs of the encrypted data blocks and of the membership proofs of the log data blocks to generate zero-knowledge proof information. That is, each candidate data item in the consensus node corresponds to one piece of zero-knowledge proof information, and the proof of spacetime is formed by the zero-knowledge proof information of the multiple candidate data items in the consensus node. The consensus node can also recursively aggregate all the zero-knowledge proofs in the proof of spacetime to obtain a root proof and store the root proof on the blockchain through the consensus algorithm; the final proof of spacetime then only contains the root proof, but its validity is equivalent to the zero-knowledge proofs verifying each backup encrypted data, which simplifies verification of the proof of spacetime.
Correspondingly, a recursion node 30 is provided in the present application. The recursion node 30 communicates with each consensus node in the blockchain system, collects the zero-knowledge proof information produced by all consensus nodes, recursively aggregates it into a root proof, and stores the root proof on the blockchain through the consensus algorithm, further simplifying verification.
Finally, when the input data is replicated and encrypted, input data of arbitrary length is first padded to a multiple of the encryption boundary length and then split and replication-encrypted to obtain the backup encrypted data. This makes it possible to replication-encrypt input data of any length.
As shown in fig. 3, an embodiment of the present application provides a data processing method based on a block chain, where the data processing method is applied to a block chain system, and the data processing method specifically includes the following steps:
S101, the client obtains a first input data identifier and operation information, and generates a data modification request according to the first input data identifier and the operation information.
In this step, the data storage structure is described first: the consensus node is provided with two storage areas, a data domain and a log domain. As shown in FIG. 4, the data domain is labeled the R domain and the log domain the H domain. If a client requests d backups of its data, then because each encrypted copy is different, the consensus node stores in the R domain d pieces of backup encrypted data of the same size but different contents, which all share the log data in the same H domain. That is, the H domain is only related to the identifier of the input data (File Identity, FID), while the R domain is related to the identifier (BID) of each piece of backup encrypted data of the input data.
In the R domain, the backup encrypted data obtained by replicating and encrypting the input data D is stored as individual data blocks; because the Pedersen hash algorithm is used, the length of each data block is also 32 bytes. These blocks are managed by building a Merkle hash tree, which generates Merkle proof information. In order to be able to prove the retrievability of the data quickly later, the R domain should also store all the node data of the whole Merkle tree, or store only part of the node data and compute the rest when the Merkle proof information needs to be generated. Since the Merkle tree is a complete tree, the number of Merkle tree nodes is 2N − 1 for N leaf data blocks.
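To make the Merkle management of the R domain concrete, the following Python sketch builds a Merkle tree over fixed-size blocks and produces and checks a Merkle proof; SHA-256 is used here as a stand-in for the Pedersen hash, and the function names are illustrative.

```python
# Merkle tree over fixed-size blocks, with proof generation and verification.
# SHA-256 stands in for the Pedersen hash H_ped used in the text.
import hashlib
from typing import List, Tuple

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks: List[bytes]) -> List[List[bytes]]:
    """Return all levels, leaves first; assumes len(blocks) is a power of two."""
    levels = [[h(b) for b in blocks]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def merkle_proof(levels: List[List[bytes]], index: int) -> List[Tuple[bytes, bool]]:
    """Sibling hashes from leaf to root; the bool says whether the sibling is on the right."""
    proof = []
    for level in levels[:-1]:
        sibling = index ^ 1
        proof.append((level[sibling], sibling > index))
        index //= 2
    return proof

def verify_proof(leaf: bytes, proof, root: bytes) -> bool:
    node = h(leaf)
    for sibling, sibling_is_right in proof:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node == root

blocks = [bytes([i]) * 32 for i in range(8)]      # 8 toy 32-byte blocks
levels = build_tree(blocks)
root = levels[-1][0]
assert verify_proof(blocks[3], merkle_proof(levels, 3), root)
```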
In the H domain, the log domain data blocks recording the client's operations on the backup encrypted data are stored, also in the form of data blocks, but their length is not limited. One data block is the set of operations submitted by the client at one time, containing the sequence numbers of the data blocks of the input data involved and the corresponding operation contents, for example: modify the content of the data block with sequence number k, delete the data block with sequence number k, or add a new data block. A log domain data block also records the time at which the operations were submitted.
The client obtains the identifier FID of the input data to be modified, marks it as the first input data identifier, and obtains the operation information for the input data D. The operation information includes: modifying the content of the data block with sequence number k, deleting the data block with sequence number k, appending a new data block at the end, or inserting a data block between sequence numbers k and k + 1. A data modification request is generated from the first input data identifier and the operation information.
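The following Python sketch shows how such a request could be assembled on the client side; the operation names and the request structure are illustrative assumptions, not the patent's actual message format.

```python
# Illustrative client-side construction of a data modification request.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Operation:
    kind: str                      # "modify" | "delete" | "append" | "insert"
    index: Optional[int] = None    # sequence number k, where applicable
    content: Optional[bytes] = None

@dataclass
class DataModificationRequest:
    fid: str                       # first input data identifier
    operations: List[Operation]

request = DataModificationRequest(
    fid="FID-example",
    operations=[
        Operation(kind="modify", index=7, content=b"new block content"),
        Operation(kind="delete", index=12),
        Operation(kind="insert", index=3, content=b"inserted between 3 and 4"),
    ],
)
```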
S102, the consensus node receives a data modification request sent by the client.
In this step, the consensus node may be any one of the consensus nodes in the block chain.
S103, the consensus node processes the first input data identifier and the operation information in the data modification request to obtain the log domain data block and the member certificate of the log domain data block.
In this step, the consensus node parses the first input data identifier and the operation information from the data modification request, packs them into a log domain data block, and generates a membership proof of the log domain data block in the log domain (H domain). The membership proof is used to prove that the consensus node has modified the input data according to the operation information.
An accumulator is similar to a vector commitment: it can produce membership proofs, i.e. prove that an element belongs to a set, without binding elements to positions, and it can prove that multiple elements belong to the set at the same time. The accumulator designed by Dan Boneh et al. modifies the construction conditions of the original accumulator and uses short non-interactive proofs to greatly improve verification efficiency; it can also produce non-membership proofs, i.e. prove that an element does not belong to the set, which is more flexible and simpler. Universal accumulators are dynamic accumulators: the proofs can be updated dynamically when elements are added or removed. An accumulator is therefore used to generate a membership proof for each log domain data block in the log domain.
In one embodiment, the H domain also stores the prime number vector S_t of the accumulator at time t, which is formed by mapping every log domain data block at time t to a prime number with a prime mapping function, denoted H_primes. The scheme uses the prime mapping function designed by Fouque and Tibouchi, which completes the mapping in O(λ) time, where O(λ) denotes time linear in λ.
The value A_t of the accumulator is obtained by processing the elements of the prime number vector S_t at time t. The accumulator value A_t and the prime number vector S_t are used to generate membership proofs for the log domain data blocks in the log domain.
Obtaining the membership proof of the log domain data block specifically includes: processing the elements of the accumulator at the first update time according to the accumulator's membership proof generation method to obtain the membership proof of the log domain data block.
For example: the consensus node packs the first input data identifier and the operation information into one or more log domain data blocks, adds the log domain data blocks to the H domain, and updates the prime number vector of the accumulator; that is, the accumulator updates its element set of time t − 1 to obtain the prime number vector S_t of the first update time t, and obtains from S_t the value A_t of the accumulator at the first update time. It then executes the accumulator's membership proof generation method at the adding time t for the added log domain data blocks, producing a membership proof for the prime corresponding to each newly added log data block in the accumulator's prime number vector.
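A toy RSA-accumulator sketch of this add-and-prove flow is given below in Python; the modulus, the hash-to-prime routine and all names are illustrative stand-ins (a real deployment would use a properly generated group of unknown order and the Fouque-Tibouchi mapping mentioned above).

```python
# Toy RSA accumulator: map log blocks to primes, accumulate them, and make a
# membership witness. Parameters are deliberately tiny and NOT secure.
import hashlib
from sympy import isprime

N_MOD = 7919 * 7927          # tiny toy modulus; real systems use a large unknown-order group
G = 3                        # generator stand-in

def hash_to_prime(data: bytes) -> int:
    """Stand-in for H_primes: hash, then search upward for a prime."""
    x = int.from_bytes(hashlib.sha256(data).digest()[:4], "big") | 1
    while not isprime(x):
        x += 2
    return x

def accumulate(primes):
    """A_t = g^(product of all primes in S_t) mod N."""
    acc = G
    for p in primes:
        acc = pow(acc, p, N_MOD)
    return acc

def membership_witness(primes, x):
    """w_x = g^(product of all primes except x) mod N, so that w_x^x == A_t."""
    return accumulate([p for p in primes if p != x])

log_blocks = [b"log-block-1", b"log-block-2", b"log-block-3"]
S_t = [hash_to_prime(b) for b in log_blocks]       # prime vector of the H domain
A_t = accumulate(S_t)                              # accumulator value at time t
x = S_t[1]                                         # prime of a newly added log block
w_x = membership_witness(S_t, x)
assert pow(w_x, x, N_MOD) == A_t                   # membership check: w_x^x == A_t
```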
S104, the consensus node performs zero-knowledge proof processing on the membership proof of the log domain data block to obtain zero-knowledge proof information of the membership proof and a first public parameter.
In this step, a Plonk-based zero-knowledge proof algorithm for verifying membership proofs of log domain data blocks is first constructed:
An initialization sub-algorithm, with the expression DSetup(λ, T) → pp, A_0, mainly used to initialize the zero-knowledge proof algorithm and the accumulator. It takes the security parameter λ and the response delay parameter T as inputs and outputs the global public parameter pp and the initial element set A_0 of the accumulator, where pp = (crs, G, l); G denotes a group of unknown order, g denotes a generator of that group, and crs denotes the constant parameter set.
The security parameter lambda and the response delay parameter T are set according to requirements and stored in the blockchain, and all the consensus nodes and the recursion nodes can use the two parameters to execute an initialization sub-algorithm to obtain the same global common parameter pp and an initial value of the accumulator.
A proving sub-algorithm, which generates the zero-knowledge proof information π_w of a membership proof of a log domain data block, or the zero-knowledge proof information π_u of a non-membership proof. The verifier needs to check the correctness of the membership proof or non-membership proof of a log domain data block; this correctness check is itself a computation and can therefore be carried out with the proving sub-algorithm of the zero-knowledge proof algorithm. When the membership or non-membership proof of a log domain data block in the H domain needs to be verified, that membership or non-membership proof is used as the secret input to generate the corresponding zero-knowledge proof information.
The correctness verification of a membership proof of the accumulator is shown in formula (1) and formula (2), and the correctness verification of a non-membership proof of the accumulator is shown in formula (3) and formula (4):
w_x^t = g^(∏_{s∈S_t, s≠x} s) (1)
(w_x^t)^x = A_t (2)
u_x^t = (a, g^b), where a·(∏_{s∈S_t} s) + b·x = 1 (3)
A_t^a·(g^b)^x = g (4)
wherein formula (1) multiplies together all members of the log prime number vector S_t except the member x to be proved and raises the generator g of the group G to that product; the result is an element of the group G, namely the membership proof. Formula (2) checks whether exponentiating the membership proof by x yields the accumulator value. Formula (3) states that the non-membership proof consists of a and g^b, where the two values a and b are obtained from Bézout's identity. Formula (4) checks whether the result of computing A_t^a·(g^b)^x is the generator g of the group G.
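Continuing the toy accumulator sketch given above, the following Python fragment checks formulas (2) and (4); it reuses the names from that sketch, relies on Python 3.8+ three-argument pow for the modular inverse when a Bézout coefficient is negative, and all parameters remain illustrative.

```python
# Toy verification of membership (formula (2)) and non-membership (formulas (3)-(4)).
# Reuses N_MOD, G, hash_to_prime, S_t, A_t, x and w_x from the sketch above.
from math import gcd, prod

def ext_gcd(a, b):
    """Extended Euclid: returns (g, s, t) with s*a + t*b == g."""
    if b == 0:
        return a, 1, 0
    g, s, t = ext_gcd(b, a % b)
    return g, t, s - (a // b) * t

def non_membership_witness(primes, y):
    """(a, B) with a*prod(primes) + b*y == 1 and B = g^b, for y not in the set."""
    s_star = prod(primes)
    g_, a, b = ext_gcd(s_star, y)
    assert g_ == 1, "y must be coprime to the accumulated product"
    return a, pow(G, b, N_MOD)

def verify_membership(acc, y, witness):
    return pow(witness, y, N_MOD) == acc                            # formula (2)

def verify_non_membership(acc, y, witness):
    a, B = witness
    return (pow(acc, a, N_MOD) * pow(B, y, N_MOD)) % N_MOD == G     # formula (4)

assert verify_membership(A_t, x, w_x)
y = hash_to_prime(b"a block that was never added")
if y not in S_t and gcd(y, prod(S_t)) == 1:
    assert verify_non_membership(A_t, y, non_membership_witness(S_t, y))
```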
A verification sub-algorithm, expressed as DVerify(pp, π_w, A_t), used to verify the correctness of the zero-knowledge proof information of the membership proof; an output of 0 means the zero-knowledge proof information is rejected and an output of 1 means it is accepted.
The membership proof of the log domain data block is proved with the proving sub-algorithm of the above zero-knowledge proof algorithm to obtain the zero-knowledge proof information of the membership proof and the first public parameter. The first public parameter is used by the verifier to verify the zero-knowledge proof information of the membership proof, and the prover executes the proving sub-algorithm of the zero-knowledge proof algorithm to output the zero-knowledge proof information of the membership proof and the first public parameter.
In one embodiment, after the membership proof of the log domain data block has been obtained by executing the accumulator's membership proof algorithm, the membership proof of the log domain data block, the element set of the accumulator at the first update time, and the value of the accumulator at the first update time are processed according to the global public parameters and the proving sub-algorithm to obtain the zero-knowledge proof information of the membership proof and the first public parameter.
For example: after the consensus node has made the membership proof for the prime corresponding to the newly added log data block in the accumulator's prime number vector, it executes the proving sub-algorithm with the global public parameter pp, the prime number vector S_t of the first update time t, the accumulator value A_t at the first update time and the membership proof of the log domain data block as inputs, and makes the zero-knowledge proof π_w of the correctness of the membership proof.
S105, the consensus node generates a first transaction request according to the zero-knowledge proof information of the membership proof and the first public parameter.
In this step, the first public parameter is the value A_t of the accumulator at the first update time. The consensus node packs the zero-knowledge proof information π_w of the membership proof and the first public parameter A_t into transaction content and sends it to the client.
S106, the consensus node sends the first transaction request to the client.
S107, the client verifies the zero-knowledge proof information of the membership proof according to the first public parameter.
In this step, the client extracts the zero-knowledge proof information π_w of the membership proof and the first public parameter A_t from the first transaction request, executes the verification sub-algorithm DVerify(pp, π_w, A_t), and determines the verification result from the output of the verification sub-algorithm.
In the above technical solution, a data domain and a log domain are provided in the consensus node, and the log domain stores the log domain data blocks that modify the backup encrypted data. With this arrangement, the replication encryption algorithm does not need to be executed again when the input data is modified, which improves data processing efficiency and reduces resource consumption. In addition, the accumulator generates a membership proof for each log domain data block in the log domain; since the accumulator can efficiently produce membership proofs for dynamically changing log data, the scheme suits both users who entrust unchanging data for long-term storage and users who update data frequently, giving it a wide range of application. The membership proof of each element in the accumulator is processed into zero-knowledge proof information, and the client only needs to verify that zero-knowledge proof information to confirm the encrypted storage of the log domain data blocks, which simplifies the verification process.
As shown in fig. 5, an embodiment of the present application provides a data processing method based on a block chain, where the data processing method is applied to a block chain system, and the data processing method specifically includes the following steps:
S201, the client acquires input data and generates a data storage request according to the input data.
In this step, as one implementation, after the user has concluded a data storage transaction with a certain consensus node, the client obtains the input data and generates a data storage request from it. As another implementation, when the newly added data in the log domain data blocks of the log domain reaches the replication encryption boundary length, the newly added data is merged with the input data corresponding to those log domain data blocks to generate updated input data. With this mechanism for updating the input data, after the user's modification operations have been converted into log domain data blocks and stored in the log domain, the replication encryption algorithm is eventually executed again on the updated input data.
S202, the consensus node receives a data storage request sent by the client.
S203, the consensus node performs multiple-length padding on the input data according to the preset replication encryption boundary length to obtain the processed input data, and splits the processed input data into a plurality of data blocks.
In this step, after receiving the data storage request, the consensus node parses the input data from the data storage request, generates the identifier FID of the input data, and uploads the identifier FID of the input data to the blockchain. The consensus node also performs multiple-length padding on the input data according to the preset replication encryption boundary length to obtain the processed input data; the padding makes the length of the processed input data a multiple of the replication encryption boundary length, and the padding blocks are invalid data.
After the multiple-length padding has been applied to the input data, the processed input data is split into a plurality of data blocks.
Assume the length of the input data D is 31G and the replication encryption boundary length is 32G; 0 bits are appended so that the overall length reaches an integral multiple of the replication encryption boundary length, i.e. the length of the input data D reaches 32G. The input data is then split into 2^10 data blocks of length 32 bytes.
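A small Python sketch of this padding-and-splitting step follows; the boundary and block sizes are scaled-down illustrative values rather than the 32G/32-byte figures of the example.

```python
# Multiple-length padding followed by fixed-size splitting (toy sizes).
def pad_and_split(data: bytes, boundary_len: int, block_len: int) -> list:
    """Pad data with zero bytes up to a multiple of boundary_len, then split it
    into blocks of block_len bytes (boundary_len must be a multiple of block_len)."""
    remainder = len(data) % boundary_len
    if remainder:
        data = data + b"\x00" * (boundary_len - remainder)  # padding blocks are invalid data
    return [data[i:i + block_len] for i in range(0, len(data), block_len)]

blocks = pad_and_split(b"x" * 100, boundary_len=128, block_len=32)
assert len(blocks) == 4 and all(len(b) == 32 for b in blocks)
```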
S204, the consensus node performs copying encryption processing on the plurality of data blocks to obtain backup encrypted data and proving auxiliary data corresponding to the backup encrypted data.
In this step, the consensus node uses the replication encryption algorithm to replicate and encrypt the plurality of data blocks, obtaining the backup encrypted data and the proof auxiliary data corresponding to the backup encrypted data. The proof auxiliary data is used to generate the zero-knowledge proof information of the correctness of the encryption process and of the Merkle proofs of the encrypted data blocks.
The multi-layer binary Depth Robust Graph (Stacked Depth-Robust-Graphs, Stacked DRGs for short) is also an extension of the depth robust graph and the bipartite graph, as shown in FIG. 6. Each layer of the multi-layer binary depth robust graph is a depth robust graph, and besides the depth-robust-graph dependencies within a layer, the nodes also have bipartite expansion dependencies between layers. Only the last layer of the multi-layer binary depth robust graph encrypts the original data; the computations of the preceding layers produce the label data required to encrypt each node of the last layer, and the computation of the label data also follows the dependencies between nodes. Finally, the label data on which each node depends is used together with the original data of that node for encryption.
In one embodiment, when the plurality of data blocks are replication-encrypted, the multi-layer binary depth robust graph algorithm processes the initial label data τ_D of each data block to obtain the label data of each data block at layer l_d − 1; each data block is then encrypted according to its label data at layer l_d − 1 to obtain the backup encrypted data, and the Merkle proof of the backup encrypted data is used as the proof auxiliary data, where l_d denotes the total number of layers of the multi-layer binary depth robust graph.
For example: the 2^10 data blocks of 32 bytes obtained by the splitting in S203 are used as the nodes of the last layer of the multi-layer binary depth robust graph, the nodes of the first layer are the initial labels τ_D of each data block, and the first l_d − 1 layers compute the value of each node as label data according to the node dependencies. Since the label data of each node is also 32 bytes long, the number of nodes in each layer remains unchanged, and the split input data D is encrypted at the last layer.
In the above technical solution, data D of arbitrary length is padded to a multiple of the replication encryption boundary length, split, and then run through the replication encryption algorithm. With the ZigZag reverse binary depth robust graph, every layer has to encrypt the data, the encryption process is complex, a verifiable delay function is used as the data encryption function, and the overall replication encryption time is very long; moreover, if a malicious party found a fast replication encryption method, the security of the whole mechanism would be reduced. The present scheme therefore adopts the multi-layer binary depth robust graph, which does not need to encrypt the data at every layer and shortens the encryption time. In addition, all the label data must be computed layer by layer before the data of the last layer is encrypted or decrypted, so the computation cannot be parallelized and must proceed node by node from top to bottom and left to right, which makes the data encryption process highly reliable.
S205, the consensus node performs zero knowledge proof processing on the backup encrypted data according to the proof auxiliary data to obtain zero knowledge proof information and a second public parameter of the backup encrypted data.
In this step, a Plonk-based zero-knowledge proof algorithm for verifying backup encrypted data is constructed:
An initialization sub-algorithm, with the expression DSetup(λ, T) → pp, A_0, mainly used to initialize the zero-knowledge proof algorithm, the replication encryption algorithm and the accumulator. It takes the security parameter λ and the response delay parameter T as inputs and outputs the global public parameter pp and the initial element set A_0 of the accumulator, where pp = (crs, G, l); G denotes a group of unknown order, g denotes a generator of the group of unknown order, and crs denotes the constant parameter set. The response delay parameter T is used to set the number of layers l_d of the multi-layer binary depth robust graph and the parameters of the depth robust graph of each layer; the number of layers and the per-layer parameters determine the replication encryption computation time, so that the replication encryption computation time can be configured.
A replication encryption sub-algorithm, expressed as DReplicate(BID, τ_D, D) → R, aux, constructed on the basis of the multi-layer binary depth robust graph and used to replicate and encrypt the input data D. It takes the input data D, the identifier BID of each piece of backup encrypted data and the initial label τ_D of the split data blocks as inputs; each execution generates one piece of backup encrypted data R and the corresponding proof auxiliary data aux, and if several backups are required, the replication encryption algorithm is executed repeatedly with the identifiers BID of the different backups.
As shown in fig. 6, the replication encryption algorithm is described in detail below. First, the identifier of the backup encrypted data is computed according to formula (5):
BID = H_ped(FID || CID || n) (5)
wherein BID denotes the identifier of the backup encrypted data, n denotes the serial number of the backup copy and is an integer, FID denotes the identifier of the input data, CID denotes the identifier of the consensus node, a || b denotes the concatenation of the two strings a and b, and H_ped(y) denotes applying the Pedersen hash function to y, which outputs 32 bytes.
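As an illustration of formula (5), the fragment below derives a BID for each backup copy; SHA-256 again stands in for the Pedersen hash, and the identifier values are made up.

```python
# Illustrative derivation of backup identifiers, one per backup copy (formula (5)).
import hashlib

def h_ped(data: bytes) -> bytes:
    """Stand-in for the Pedersen hash H_ped (32-byte output)."""
    return hashlib.sha256(data).digest()

FID = b"FID-example"       # identifier of the input data
CID = b"CID-node-7"        # identifier of the consensus node
bids = [h_ped(FID + b"||" + CID + b"||" + str(n).encode()) for n in range(3)]
assert len(set(bids)) == 3   # each backup copy gets a distinct BID
```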
Starting from the second layer, the label data of each node is computed layer by layer. For a node i in any layer, the parent nodes of the i-th node are found according to the depth robust graph; the specific computation is expressed by formula (6):
Parent(N, i, d) → (v_1, v_2, ..., v_d) (6)
wherein Parent(N, i, d) finds the dependency nodes of the i-th node, i.e. its parent nodes, according to the depth robust graph, including parent nodes in the same layer and parent nodes in the layer above, and v_1, v_2, ..., v_d denote the label data of the parent nodes of node i.
Then the label data of the parent nodes of node i is concatenated with the identifier of the backup encrypted data and hashed to obtain the label data of the node. The specific computation is expressed by formula (7):
H_ped(v_1 || ... || v_d || BID) → v_i (7)
wherein v_i denotes the label data of node i.
Starting from the second layer, the label data of each node in layer l_d − 1 can be computed recursively according to formulas (5), (6) and (7). Each split data block is then combined, according to its dependency relations, with the label data of the corresponding nodes in layer l_d − 1 by big-number addition to obtain the encrypted data block R_i. More specifically, the parent nodes of each data block are computed according to formula (6); these parents comprise the label data of nodes in layer l_d − 1 and data blocks in the same layer, and formula (8) is then used to obtain the encrypted data block:
Bigadd(v_1, v_2, ..., v_d, D_i) → R_i (8)
The backup encrypted data is R = (R_1, R_2, ..., R_N), and a Merkle tree of the backup encrypted data R is constructed.
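To tie formulas (6) to (8) together, here is a compact Python sketch of the layered labeling and last-layer encoding; the parent-selection rule, the use of SHA-256 in place of the Pedersen hash, the modulus for the big-number addition, and the simplification of adding one label per block are all illustrative assumptions rather than the patent's actual depth robust graph.

```python
# Illustrative layered labeling (formulas (6)-(7)) and last-layer encoding (formula (8)).
import hashlib

BLOCK = 32                       # 32-byte labels / data blocks
MOD = 1 << (8 * BLOCK)           # big-number addition is taken modulo 2^256 here

def h_ped(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()          # stand-in for H_ped

def parents(i: int, d: int, n: int):
    """Toy Parent(N, i, d): d deterministic predecessors of node i (formula (6))."""
    return [(i * 7 + k * 13 + 1) % n for k in range(d)]

def label_layers(tau_d, bid: bytes, layers: int, d: int):
    """Compute label data layer by layer; layer 0 holds the initial labels tau_D."""
    n = len(tau_d)
    labels = [list(tau_d)]
    for _ in range(layers - 1):
        prev = labels[-1]
        cur = []
        for i in range(n):
            deps = b"".join(prev[p] for p in parents(i, d, n))
            cur.append(h_ped(deps + bid))          # formula (7)
        labels.append(cur)
    return labels

def encode_last_layer(key_labels, data_blocks):
    """Add each data block to its key label modulo 2^256 (simplified formula (8))."""
    out = []
    for v, d_i in zip(key_labels, data_blocks):
        r = (int.from_bytes(v, "big") + int.from_bytes(d_i, "big")) % MOD
        out.append(r.to_bytes(BLOCK, "big"))
    return out

tau_d = [h_ped(b"tau" + bytes([i])) for i in range(8)]     # initial labels
data_blocks = [bytes([i]) * BLOCK for i in range(8)]       # split input data D
labels = label_layers(tau_d, bid=b"BID-0", layers=4, d=3)  # layers up to l_d - 1
R = encode_last_layer(labels[-1], data_blocks)             # backup encrypted data
```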
DExtract(BID, τ_D, R, aux) → D: the data decryption sub-algorithm, used to decrypt the backup encrypted data R and restore the input data D. The decryption algorithm likewise first computes the label data of each node in layer l_d − 1 and then uses it to recover the original data blocks of the last layer.
DPoll(N) → r: a random challenge algorithm used, for any one piece of backup encrypted data R, to select the encrypted data blocks to be verified from the plurality of encrypted data blocks R_i. As one implementation, the essence of the algorithm is to randomly pick l challenge sequence numbers from an integer set of length N, generating a challenge vector r = (r_1, ..., r_l). As another implementation, the challenge sequence numbers may be generated by a public pseudo-random function or a hash function; for example, the consensus node uses the hash value of the current block as a seed and derives l integers smaller than N with a hash algorithm. After obtaining the challenge vector, the encrypted data blocks to be verified are selected from the encrypted data blocks R_i according to the elements r_1, ..., r_l of the challenge vector. In this way a non-interactive dynamic replication proof can be realized: the client does not need to run synchronously with the consensus node, and it is guaranteed that the consensus node cannot forge the challenge vector r.
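A Python sketch of the hash-seeded variant follows; deriving each index by hashing the seed with a counter is one common construction and is an assumption here, not the patent's exact derivation.

```python
# Illustrative non-interactive challenge derivation: l indices below N from a block hash.
import hashlib

def dpoll(block_hash: bytes, n_blocks: int, l: int) -> list:
    """Derive l challenge sequence numbers deterministically from the seed."""
    challenges = []
    for counter in range(l):
        digest = hashlib.sha256(block_hash + counter.to_bytes(4, "big")).digest()
        challenges.append(int.from_bytes(digest, "big") % n_blocks)
    return challenges

r = dpoll(hashlib.sha256(b"current block").digest(), n_blocks=1024, l=8)
assert len(r) == 8 and all(0 <= idx < 1024 for idx in r)
```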
A proving sub-algorithm, expressed as DProve(pp, R, aux, BID, S_t, r) → π_R, used to generate the zero-knowledge proof information of the backup encrypted data. It needs to prove the computational correctness of the replication encryption process and the Merkle proofs of the data blocks selected by the challenge vector; verifying the correctness of these proofs is itself a computation and can be carried out with the proving sub-algorithm. The challenge vector, the backup encrypted data and the split data blocks are taken as inputs to generate the zero-knowledge proof information of the backup encrypted data.
In one embodiment, when the zero-knowledge proof processing is performed on the backup encrypted data, target sequence numbers are generated randomly, verification data blocks are selected from the backup encrypted data according to the target sequence numbers, and the Merkle proofs of the verification data blocks are obtained. The elements of the accumulator at the current time and the Merkle proofs of the verification data blocks are then processed with the proving sub-algorithm of the zero-knowledge proof algorithm to obtain the zero-knowledge proof information of the backup encrypted data and the second public parameter.
For example: the random challenge generation algorithm DPoll(N) is executed to obtain l challenge sequence numbers, which form the challenge vector r; then the proving sub-algorithm DProve(pp, R, aux, BID, S_t, r) is executed, outputting the zero-knowledge proof information π_R of the backup encrypted data. The proving sub-algorithm proves the computational correctness of the replication encryption process and of the encrypted data blocks R_j selected by the challenge vector, where j ∈ r.
wherein pp denotes the global public parameter, R denotes one piece of backup encrypted data, aux denotes the auxiliary data, in particular the Merkle proofs of the encrypted data blocks in the backup encrypted data, BID denotes the identifier of the backup encrypted data, and S_t denotes the prime number vector of the accumulator when the proving sub-algorithm is executed.
S206, the consensus node generates a second transaction request according to the zero-knowledge proof information of the backup encrypted data and the second public parameter.
In this step, the second public parameter comprises the identity BID of the backup encrypted data and the challenge vector r. The consensus node packages the zero knowledge proof information π_R of the backup encrypted data, the identity BID of the backup encrypted data and the challenge vector r into transaction content to be sent to the client.
And S207, the consensus node sends a second transaction request to the client.
And S208, the client verifies the zero-knowledge proof information of the backup encrypted data according to the second public parameter.
In this step, the client extracts the zero knowledge proof information π_R of the backup encrypted data, the identity BID of the backup encrypted data and the challenge vector r from the second transaction request, then executes the verification sub-algorithm DVerify(pp, π_R, BID, r) and determines the verification result from the output of the verification sub-algorithm.
In this technical scheme, before the input data is replicated and encrypted, the input data is subjected to length-doubling padding so that it can be divided according to the replication encryption boundary length; data blocks of the same size can therefore be encrypted with the non-zigzag reverse binary depth robust graph algorithm to obtain the backup encrypted data, so that input data of any length can be encrypted. When verifying the correctness of the encryption process and the Merkle proofs of the encrypted data blocks, a zero knowledge proof construction algorithm is used to generate zero knowledge proof information of the backup data; the correctness of the encryption process and of the Merkle proofs of the encrypted data blocks can then be checked simply by verifying this zero knowledge proof information, without verifying the Merkle proofs themselves. This simplifies the verification process, and the scheme is suited to expansion of the number of consensus nodes and of the amount of stored data.
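As an illustrative, non-limiting sketch of the padding-and-splitting step described above, the fragment below pads the input to a multiple of an assumed replication-encryption boundary and cuts it into equal-sized blocks; the boundary value and the zero-padding rule are assumptions, since the patent only states that the data is extended so it can be divided along the boundary.

```python
REPLICATION_BOUNDARY = 64  # assumed replication-encryption boundary length in bytes

def pad_and_split(input_data: bytes, boundary: int = REPLICATION_BOUNDARY) -> list:
    """Pad the input so its length is a multiple of the boundary, then split it."""
    remainder = len(input_data) % boundary
    if remainder:
        input_data += b"\x00" * (boundary - remainder)   # assumed zero padding
    return [input_data[i:i + boundary] for i in range(0, len(input_data), boundary)]

blocks = pad_and_split(b"arbitrary-length input data")
assert all(len(b) == REPLICATION_BOUNDARY for b in blocks)
```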
As shown in fig. 7, an embodiment of the present application provides a data processing method based on a block chain, where the data processing method is applied to a block chain system, and the data processing method specifically includes the following steps:
S301, the client obtains the return time and the first input data identifier, and generates a version return request according to the return time and the first input data identifier.
In this step, the user may choose which modified version to roll back to when modifying the input data. The client receives the return time and the first input data identifier of the data to be rolled back, both input by the user, and generates the version return request by packaging the first input data identifier of the data to be rolled back together with the return time.
S302, the consensus node receives a version returning request sent by the client.
S303, the consensus node deletes the log domain data blocks which are associated with the first input data identifier and added after the return time, and generates a non-member proof of the deleted log domain data blocks.
In this step, the consensus node extracts the return time and the first input data identifier from the version return request, determines the associated log domain data blocks in the log domain according to the first input data identifier, and determines the log domain data blocks to be deleted according to their adding time: the log domain data blocks whose adding time is later than the return time are taken as the log domain data blocks to be deleted. The consensus node then deletes these log domain data blocks from the log domain and generates the non-member proof of the deleted log domain data blocks.
Wherein, when generating the non-member proof of the deleted log domain data block, the delete-element method of the accumulator is executed on the deleted log domain data to obtain the element of the accumulator at the second update time, thereby updating the prime number vector S_t of the accumulator. The element of the accumulator at the second update time and the element corresponding to the deleted log domain data block are then processed according to the accumulator's method for generating a non-member proof, so as to obtain the non-member proof of the deleted log domain data block. That is, a non-member proof is made for the prime number corresponding to the log domain data block to be deleted with respect to the prime number vector of the accumulator, and this non-member proof of the prime number is used as the non-member proof of the deleted log domain data block.
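The accumulator operations above (deleting a prime element and producing a non-member proof for it) can, for example, be realised with an RSA accumulator. The sketch below, using an insecurely small modulus and toy values chosen only for illustration, builds a non-membership witness from Bézout coefficients and verifies it; all names and parameters are assumptions, not the patent's concrete construction.

```python
from math import gcd

# Toy RSA modulus (insecurely small, for illustration only)
P, Q = 1009, 1013
N = P * Q
G = 3  # public base

def accumulate(primes: list) -> int:
    """Accumulator value A = G^(product of member primes) mod N."""
    acc = G
    for p in primes:
        acc = pow(acc, p, N)
    return acc

def _ext_gcd(u: int, x: int):
    """Extended Euclid: returns (a, b) with a*u + b*x == gcd(u, x)."""
    old_r, r, old_s, s, old_t, t = u, x, 1, 0, 0, 1
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_s, old_t

def non_membership_witness(primes: list, x: int):
    """Bezout-based non-membership witness (a, d) for a prime x not in the set."""
    u = 1
    for p in primes:
        u *= p
    assert gcd(u, x) == 1, "x must not divide the accumulated product"
    a, b = _ext_gcd(u, x)
    d = pow(G, b, N)   # negative exponents use the modular inverse (Python 3.8+)
    return a, d

def verify_non_membership(acc: int, x: int, witness) -> bool:
    """Check A^a * d^x == G (mod N), which holds iff a*u + b*x == 1."""
    a, d = witness
    return (pow(acc, a, N) * pow(d, x, N)) % N == G

members = [5, 7, 11]                          # primes of the remaining log domain blocks
acc = accumulate(members)
proof = non_membership_witness(members, 13)   # 13 stands for the deleted block's prime
assert verify_non_membership(acc, 13, proof)
```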
S304, the consensus node performs zero knowledge proof processing on the non-member proof to obtain zero knowledge proof information and a third public parameter of the non-member proof.
In this step, the zero knowledge proof algorithm for the non-member proof of the log domain data block has already been described in S104, and the proof sub-algorithm described in S104 is referenced directly. By executing the proof sub-algorithm, the zero knowledge proof information π_u of the non-member proof is output.
S305, the consensus node generates a third transaction request according to the zero-knowledge proof information of the non-member proof and the third public parameter.
In this step, the third public parameter is the value A_t of the accumulator at the second update time. The consensus node packages the zero knowledge proof information π_u of the non-member proof and the third public parameter A_t into transaction content to be sent to the client.
S306, the consensus node sends the third transaction request to the client.
S307, the client verifies the zero-knowledge proof information of the non-member proof according to the third public parameter.
In this step, the client extracts the zero knowledge proof information π_u of the non-member proof and the third public parameter A_t from the third transaction request, then executes the verification sub-algorithm DVerify(pp, π_u, A_t) and determines the verification result from the output of the verification sub-algorithm.
In this technical scheme, when the user needs to return to a previous data modification version, the user initiates a version return request so that the consensus node deletes the corresponding log domain data block from the log domain, updates the prime number vector of the accumulator, generates a non-member proof from the updated prime number vector and the prime number corresponding to the deleted log domain data block, and generates zero knowledge proof information of the non-member proof. The correctness of the non-member proof can thus be verified simply by verifying the zero knowledge proof information, which simplifies the verification process. Moreover, because the modification operations on the input data are stored in the log domain, the data version can be rolled back without replicating and encrypting the input data again, which simplifies the data processing process.
As shown in fig. 8, an embodiment of the present application provides a data processing method based on a block chain, where the data processing method is applied to a block chain system, and the data processing method specifically includes the following steps:
S401, the client obtains the second input data identifier and generates a data reading request according to the second input data identifier.
In this step, when a user needs to read or download data from the consensus node, a second input data identifier FID of the data to be read is obtained, and the second input data identifier is encapsulated to generate a data reading request.
S402, the consensus node receives a data reading request sent by the client.
And S403, the consensus node acquires the initial label data of any backup encrypted data corresponding to the second input data identifier, and decrypts the backup encrypted data according to the decryption algorithm and the initial label data of the backup encrypted data to acquire the input data.
In this step, after receiving the data reading request, the consensus node parses it to obtain the second input data identifier, searches for the backup encrypted data corresponding to the second input data identifier, selects any one piece of backup encrypted data, and obtains the initial tag data corresponding to that backup encrypted data. It then calculates the layer l_d − 1 label data according to formulas (5) to (7), determines the parent node of each encrypted data block according to the relationship between the layer l_d − 1 label data and each encrypted data block in the backup encrypted data, and obtains the input data from the parent nodes of each encrypted data block and each encrypted data block. Specifically, the calculation is performed using formula (9).
Bigmin(v_1, v_2, ..., v_d, R_i) → D_i    (9)

where v_1, v_2, ..., v_d represent the parent nodes of the encrypted data block R_i, D_i represents the decrypted data block corresponding to the encrypted data block R_i, the input data D is (D_1, D_2, ..., D_i, ..., D_N), and Bigmin(·) represents big-number subtraction.
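A hedged reading of formula (9), treating each block as a large integer and assuming that the sealing step simply added the parent labels modulo a fixed block width, is sketched below; the modulus and the additive sealing rule are assumptions made only to illustrate the subtraction-based decryption.

```python
BLOCK_BYTES = 32
MOD = 1 << (8 * BLOCK_BYTES)

def seal_block(data_block: int, parent_labels: list) -> int:
    """Toy sealing rule: add the parent labels to the data block (mod 2^256)."""
    return (data_block + sum(parent_labels)) % MOD

def bigmin(parent_labels: list, encrypted_block: int) -> int:
    """Formula (9) read as big-number subtraction of the parent labels."""
    return (encrypted_block - sum(parent_labels)) % MOD

parents = [0x1234, 0xBEEF, 0xCAFE]          # labels of v_1, v_2, v_3 (illustrative values)
d_i = 0xDEADBEEF                            # original data block D_i
r_i = seal_block(d_i, parents)              # encrypted data block R_i
assert bigmin(parents, r_i) == d_i          # decryption restores D_i
```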
S404a, when a log domain data block associated with the second input data identifier exists in the operation domain, the consensus node parses the associated log domain data block to obtain the operation information and the operation time.
In this step, the log domain is searched using the second input data identifier, and if log domain data blocks associated with the second input data identifier exist in the operation domain, all of the obtained log domain data blocks are parsed to obtain the operation information and operation time for the input data.
S405a, the consensus node updates the input data according to the operation information and the operation time to obtain the updated input data, and sends the updated input data to the client.
In this step, the operation information is sorted in order of operation time, the input data is updated with each piece of operation information in turn, the result is taken as the updated input data, and the updated input data is sent to the client.
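A minimal sketch of this replay step is given below, assuming that each log domain data block carries an operation time and a byte-range overwrite as its operation information; both field layouts are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class LogEntry:
    op_time: int          # operation time used for ordering
    offset: int           # assumed operation info: where to write
    payload: bytes        # assumed operation info: what to write

def replay_log(input_data: bytes, entries: list) -> bytes:
    """Apply log domain operations to the decrypted input data in time order."""
    data = bytearray(input_data)
    for entry in sorted(entries, key=lambda e: e.op_time):
        end = entry.offset + len(entry.payload)
        if end > len(data):
            data.extend(b"\x00" * (end - len(data)))
        data[entry.offset:end] = entry.payload
    return bytes(data)

updated = replay_log(b"hello world", [
    LogEntry(op_time=2, offset=6, payload=b"chain"),
    LogEntry(op_time=1, offset=0, payload=b"HELLO"),
])
# -> b"HELLO chain"
```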
S404b, if the operation domain does not have the log domain data block associated with the second input data identifier, the consensus node directly sends the input data to the client.
In the above technical solution, when reading the input data stored on the consensus node, the backup encrypted data and the initial tag data need to be read from the data domain, and the backup encrypted data is decrypted with the initial tag data to output the input data. If associated log domain data blocks exist, the log domain data blocks associated with the second input data identifier also need to be read from the log domain and the input data updated according to the operation information in them, so that the modified input data is provided to the user, thereby realizing the data reading operation.
As shown in fig. 9, an embodiment of the present application provides a data processing method based on a block chain, where the data processing method is applied to a block chain system, and the data processing method specifically includes the following steps:
S501, the recursion node receives the zero knowledge proof information sent by each consensus node in the block chain system and the state information of the consensus node at the time each piece of zero knowledge proof information was generated.
In this step, the zero knowledge proof information includes any one or a combination of zero knowledge proof information of member proof, zero knowledge proof information of non-member proof, and zero knowledge proof information of backup encrypted data.
For each consensus node, when providing data replication storage service, zero knowledge proof information such as member proof zero knowledge proof information of each element in the accumulator, non-member proof zero knowledge proof information of each element in the accumulator, and zero knowledge proof information of backup encrypted data needs to be generated. The consensus node sends the generated zero knowledge proof information to the recursion node, and sends the state information of the consensus node when the zero knowledge proof information is generated to the recursion node, so that the recursion node can generate the root proof and the root state information.
S502, recursion node carries out recursion aggregation processing on each zero knowledge proof information and state information of the consensus node when each zero knowledge proof information is generated, and root zero knowledge proof information and root state information are obtained.
In this step, since all the consensus nodes use the same zero knowledge proof circuit, the zero knowledge proof information they generate can be collected by one recursive node and recursively aggregated. As shown in fig. 10, the recursive aggregation specifically includes the following steps:
S5021, the pieces of zero knowledge proof information are combined pairwise to obtain a plurality of sub-combinations, and for each sub-combination, the consensus node states corresponding to the two pieces of zero knowledge proof information in the sub-combination are processed to obtain the combined state of each sub-combination.
The node state comprises state information such as the storage consumption and block height of the consensus node, and the role of the state transition sub-algorithm is to link the whole recursive process together. Specifically, the state transition sub-algorithm is used to process the consensus node states corresponding to the two pieces of zero knowledge proof information in each sub-combination to obtain the combined state of each sub-combination, and the sub-combinations are taken as the respective combinations of the first layer.
Taking the first combination of the first layer as an example, the state transition sub-algorithm is applied to st_1 and st_2 to obtain the combined state of the first combination of the first layer, where st_1 denotes the node state corresponding to one piece of zero knowledge proof information in the first combination and st_2 denotes the node state corresponding to the other piece of zero knowledge proof information in the first combination.
S5022, for each sub-combination, zero knowledge proof is performed on the two pieces of zero knowledge proof information in the sub-combination, the consensus node states corresponding to those two pieces of zero knowledge proof information, and the combined state of the sub-combination, so as to obtain the zero knowledge proof information of each sub-combination.
Specifically, a recursive proof algorithm is used to obtain the zero knowledge proof information of each sub-combination. The recursive proof algorithm is used to verify the correctness of the two zero knowledge proofs in the sub-combination and the correctness of the conversion of the node states into the combined state.
The recursive proof algorithm converts the verification sub-algorithm DVerify(pp, π, BID, r, A_t) for the zero knowledge proof information of the backup encrypted data described in the above embodiment into an arithmetic circuit. It takes as secret input w any two pieces of zero knowledge proof information (π_1, π_2), the common inputs (x_1, x_2) required to verify those two pieces of zero knowledge proof information, and the node states (st_1, st_2) at the time the two pieces of zero knowledge proof information were generated; it takes as common input x the combined state obtained after the recursive state transition; and it finally outputs the zero knowledge proof information of the sub-combination.
and S5023, grouping the zero knowledge proving information of each sub-combination pairwise to obtain a plurality of father combinations, and processing the combination state corresponding to the two zero knowledge proving information in the father combinations aiming at each father combination to obtain the combination state of each father combination.
After the zero knowledge proof information of the sub-combinations is grouped pairwise to obtain a plurality of father combinations, the state transition sub-algorithm is specifically adopted to process the combined states of the two sub-combinations in each father combination to obtain the combined state of each father combination.
For example: combining the zero knowledge proof information of the first combination in the first layer and the zero knowledge proof information of the second combination to obtain a combined state of the first combination in the second layer, specifically according to the following formula:
Figure BDA0003420198520000166
wherein the content of the first and second substances,
Figure BDA0003420198520000167
indicating the combined state of the first combination at the first layer,
Figure BDA0003420198520000168
indicating a combined state of a second combination at the first level,
Figure BDA0003420198520000169
indicating the combined state of the first combination at the second level.
The combination status of other combinations of the second layer can also be calculated with reference to equation (10), and will not be described herein.
And S5024, aiming at each father combination, performing zero knowledge proof on the two pieces of zero knowledge proof information in the father combination, the combination state of the child combination in the father combination and the combination state of the father combination to obtain zero knowledge proof information of each father combination.
And performing zero-knowledge proof on the two pieces of zero-knowledge proof information in the parent combination, the combination state of the child combination in the parent combination and the combination state of the parent combination by using a recursive proof algorithm to obtain the zero-knowledge proof information of each parent combination.
Continuing with the example in S5023, the calculation of the zero knowledge proof information of the first combination at the second layer according to formula (11) is illustrated: the recursive proof algorithm takes the zero knowledge proof information of the first combination at the first layer, the zero knowledge proof information of the second combination at the first layer and the corresponding public parameters as input, and outputs the zero knowledge proof information of the first combination at the second layer.
The zero knowledge proof information of other combinations of the second layer can also be calculated with reference to equation (11), and will not be described herein.
And S5025, judging whether the number of the father combinations is 1, if so, entering S5026, and otherwise, turning to S5021.
In this step, when the number of the parent combinations is 1, the recursion is completed, and when the number of the parent combinations is greater than 1, the grouping is continued and the grouped combination state and zero knowledge proof information are calculated, that is, the recursion aggregation is continued until the number of the parent combinations is 1.
S5026, taking zero knowledge proof information of the father combination obtained in the last circulation as root zero knowledge proof, and taking the combination state of the father combination obtained in the last circulation as root state information.
In this step, after the recursive aggregation, the zero knowledge proof information of the father combination obtained in the last cycle is taken as the root zero knowledge proof π_root, and the combined state of the father combination obtained in the last cycle is taken as the root state information st_root.
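The pairwise recursion of S5021 to S5026 can be pictured as repeatedly halving a list of (proof, state) pairs until one root pair remains. The sketch below uses placeholder combine functions standing in for the state transition sub-algorithm and the recursive proof algorithm (a real system would run a SNARK there); all names and the state-combination rule are assumptions, shown only to make the control flow concrete.

```python
import hashlib
from typing import NamedTuple

class NodeState(NamedTuple):
    storage_used: int     # storage consumption reported by the consensus node
    block_height: int     # block height when the proof was generated

def combine_state(a: NodeState, b: NodeState) -> NodeState:
    """Placeholder state transition: aggregate storage, keep the newest height."""
    return NodeState(a.storage_used + b.storage_used, max(a.block_height, b.block_height))

def recursive_prove(p1: bytes, p2: bytes, combined: NodeState) -> bytes:
    """Placeholder for the recursive proof algorithm."""
    return hashlib.sha256(p1 + p2 + repr(combined).encode()).digest()

def aggregate(proofs: list, states: list):
    """Pair up proofs layer by layer until a single root proof and root state remain."""
    while len(proofs) > 1:
        next_proofs, next_states = [], []
        for i in range(0, len(proofs) - 1, 2):
            combined = combine_state(states[i], states[i + 1])
            next_states.append(combined)
            next_proofs.append(recursive_prove(proofs[i], proofs[i + 1], combined))
        if len(proofs) % 2:                      # carry an unpaired proof up one layer
            next_proofs.append(proofs[-1])
            next_states.append(states[-1])
        proofs, states = next_proofs, next_states
    return proofs[0], states[0]

root_proof, root_state = aggregate(
    [b"\x01", b"\x02", b"\x03", b"\x04"],
    [NodeState(10, 100), NodeState(12, 101), NodeState(9, 99), NodeState(11, 102)],
)
```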
S503, the recursion node generates a fourth transaction request according to the root zero knowledge proof information and the root state information.
In this step, the recursion node packages the root zero knowledge proof information and the root state information and then generates the fourth transaction request.
S504, the recursion node sends a fourth transaction request to the consensus node in the blockchain system.
Wherein the fourth transaction request is used for the blockchain system to store the root zero knowledge proof according to the root state information. When verifying the root zero knowledge proof, the verification algorithm RDVerify(π_root, st_root) is executed to perform the zero knowledge verification, and whether to upload to the block chain is determined according to the verification result.
In the above technical solution, the recursive zero knowledge proof is a recursive aggregation of the zero knowledge proofs themselves. Since the verification computation of a zero knowledge proof is itself an NP program, a circuit can likewise be built for this computation and proved; multiple zero knowledge proofs are used as common input to obtain a recursive zero knowledge proof, and after layer-by-layer recursion a root zero knowledge proof is obtained.
In addition, the recursion node recursively aggregates the zero knowledge proof information, and the block chain system maintains the state of the data storage system composed of all the consensus nodes through the consensus algorithm and verifies the root proof of the proof of space-time and the root proof of the dynamic replication proof, so that neither the consensus nodes nor the recursion node can cheat.
An embodiment of the present application provides a data processing method based on a block chain, where the data processing method is applied to a block chain system, and the data processing method specifically includes the following steps:
S601, the consensus node obtains the zero knowledge proof information it has generated and the state information of the consensus node at the time each piece of zero knowledge proof information was generated.
In this step, each consensus node acquires zero knowledge proof information generated for itself and state information of the consensus node at the time of generating each zero knowledge proof information.
S602, performing recursive aggregation processing on each zero knowledge proof information and the state information of the consensus node when each zero knowledge proof information is generated by the consensus node to obtain root zero knowledge proof information and root state information.
The consensus node performs recursive aggregation on the zero knowledge proof information generated by the consensus node to obtain root zero knowledge proof information and root state information, wherein the root zero knowledge proof information is used as a part of the space-time proof, and the root state information is used for verifying the correctness of the root zero knowledge proof information.
The recursive aggregation process is the same as in S502, and is not described in detail here.
And S603, generating a fifth transaction request by the consensus node according to the root zero knowledge certification information and the root state information.
In this step, the consensus node packages the root zero knowledge proof information and the root state information and then generates the fifth transaction request.
S604, the consensus node sends the fifth transaction request to the other consensus nodes.
Wherein the fifth transaction request is for uploading root zero knowledge proof information and root state information to the blockchain system.
In this technical scheme, the number of proofs can be greatly reduced, thereby reducing the bandwidth pressure on the block chain and the verification computation pressure on the verifier, and increasing the scalability of the system without reducing security.
As shown in fig. 11, an embodiment of the present application provides an electronic device 700, where the electronic device 700 includes a memory 701 and a processor 702.
The memory 701 is used for storing computer instructions executable by the processor.
The processor 702, when executing computer instructions, performs the steps of the methods in the embodiments described above. Reference may be made in particular to the description relating to the method embodiments described above.
Alternatively, the memory 701 may be separate or integrated with the processor 702. When the memory 701 is provided separately, the electronic device further includes a bus for connecting the memory 701 and the processor 702.
The embodiment of the present application further provides a computer-readable storage medium, in which computer instructions are stored, and when the processor executes the computer instructions, the steps in the method in the foregoing embodiment are implemented.
Embodiments of the present application further provide a computer program product, which includes computer instructions, and when the computer instructions are executed by a processor, the computer instructions implement the steps of the method in the above embodiments.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (17)

1. A data processing method based on a block chain is applied to a consensus node in the block chain, and the method comprises the following steps:
receiving a data modification request sent by a client, wherein the data modification request comprises a first input data identifier and operation information;
processing the first input data identifier and the operation information to obtain a log domain data block and a member certificate of the log domain data block;
performing zero knowledge proof processing on the member proof of the log domain data block to obtain zero knowledge proof information and a first public parameter of the member proof;
and generating a first transaction request according to the zero knowledge proof information proved by the member and the first public parameter, and sending the first transaction request to the client, wherein the first transaction request is used for enabling the client to verify the zero knowledge proof information proved by the member according to the first public parameter.
2. The data processing method according to claim 1, wherein processing the first input data identifier and the operation information to obtain a membership certificate of the log field data block specifically includes:
executing an add element method of an accumulator according to the log domain data to obtain elements of the accumulator at a first updating moment;
processing elements of the accumulator at a first update time according to a generate member attestation method of the accumulator to obtain member attestation of the log domain data block.
3. The data processing method according to claim 2, wherein performing zero-knowledge proof processing on the member proof of the log domain data block to obtain zero-knowledge proof information and a first common parameter of the member proof specifically includes:
and processing the global public parameter, the member certification of the log domain data block, the element of the accumulator at the first updating moment and the value of the accumulator at the first updating moment by using a certification sub-algorithm in the zero knowledge certification algorithm to obtain zero knowledge certification information and the first public parameter of the member certification.
4. A data processing method according to any one of claims 1 to 3, characterized in that the method further comprises:
receiving a data storage request sent by a client, wherein the data storage request comprises input data;
performing double-length supplementary processing on the input data according to a preset copying encryption boundary length to obtain processed input data, and performing segmentation processing on the processed input data to obtain a plurality of data blocks;
copying and encrypting the plurality of data blocks to obtain backup encrypted data and certification auxiliary data corresponding to the backup encrypted data;
performing zero knowledge proof processing on the backup encrypted data according to the proof auxiliary data to obtain zero knowledge proof information and a second public parameter of the backup encrypted data;
and generating a second transaction request according to the zero knowledge proof information of the backup encrypted data and the second public parameter, and sending the second transaction request to the client, wherein the second transaction request is used for enabling the client to verify the zero knowledge proof information of the backup encrypted data according to the second public parameter.
5. The data processing method according to claim 4, wherein performing zero knowledge proof processing on the backup encrypted data according to the proof auxiliary data to obtain zero knowledge proof information and a second common parameter of the backup encrypted data specifically includes:
randomly generating a target serial number, selecting a verification data block from the backup encrypted data according to the target serial number, and acquiring a Merkle proof of the verification data block according to the proof auxiliary data;

processing the element of the accumulator at the current moment and the Merkle proof of the verification data block by using a proof sub-algorithm in a zero knowledge proof algorithm to obtain zero knowledge proof information and a second public parameter of the backup encrypted data; wherein the second public parameter comprises the target serial number and an identification of the backup encrypted data.
6. The data processing method according to claim 4, wherein the performing of the duplication encryption processing on the plurality of data blocks to obtain the backup encrypted data and the certification auxiliary data corresponding to the backup encrypted data specifically includes:
processing the initial label data of each data block by using a multilayer binary depth robust graph algorithm to obtain the layer l_d − 1 label data of each data block;

encrypting each data block according to the layer l_d − 1 label data of each data block to obtain the backup encrypted data, and using the Merkle proof of the backup encrypted data as the proof auxiliary data;

wherein l_d represents the total number of layers of the multilayer binary depth robust graph.
7. A data processing method according to any one of claims 1 to 3, characterized in that the method further comprises:
receiving a version returning request sent by a client, wherein the version returning request comprises returning time and a first input data identifier;
deleting the log domain data block associated with the first input data identification after the rollback time and generating a non-member attestation of the deleted log domain data block;
carrying out zero knowledge proof processing on the non-member proof to obtain zero knowledge proof information and a third public parameter of the non-member proof;
and generating a third transaction request according to the zero knowledge proof information of the non-member proof and the third public parameter, and sending the third transaction request to the client, wherein the third transaction request is used for enabling the client to verify the zero knowledge proof information of the non-member proof according to the third public parameter.
8. The data processing method of claim 7, wherein the generating the non-member certificate of the deleted log domain data block specifically comprises:
executing a delete element method of an accumulator according to the deleted log domain data to obtain an element of the accumulator at a second update time;
and processing the elements of the accumulator at the second updating moment and the elements corresponding to the deleted log domain data blocks according to a method for generating the non-member certification by the accumulator to obtain the non-member certification of the deleted log domain data blocks.
9. A data processing method according to any one of claims 1 to 3, characterized in that the method further comprises:
receiving a data reading request sent by the client, wherein the data reading request comprises a second input data identifier;
acquiring initial label data of any backup encrypted data corresponding to the second input data identifier, and decrypting the backup encrypted data according to a decryption algorithm and the initial label data of the backup encrypted data to obtain input data;
when a log domain data block associated with a second input data identifier exists in the operation domain, analyzing the log domain data block to obtain operation information and operation time;
and updating the input data according to the operation information and the operation time to obtain updated input data, and sending the updated input data to the client.
10. A data processing method based on a block chain is used for a client side, and the method comprises the following steps:
acquiring a first input data identifier and operation information, and generating a data modification request according to the first input data identifier and the operation information;
Sending the data modification request to a consensus node in a block chain system, wherein the data modification request is used for enabling the consensus node to process the first input data identifier and the operation information to obtain a log domain data block and a member certificate of the log domain data block, and performing zero-knowledge certification processing on the member certificate of the log domain data block to obtain zero-knowledge certification information and a first public parameter of the member certificate; and generating a first transaction request according to the member-certified zero-knowledge-certification information and the first public parameter;
and receiving the first transaction request sent by the consensus node, and verifying the zero knowledge certification information certified by the member according to the first public parameter.
11. The data processing method of claim 10, wherein the method further comprises:
acquiring input data and generating a data storage request according to the input data;
sending the data storage request to the consensus node, wherein the data storage request is used for enabling the consensus node to perform double-length supplementary processing on the input data according to a preset copying encryption limit length to obtain processed input data, and performing segmentation processing on the processed input data to obtain a plurality of data blocks; copying and encrypting the plurality of data blocks to obtain backup encrypted data and certification auxiliary data corresponding to the backup encrypted data; performing zero knowledge proof processing on the backup encrypted data according to the proof auxiliary data to obtain zero knowledge proof information and a second public parameter of the backup encrypted data; generating a second transaction request according to the zero knowledge proof information of the backup encrypted data and the second public parameter;
and receiving the second transaction request sent by the consensus node, and verifying the zero knowledge certification information of the backup encrypted data according to the second public parameter.
12. The data processing method according to claim 10 or 11, characterized in that the method further comprises:
obtaining a return time and a first input data identifier, and generating a version return request according to the return time and the first input data identifier;
sending a version returning request to the consensus node, wherein the version returning request is used for enabling the consensus node to delete the log domain data blocks after the returning moment and generate non-member proofs of the deleted log domain data blocks; carrying out zero knowledge proof processing on the non-member proof to obtain zero knowledge proof information and a third public parameter of the non-member proof; and generating a third transaction request according to the zero knowledge proof information of the non-member proof and the third public parameter;
and receiving the third transaction request sent by the consensus node, and verifying the zero-knowledge proof information of the non-member proof according to the third public parameter.
13. The data processing method according to claim 10 or 11, characterized in that the method further comprises:
acquiring a second input data identifier, and generating a data reading request according to the second input data identifier;
sending the data reading request to the consensus node, wherein the data reading request is used for enabling the consensus node to obtain initial tag data of backup encrypted data corresponding to the second input data identifier, and decrypting the backup encrypted data according to a decryption algorithm and the initial tag data of the backup encrypted data to obtain the input data; when a log domain data block associated with the second input data identifier exists in the operation domain, analyzing the associated log domain data block to obtain operation information and operation time; and processing the input data according to the operation information and the operation time to obtain updated input data;
and receiving the updated input data sent by the consensus node.
14. A data processing method based on a block chain, wherein the method is applied to a proving node, and the method comprises:
receiving zero knowledge proof information sent by each consensus node in the block chain system and state information of the consensus nodes when the zero knowledge proof information is generated;
performing recursive aggregation processing on each zero knowledge proof information and the state information of the consensus node when each zero knowledge proof information is generated to obtain root zero knowledge proof information and root state information;
generating a fourth transaction request according to the root zero knowledge proof information and the root state information;
sending the fourth transaction request to a consensus node in the blockchain system, wherein the fourth transaction request is used for enabling the blockchain system to store the root zero knowledge proof according to the root state information;
the zero knowledge proof information comprises any one or more of zero knowledge proof information of member proof, zero knowledge proof information of non-member proof and zero knowledge proof information of backup encrypted data.
15. The data processing method according to claim 14, wherein the recursively aggregating each piece of zero-knowledge proof information and the state information of the consensus node at the time of generating each piece of zero-knowledge proof information to obtain root zero-knowledge proof information and root state information specifically includes:
combining every two zero knowledge proof information to obtain a plurality of sub-combinations; aiming at each sub-combination, processing the consensus node states corresponding to two pieces of zero knowledge proof information in the sub-combination to obtain the combination state of each sub-combination;
aiming at each sub-combination, carrying out zero knowledge proof on two pieces of zero knowledge proof information in the sub-combination, the consensus node state corresponding to the two pieces of zero knowledge proof information in the sub-combination and the combination state of the sub-combination to obtain zero knowledge proof information of each sub-combination;
repeatedly executing pairwise grouping on the zero knowledge proof information of each sub-combination to obtain a plurality of father combinations, and processing the consensus node states corresponding to the two zero knowledge proof information in the father combinations aiming at each father combination to obtain the combination state of each father combination; zero-knowledge proof information of two zero-knowledge proof information in the father combination, a combination state of a child combination in the father combination and a combination state of the father combination are subjected to zero-knowledge proof aiming at each father combination so as to obtain zero-knowledge proof information of each father combination; and when the number of the father combinations is 1, using zero knowledge proving information of the father combinations obtained in the last circulation as the root zero knowledge proving information, and using the combination state of the father combinations obtained in the last circulation as the root state information.
16. An electronic device, comprising: a processor, and a memory communicatively coupled to the processor; the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored by the memory to implement the blockchain-based data processing method of any one of claims 1 to 9, 10 to 13, or 14 or 15.
17. A computer-readable storage medium having stored thereon computer-executable instructions for implementing a method of block chain based data processing as claimed in any one of claims 1 to 9, 10 to 13, or 14 or 15 when executed by a processor.