Background
The data outsourcing service provided by cloud storage brings great convenience to users and is an important piece of infrastructure in the current cloud computing era; more and more individuals and organizations are migrating data from local storage to the cloud. However, cloud service providers (CSPs) are not fully trusted entities and present multiple security risks even while providing efficient services, one of the biggest being that the integrity of the data held by the CSP cannot be guaranteed. For example, a CSP may suffer hardware crashes or malicious attacks, resulting in corruption of stored data. Worse still, users tend to delete local copies after outsourcing data, so once the integrity of the outsourced data is compromised, the data is permanently lost. It is therefore crucial for users to regularly verify the integrity of cloud data.
Currently, existing cloud data integrity verification schemes can be divided into private verification schemes and public verification schemes according to the assumed verifier model. In a private verification scheme, only the data owner is allowed to audit the data. In contrast, a public verification scheme allows anyone (not just the data owner) to verify the integrity of the data held by the CSP, so that the user can delegate the task to a Third Party Auditor (TPA) without being concerned with verification details; in practice, most users cannot bear the computational and communication overhead of periodic data integrity verification. Our work is therefore directed to public data integrity verification, which is more practical and reasonable.
From the perspective of the type of outsourced data, public data integrity verification schemes can be further divided into two categories: static data integrity verification schemes and dynamic data integrity verification schemes. The former focuses on static data (such as archival data); when the data needs to be updated, the user can only re-outsource the entire updated file to the cloud, which brings huge communication overhead. The latter supports dynamic update operations on data (such as modification, insertion and deletion); when updating the data, the user only needs to submit the part related to the update, which effectively reduces communication overhead, and most data in cloud storage is considered to be dynamic. The core of the verification method and system can thus be summarized as how the user periodically performs integrity verification on dynamic data in cloud storage through an effective scheme.
Existing dynamic data integrity verification schemes typically divide a file into file blocks of the same size and organize the file blocks with various data structures (e.g., skip lists, hash tables). However, these schemes still have the following limitations. First, some schemes require that the TPA participate in the dynamic update process to record block changes in addition to checking data integrity; once an accident such as a power failure or system breakdown occurs at the TPA, the whole dynamic update process cannot continue, which increases the instability of the update process. Second, some schemes adopt a Merkle Hash Tree (MHT) to organize and record block changes at the CSP side, so the TPA is not needed in the dynamic update process. In these schemes, however, the CSP returns an integrity proof upon receipt of an integrity verification request for cloud data. The integrity proof consists essentially of Auxiliary Authentication Information (AAI), which includes the sibling information of all nodes on the verification path from the leaf node to the root node of the MHT. Note that the length of the verification path is proportional to the height of the MHT; that is, as the number of file blocks increases, the size of the returned AAI necessarily increases, resulting in greater communication overhead. Finally, in actual operation, the user may perform a dynamic update operation on a large amount of data at the same time (hereinafter referred to as a batch dynamic update operation). Since the size of each file block is usually fixed at initialization, the user can only repeatedly execute the dynamic update process at the granularity of one small file block, which brings a large computation overhead to the CSP side.
A typical batch dynamic update operation inserts a large number of file blocks at a given position, which inevitably results in a significant increase of the MHT height at that position. This creates a serious tree imbalance problem, i.e., a large difference in verification path length between different leaf nodes of the MHT, and consequently a large time difference each time data integrity is verified.
The Chinese patent application CN103699851A discloses a remote data integrity verification method for cloud storage that uses aggregate signatures and designated-prover signatures to let the user and a third party auditor verify the integrity of the user's data. The Chinese patent application CN109948372A discloses a remote data possession verification method in cloud storage with a designated verifier: the verifier initiates an integrity challenge to the cloud server, and the cloud server generates integrity evidence using the stored data block information and corresponding tag information. However, that method does not support the user dynamically updating cloud data while maintaining the integrity evidence, and when the number of challenged file blocks is large, the verification process causes a remarkable increase in the communication overhead of both parties.
For convenience of description, some related art relevant to the present invention is described below.
Bilinear mapping:
Let G_1, G_2 and G_T be multiplicative cyclic groups of the same prime order p, and let g_1 and g_2 be generators of G_1 and G_2, respectively. A map e: G_1 × G_2 → G_T is a bilinear map if it satisfies the following conditions: 1) Bilinearity: for all h_1 ∈ G_1, h_2 ∈ G_2 and a, b ∈ Z_p, e(h_1^a, h_2^b) = e(h_1, h_2)^(ab); 2) Non-degeneracy: there exist g_1 ∈ G_1 and g_2 ∈ G_2 such that e(g_1, g_2) ≠ 1; 3) Computability: there is an efficient algorithm for computing e.
Homomorphic verifier based on the BLS signature:
Homomorphic verifiers (Homomorphic Verifiable Authenticators, HVAs) are widely adopted in existing public integrity verification schemes. An HVA allows the TPA to verify the integrity of cloud data without accessing or retrieving the original data; the BLS-HVA uses the BLS signature algorithm to generate unforgeable metadata from each file block for the subsequent verification process. In addition, owing to the homomorphic property of the BLS-HVA, the TPA only needs to verify the aggregate value of multiple verifiers returned by the CSP to be sure that the linear combination of the corresponding file blocks was computed correctly, and thereby that the integrity of those file blocks is verified.
Merkle hash tree:
A Merkle Hash Tree (MHT) is a widely studied authentication structure that can efficiently and securely verify whether data has been corrupted. It is constructed as a binary tree in which leaf nodes correspond to the hash values of data blocks, and non-leaf nodes are computed as the cryptographic hash of their child nodes. FIG. 1 depicts an example of authenticating data with an MHT: the data owner has authenticated h_r and now needs to authenticate the received blocks {x_2, x_7}. Specifically, the CSP provides the AAI Ω_2 = <h(x_1), h_d> and Ω_7 = <h(x_8), h_e> to the data owner, and the data owner verifies x_2 and x_7 as follows: compute h_c = h(h(x_1)||h(x_2)), h_f = h(h(x_7)||h(x_8)), h_a = h(h_c||h_d), h_b = h(h_e||h_f) and h_r = h(h_a||h_b), then check whether the computed h_r is consistent with the authenticated h_r. In this scheme, the leaf nodes are arranged in left-to-right order, and each leaf is uniquely determined by the path from it to the root node, so the position of a block can also be verified.
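The FIG. 1 verification can be sketched minimally in Python, using SHA-256 as the hash h; the block contents are hypothetical placeholders:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Eight example blocks x1..x8 (hypothetical contents).
x = [f"block-{i}".encode() for i in range(1, 9)]
leaf = [h(b) for b in x]

# Internal nodes, following the naming of FIG. 1.
h_c = h(leaf[0] + leaf[1])   # h(h(x1)||h(x2))
h_d = h(leaf[2] + leaf[3])
h_e = h(leaf[4] + leaf[5])
h_f = h(leaf[6] + leaf[7])
h_a = h(h_c + h_d)
h_b = h(h_e + h_f)
h_r = h(h_a + h_b)           # authenticated root held by the data owner

# The CSP returns x2 and x7 together with their AAI.
omega_2 = (leaf[0], h_d)     # <h(x1), h_d>
omega_7 = (leaf[7], h_e)     # <h(x8), h_e>

# The data owner recomputes the root from the received blocks and AAI.
hc = h(omega_2[0] + h(x[1]))          # h(h(x1)||h(x2))
hf = h(h(x[6]) + omega_7[0])          # h(h(x7)||h(x8))
root = h(h(hc + omega_2[1]) + h(omega_7[1] + hf))
assert root == h_r                    # x2 and x7 are authentic
```

If either block or any AAI element is tampered with, the recomputed root no longer matches the authenticated h_r and verification fails.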
Chameleon hash:
The chameleon hash is a trapdoor one-way hash function. Specifically, without knowledge of the trapdoor it is infeasible to compute collisions for a chameleon hash function, while collisions can be computed efficiently once the trapdoor is known. One classical chameleon hash construction is described below.
KeyGen(λ): given a security parameter λ, choose a multiplicative cyclic group G, select a generator g ∈ G and a random value α, and compute v = g^α. The private key sk = (α) serves as the trapdoor, and the public key is pk = (g, v).
CHash(pk, M, r): given a message M and a random value r, the chameleon hash of M is CH = g^M · v^r.
Forge(sk, M, r, M'): to map another message M' to the same chameleon hash value CH as M, the collision r' can be found efficiently by using the trapdoor α to solve the equation M + αr = M' + αr'.
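The three algorithms above can be sketched over a small Schnorr-style group; the demo primes are far too small to be secure and are chosen only so the algebra (CH = g^M · v^r, and solving M + αr = M' + αr' for r') is easy to check:

```python
import secrets

# Demo group parameters (NOT secure): safe prime p = 2q + 1.
q = 1019           # prime order of the subgroup
p = 2 * q + 1      # 2039, also prime
g = 4              # generator of the order-q subgroup of Z_p*

def keygen():
    alpha = secrets.randbelow(q - 1) + 1      # trapdoor
    v = pow(g, alpha, p)                      # public value v = g^alpha
    return alpha, v

def chash(v, M, r):
    # CH = g^M * v^r  (exponents reduced mod q, the subgroup order)
    return pow(g, M % q, p) * pow(v, r % q, p) % p

def forge(alpha, M, r, M_new):
    # Solve M + alpha*r = M_new + alpha*r'  (mod q) for r'
    return (r + (M - M_new) * pow(alpha, -1, q)) % q

alpha, v = keygen()
M, r = 123, secrets.randbelow(q)
M_new = 456
r_new = forge(alpha, M, r, M_new)
assert chash(v, M, r) == chash(v, M_new, r_new)   # collision found with trapdoor
```

Without α, finding such an r' amounts to computing a discrete logarithm, which is exactly the collision resistance the scheme relies on.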
Disclosure of Invention
Aiming at the problem of cloud data integrity in cloud services, the invention provides a cloud data integrity verification method and system supporting efficient dynamic update, which address the low efficiency and high cost of existing cloud data integrity verification techniques.
The technical scheme of the invention is as follows:
a cloud data integrity verification method supporting efficient dynamic update comprises the following steps:
1) The data owner divides the storage file into n file blocks m_i (1 ≤ i ≤ n) to obtain the file F, organizes F into an MHT-list data structure, generates a file tag t for the file F from a random element u, sends the file F, the file tag t, the generated file block homomorphic verifier set Φ and the root node signature set SIG to the cloud service provider, and sends the file tag t to the third party auditor;
2) The third party auditor extracts the random element u from the file tag t and sends the generated random challenge chal to the cloud service provider;
3) The cloud service provider computes an integrity proof P from the file F, the file tag t, the file block homomorphic verifier set Φ, the root node signature set SIG and the random challenge chal, and returns the integrity proof P to the third party auditor;
4) The third party auditor obtains the integrity verification result of the cloud data from the integrity proof P and the random element u, and returns the integrity verification result to the data owner.
Further, the storage file is generated by encoding the original file using a redundancy code.
Further, the file tag t of the file F is generated by the following steps:
1) Randomly generating a signing key pair (spk, ssk) based on the security parameter λ;
2) Selecting a random element α ∈ Z_p* and calculating v ← g^α to obtain the private key sk = (α, ssk) and the public key pk = (g, v, spk), where Z_p* is the multiplicative group of the finite field Z_p, p is a large prime, g is a generator of G_2, and e: G_1 × G_2 → G_T is a bilinear map;
3) Selecting a random element u ← G_1 and calculating the file tag t = name||n||u||SSig_ssk(name||n||u), where SSig_ssk(·) denotes signing with the signature private key ssk, name is the name of the file F, and || denotes concatenation.
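The tag construction t = name||n||u||SSig_ssk(name||n||u) can be sketched as follows. This is a simplification: the signature SSig_ssk is stood in for by an HMAC under a symmetric key (a real deployment would use an actual signature scheme for public verifiability), and u is modeled as a random hex string rather than a group element:

```python
import hmac, hashlib, secrets

ssk = secrets.token_bytes(32)                 # stand-in signing key
name = b"file-001"                            # hypothetical file name
n = 1024                                      # number of file blocks
u = secrets.token_bytes(32).hex().encode()    # hex, so "||" never appears inside

def make_tag(ssk, name, n, u):
    # t = name||n||u||SSig_ssk(name||n||u), with HMAC standing in for SSig
    body = name + b"||" + str(n).encode() + b"||" + u
    sig = hmac.new(ssk, body, hashlib.sha256).hexdigest().encode()
    return body + b"||" + sig

def check_tag_and_extract_u(ssk, t):
    # Verify the tag, then return the embedded random element u
    body, sig = t.rsplit(b"||", 1)
    expected = hmac.new(ssk, body, hashlib.sha256).hexdigest().encode()
    assert hmac.compare_digest(expected, sig)
    return body.rsplit(b"||", 1)[1]

t = make_tag(ssk, name, n, u)
assert check_tag_and_extract_u(ssk, t) == u
```

This mirrors step 1) of the integrity verification phase, where the TPA first checks the tag's signature and then extracts u for subsequent computation.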
Further, the file block homomorphic verifier set Φ is generated by the following steps:
1) Calculating the homomorphic verifier σ_i = (H(m_i) · u^(m_i))^α for each file block m_i;
2) Generating the file block homomorphic verifier set Φ = {σ_i}.
Further, the root node signature set SIG is generated by the following steps:
1) Corresponding each leaf node of the MHT-list data structure to a file block m_i, the value of the leaf node corresponding to file block m_i being H(m_i), where H is the BLS hash function;
2) Signing all root nodes of the MHT-list data structure with the private key sk to obtain the root node signature set SIG = {sig_sk(H(R_j))}, where 1 ≤ j ≤ x and x is the number of MHTs included in the MHT-list data structure.
Further, the random challenge chal is generated by the following steps:
1) Selecting a subset I = {s_1, ..., s_c} of c random elements from the set [1, n], where s_1 ≤ ... ≤ s_c and 1 ≤ c ≤ n;
2) For each i' ∈ I, selecting a random element v_i' ∈ Z_p and generating the random challenge chal = {(i', v_i')}_(i'∈I).
Further, the integrity proof P is obtained by the following steps:
1) Acquiring the homomorphic verifier σ_i' of the file block m_i' corresponding to each i' ∈ I;
2) Obtaining the parameter μ = Σ_(i'∈I) v_i' · m_i' from the random challenge chal and the file blocks m_i';
3) Obtaining the parameter σ = Π_(i'∈I) σ_i'^(v_i') from the random challenge chal and the homomorphic verifiers σ_i';
4) Obtaining the auxiliary authentication information Ω_i' of the leaf nodes from the random challenge chal and the file blocks m_i';
5) Generating the integrity proof P = (μ, σ, SIG, {H(m_i'), Ω_i'}).
Further, the integrity verification result of the cloud data is obtained by the following steps:
1) Computing the root node of the tree where m_i' is located using {H(m_i'), Ω_i'};
2) If the computed root node of the tree where m_i' is located passes verification against the root node signature set SIG, verifying the parameters μ and σ using the bilinear map;
3) If the parameters μ and σ pass verification, obtaining the integrity verification result that the cloud data is intact.
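The aggregation and checking algebra behind these steps can be sketched over a small Schnorr group. Two stated simplifications: the demo primes are insecure toy parameters, and since bilinear pairings are not available in the standard library, the public pairing check is replaced by the equivalent private check σ = (Π H(m_i)^(v_i) · u^μ)^α performed with the secret exponent α:

```python
import hashlib, secrets

# Small Schnorr group (demo only, NOT secure): p = 2q + 1.
q = 1019
p = 2 * q + 1
g = 4                                   # generator of the order-q subgroup

def H(block: bytes) -> int:
    # Hash a block into the subgroup: H(m) = g^(h(m) mod q)
    e = int.from_bytes(hashlib.sha256(block).digest(), "big") % q
    return pow(g, e, p)

alpha = secrets.randbelow(q - 1) + 1    # data owner's private exponent
s = secrets.randbelow(q - 1) + 1
u = pow(g, s, p)                        # random group element u

blocks = [f"m{i}".encode() for i in range(8)]
m = [int.from_bytes(hashlib.sha256(b).digest(), "big") % q for b in blocks]

# sigma_i = (H(m_i) * u^(m_i))^alpha  -- the homomorphic verifiers
sigma = [pow(H(b) * pow(u, mi, p) % p, alpha, p) for b, mi in zip(blocks, m)]

# Challenge: random coefficients v_i for a subset I of block indices.
I = [1, 4, 6]
v = {i: secrets.randbelow(q - 1) + 1 for i in I}

# CSP's proof: mu = sum(v_i * m_i), sigma_agg = prod(sigma_i^(v_i))
mu = sum(v[i] * m[i] for i in I) % q
sigma_agg = 1
for i in I:
    sigma_agg = sigma_agg * pow(sigma[i], v[i], p) % p

# Verifier's check (private variant of the pairing equation):
rhs = pow(u, mu, p)
for i in I:
    rhs = rhs * pow(H(blocks[i]), v[i], p) % p
assert sigma_agg == pow(rhs, alpha, p)
```

The check succeeds because Π (H(m_i)·u^(m_i))^(α·v_i) = (Π H(m_i)^(v_i) · u^(Σ v_i·m_i))^α, which is exactly the homomorphic property the BLS-HVA exploits; the pairing version lets a public verifier run the same check with v = g^α instead of α.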
Further, the data owner modifies n_mod consecutive file blocks starting from file block m_T at the cloud service provider by the following steps:
1) Generating a new file block set M' containing n_mod file blocks and the verifier set Φ' corresponding to M', and sending a data modification request containing the modification start position T, the new file block set M', the verifier set Φ' and the number of file blocks n_mod to the cloud service provider;
2) The cloud service provider generates a new file F' and verifier set Φ' according to the data modification request and, according to the modification start position T, finds in the MHT of the MHT-list data structure where file block m_T is located the subtree ST_in with the smallest height that contains all leaf nodes to be updated;
3) The cloud service provider updates the subtree ST_in, obtains the updated root node, and returns an update proof to the data owner;
4) The data owner verifies and signs the new root node according to the update proof, further finds the collision for the new root node so that the updated MHT-list data structure remains connected, and finally sends the new root node signature and the found collision in a response message to the cloud service provider.
Further, the data owner inserts consecutive file blocks starting after file block m_T at the cloud service provider by the following steps:
1) Generating a new file block set M* containing n_ins file blocks and the verifier set Φ* corresponding to M*, and sending a data insertion request containing the insertion start position T, the new file block set M*, the verifier set Φ* and the number of file blocks n_ins to the cloud service provider;
2) The cloud service provider generates a new file F' and verifier set Φ' according to the data insertion request and, according to the insertion start position T, finds in the MHT of the MHT-list data structure where file block m_T is located the subtree ST_in with the smallest height that can accommodate all leaf nodes to be inserted;
3) The cloud service provider inserts the new leaf nodes, reconstructs ST_in, generates the updated root node, and returns the generated insertion proof to the data owner;
4) The data owner verifies and signs the new root node according to the insertion proof, further finds the collision for the new root node so that the updated MHT-list data structure remains connected, and finally sends the new root node signature and the found collision in a response message to the cloud service provider.
A cloud data integrity verification system supporting efficient dynamic updating, comprising:
the data owner, for dividing the storage file into n file blocks m_i (1 ≤ i ≤ n) to obtain the file F, organizing F into an MHT-list data structure, generating a file tag t for the file F from a random element u, sending the file F, the file tag t, the generated file block homomorphic verifier set Φ and the root node signature set SIG to the cloud service provider, and sending the file tag t to the third party auditor;
the third party auditor, for extracting the random element u from the file tag t and sending the generated random challenge chal to the cloud service provider; and for obtaining the integrity verification result of the cloud data from the integrity proof P and the random element u and returning the integrity verification result to the data owner;
the cloud service provider, for computing an integrity proof P from the file F, the file tag t, the file block homomorphic verifier set Φ, the root node signature set SIG and the random challenge chal, and returning the integrity proof P to the third party auditor.
A cloud data integrity verification method supporting efficient dynamic update comprises the following steps:
1) The data owner divides the storage file into n file blocks m_i (1 ≤ i ≤ n) to obtain the file F, organizes F into an MHT-list data structure, generates a file tag t for the file F from a random element u, and sends the file F, the file tag t, the generated file block homomorphic verifier set Φ and the root node signature set SIG to the cloud service provider;
2) The data owner sends the generated random challenge chal to the cloud service provider;
3) The cloud service provider computes an integrity proof P from the file F, the file tag t, the file block homomorphic verifier set Φ, the root node signature set SIG and the random challenge chal, and returns the integrity proof P to the data owner;
4) The data owner obtains the integrity verification result of the cloud data from the integrity proof P and the random element u.
A cloud data integrity verification system supporting efficient dynamic updating, comprising:
the data owner, for dividing the storage file into n file blocks m_i (1 ≤ i ≤ n) to obtain the file F, organizing F into an MHT-list data structure, generating a file tag t for the file F from a random element u, and sending the file F, the file tag t, the generated file block homomorphic verifier set Φ and the root node signature set SIG to the cloud service provider; for sending the generated random challenge chal to the cloud service provider; and for obtaining the integrity verification result of the cloud data from the integrity proof P and the random element u;
the cloud service provider, for computing an integrity proof P from the file F, the file tag t, the file block homomorphic verifier set Φ, the root node signature set SIG and the random challenge chal, and returning the integrity proof P to the data owner.
Compared with the prior art, the invention has the beneficial effects that:
First, the invention eliminates the requirement of existing cloud data integrity verification schemes that the TPA participate in and record the dynamic update process, thereby improving the stability of the scheme during dynamic updates. Second, to address the large communication overhead caused by overly long verification paths during integrity verification, the invention combines the characteristics of the MHT data structure and designs a two-dimensional data structure, the MHT-list: file blocks are organized by a series of MHTs stored at the CSP side, the root nodes of the MHTs are connected through a linked list, and the linked-list nodes are computed with a chameleon hash function. The introduction of the MHT-list effectively reduces the MHT height of previous schemes, so the AAI size and communication overhead during integrity verification are correspondingly reduced. Finally, the invention sets the granularity of batch dynamic update operations to a subtree of an MHT or to multiple MHTs according to the number of blocks involved, effectively reducing computation and communication overhead compared with operating at the granularity of a single file block.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail by examples and drawings.
The invention adopts a challenge–response mechanism aimed at verifying the integrity of cloud data. Specifically, the data owner or the third party auditor (TPA) initiates a challenge to the CSP by randomly selecting a series of file blocks; the CSP uses the stored cloud data and homomorphic verifiers to generate evidence in response to the challenge; finally, the returned value is verified to determine whether the cloud data is intact. To support dynamic updates in cloud data integrity verification, the invention provides the following design. First, to ensure the stability of the dynamic update process, the TPA participates only in the integrity verification process, not in the dynamic update process. Second, to reduce the communication overhead caused by overly long verification paths, the invention provides a two-dimensional data structure, the MHT-list, in which file blocks are organized at the CSP side through a series of MHTs and the trees are connected through a linked list. Third, to support efficient batch dynamic update operations, the update policy of the invention has the CSP find a subtree of an MHT, or multiple MHTs, containing all leaf nodes to be updated, so the granularity of batch dynamic update operations shifts from a single file block to a subtree of an MHT or multiple MHTs.
FIG. 2 is a diagram of the system model of the present invention, which consists of a Data Owner, a Third Party Auditor (TPA) and a Cloud Service Provider (CSP).
1) Data Owner: the data owner may be an individual user or an organization that outsources data to a cloud with rich storage resources and needs to update the cloud data over time. The data owner can audit the data itself or delegate the verification task to the TPA to verify the integrity of the outsourced data.
2) Third Party Auditor (TPA): the TPA is a trusted entity with expertise and capabilities that the data owner does not possess. Since the TPA holds the public information of the client, it can provide a fair integrity verification service on the client's behalf.
3) Cloud Service Provider (CSP): the CSP provides sufficient storage space and computing resources for data owners. However, the data stored at the CSP may be tampered with or corrupted by internal or external threats.
Specifically, the technical scheme adopted by the invention is as follows:
the invention can be divided into three phases, namely an initialization phase (Setup), an integrity verification phase (Integrity Verification) and a Dynamic Update phase (Dynamic Update). Each stage contains a plurality of polynomial time algorithms. The specific algorithm involved in the invention is as follows:
1. Key generation algorithm KeyGen(λ) → (pk, sk): the data owner runs this algorithm to generate a public key and a private key. It takes the security parameter λ as input and returns the public key pk and the private key sk.
2. File block signature algorithm SigGen(sk, F) → (Φ, SIG, RV): the data owner executes this algorithm to generate the verifiers and metadata. Its inputs are the private key sk and the ordered set F of file blocks; its output comprises the verifier set Φ, the root node signature set SIG and the random value set RV.
3. Proof generation algorithm GenProof(F, Φ, chal) → P: the CSP uses this algorithm to generate a proof after receiving a random challenge. It takes the file F, the verifier set Φ and the challenge chal as inputs, and outputs the integrity proof P for the challenged blocks.
4. Proof verification algorithm VerifyProof(pk, chal, P) → {1, 0}: after receiving the proof P, the TPA executes this algorithm to verify the integrity of the cloud data. It takes the public key pk, the challenge chal and the proof P returned by the CSP as inputs, and outputs the verification result, where 1 indicates that verification passed and 0 that it failed.
5. Dynamic update algorithm ExecDynamic(F, Φ, req) → (F', Φ', resp): the CSP runs this algorithm to perform efficient batch dynamic update operations. Its inputs are the file F, the verifier set Φ and the dynamic update request req sent by the data owner; its output comprises the updated file F', the updated verifier set Φ' and the proof resp for the dynamic update request.
6. Dynamic update verification algorithm VerifyDynamic(pk, req, resp) → {(1, veri), 0}: the data owner uses this algorithm to verify whether the CSP correctly performed the dynamic update operation. It takes the public key pk, the dynamic update request req and the proof resp from the CSP as inputs. If verification succeeds, it outputs 1 and the response veri; otherwise it outputs 0.
First, we make the following assumptions: 1) e: G_1 × G_2 → G_T is a bilinear map, where G_1, G_2 and G_T are multiplicative cyclic groups whose order is a large prime p, and g is a generator of G_2; 2) H: {0,1}* → G_1 is a BLS hash function, modeled as a random oracle, and h is a cryptographic hash function; 3) the data owner encodes the original file as F using a redundancy code and divides it equally into n file blocks m_1, ..., m_n, where m_i ∈ Z_p.
In the invention, the main working tasks of each stage are as follows:
In the initialization phase (Setup), the data owner first encodes the original file into a storage file using a redundancy code and divides it equally into n file blocks m_1, ..., m_n, executes the KeyGen(·) algorithm to generate a public–private key pair, then runs the SigGen(·) algorithm to generate homomorphic verifiers and metadata for the file F, and uploads the relevant information to the CSP. Specifically:
1) The data owner randomly generates a signing key pair (spk, ssk) based on the security parameter λ;
2) Selects a random element α ∈ Z_p*, calculates v ← g^α, and obtains the data owner's private key sk = (α, ssk) and public key pk = (g, v, spk), where Z_p* is the multiplicative group of the finite field Z_p, p is a large prime, g is a generator of G_2, and e: G_1 × G_2 → G_T is a bilinear map;
3) Selects a random element u ← G_1 and calculates the file tag t = name||n||u||SSig_ssk(name||n||u), where SSig_ssk(·) denotes signing with the signature private key ssk, name is the name of the file F, and || denotes concatenation;
4) For each file block m_i, computes the homomorphic verifier σ_i = (H(m_i) · u^(m_i))^α, obtaining the file block homomorphic verifier set Φ = {σ_i}, 1 ≤ i ≤ n, where n is the number of file blocks and H(·) is the BLS hash function.
Further, the data owner generates an MHT-list data structure (as shown in FIG. 3), in which each leaf node corresponds to a file block, the value of the leaf node corresponding to file block m_i being H(m_i). For subsequent verification of the MHT-list data structure stored at the CSP, the data owner signs all root nodes of the MHT-list with the private key according to formula (1), obtaining the set of root node signatures SIG = {sig_sk(H(R_j))}_(1≤j≤x). To compute the chameleon hash of all linked-list nodes in the MHT-list structure, the data owner selects x random values from Z_p* to form the set RV = {r_j}_(1≤j≤x).
sig_sk(H(R_j)) = (H(R_j))^α    (1)
Finally, the data owner sends {F, t, Φ, SIG, RV} to the CSP and deletes the local copy to reduce the user's storage overhead.
In the integrity verification phase (Integrity Verification), the TPA verifies data integrity by issuing a challenge to the CSP; upon being challenged, the CSP calls GenProof(·) to generate a proof and return it, and the TPA then executes VerifyProof(·) to audit the correctness of the proof. Specifically: 1) the TPA uses spk to verify the signature on the file tag t and, if the verification passes, extracts the random element u from t for subsequent computation; 2) the TPA selects a subset I = {s_1, ..., s_c} of c random elements from the set [1, n], where s_1 ≤ ... ≤ s_c; for each i ∈ I it selects a random element v_i ∈ Z_p, generates the random challenge chal = {(i, v_i)}_(i∈I), and sends it to the CSP; 3) after the CSP receives the challenge, it calculates μ and σ according to formula (2) and generates the integrity proof P as shown in formula (3), returning it to the TPA, where Ω_i is the auxiliary authentication information of the leaf node corresponding to m_i; 4) after the TPA receives the integrity proof P, it first uses {H(m_i), Ω_i}_(i∈I) to calculate the root node of the tree where {m_i}_(i∈I) is located, and verifies the computed root node against the returned root node signature in SIG according to formula (4). If this verification passes, the properties of the bilinear map are further used to verify the returned μ and σ according to formula (5). If this verification also passes, the cloud data is intact; otherwise, the integrity of the cloud data has been compromised.
μ = Σ_(i∈I) v_i · m_i,  σ = Π_(i∈I) σ_i^(v_i)    (2)
P = (μ, σ, SIG, {H(m_i), Ω_i}_(i∈I))    (3)
e(sig_sk(H(R_j)), g) = e(H(R_j), v)    (4)
e(σ, g) = e(Π_(i∈I) H(m_i)^(v_i) · u^μ, v)    (5)
In the Dynamic Update phase (Dynamic Update), to ensure stability, the process is done jointly by the data owner and CSP without the involvement of TPA. Specifically, the dynamic update phase includes both data modification and data insertion.
Data modification refers to replacing specified file blocks; in this case, the logical structure of the file and the corresponding data structure remain unchanged. Assume in the following that n_mod consecutive blocks starting from block m_T are to be modified, i.e., the data owner wants to replace the original file blocks with the new file blocks M'. 1) The data owner first generates the verifier set Φ' corresponding to these new file blocks, then sends the data modification request req, containing the modification operation type, the start position T, M', Φ' and n_mod, to the CSP; 2) the CSP replaces the original file blocks M and the corresponding verifiers according to the received request, generating an updated file F' and verifier set Φ'; 3) to reduce the size and communication overhead of the modification proof, the CSP finds, in the MHT where m_T is located, the smallest subtree containing all leaf nodes to be modified, denoted ST_in; the granularity at which the CSP performs batch data modification is ST_in; 4) the CSP updates the leaf nodes of ST_in, generates the modification proof resp as shown in formula (6), and returns it to the data owner, where Ω_in is the auxiliary authentication information of the root node of ST_in, HM is the set {H(m_i)} of hashes of the leaf nodes of ST_in, and R_j' is the root node of the updated MHT; 5) after the data owner receives the modification proof, it uses Ω_in and HM to generate the initial root node R_j and verifies the computed root node against the returned root node signature, which also verifies Ω_in and HM; 6) if the previous verification passes, the data owner uses the verified Ω_in, HM and M' to calculate the root node of the updated MHT and compares it with R_j' in resp; 7) if this verification passes, the data owner signs the new root node R_j', additionally finds the collision r_j' of r_j so that the structure of the updated MHT-list remains connected, and generates the response message veri = (sig_sk(H(R_j')), r_j') to send to the CSP, where r_j is an element of the set RV.
resp = (Ω_in, HM, sig_sk(H(R_j)), R_j', r_j, ch_j)    (6)
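Step 3) hinges on locating the smallest subtree that covers a contiguous run of leaves. For a perfect binary MHT with a power-of-two number of leaves, this reduces to index arithmetic, sketched below (function name and tree-addressing convention are hypothetical):

```python
def smallest_covering_subtree(first: int, count: int, num_leaves: int):
    """Return (level, index) of the smallest aligned subtree containing
    leaves [first, first + count - 1] in a perfect binary tree with
    num_leaves leaves. Level 0 is the leaves; a subtree at level j spans
    2**j contiguous leaves, and index is its position at that level."""
    assert num_leaves > 0 and num_leaves & (num_leaves - 1) == 0
    last = first + count - 1
    assert 0 <= first <= last < num_leaves
    level = 0
    # Climb until first and last fall inside the same aligned 2**level block.
    while first >> level != last >> level:
        level += 1
    return level, first >> level

# 16 leaves, update leaves 5..9: they straddle the midpoint, so only the
# root (level 4, index 0) covers them.
assert smallest_covering_subtree(5, 5, 16) == (4, 0)
# Leaves 4..6 fit inside the aligned block [4, 8) of size 4.
assert smallest_covering_subtree(4, 3, 16) == (2, 1)
```

The returned subtree is exactly the granularity ST_in at which the CSP recomputes hashes, so the cost of a batch modification scales with the covering subtree rather than with one hash-path per modified block.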
Data insertion refers to inserting new file blocks at a specified position, which may change the logical structure of the file and the corresponding data structure. Assume in the following that n_ins consecutive blocks M* are to be inserted after the T-th block m_T. 1) The data owner first generates the verifier set Φ* corresponding to these new file blocks, then sends the data insertion request req, containing the insertion operation type, the start position T, M*, Φ* and n_ins, to the CSP; 2) the CSP inserts the new file blocks M* and the corresponding verifiers into the specified position according to the received request, generating an updated file F' and verifier set Φ'; 3) because data insertion changes the structure of the MHT-list, the CSP must adjust the structure to record the newly inserted block information. To reduce computational overhead, the CSP finds, in the MHT where m_T is located, the smallest subtree that can accommodate all leaf nodes to be inserted, denoted ST_in. At this point Ω_in and HM are generated and later returned to the data owner as part of the insertion proof, where Ω_in is the auxiliary authentication information of the root node of ST_in and HM is the set {H(m_i)} of hashes of the leaf nodes of ST_in; 4) the CSP adjusts the structure with ST_in as a whole: specifically, it inserts the new leaf nodes and reconstructs ST_in, generating the root node R_j' of the updated MHT. As shown in FIG. 4, two embodiments are illustrated in which different values of L_max lead to different reconstructed subtrees. The CSP generates the insertion proof resp as shown in formula (6) and returns it to the data owner; 5) after the data owner receives the insertion proof, it uses Ω_in and HM to generate the initial root node R_j and verifies the computed root node against the returned root node signature, which also verifies Ω_in and HM; 6) if the previous verification passes, the data owner uses the verified Ω_in, HM and M* to calculate the root node of the updated MHT and compares it with R_j' in resp; 7) if this verification passes, the data owner signs the new root node R_j', additionally finds the collision r_j' of r_j so that the structure of the updated MHT-list remains connected, and generates the response message veri = (sig_sk(H(R_j')), r_j') to send to the CSP.
Because an MHT, even at the agreed maximum height L_max, can accommodate at most 2^(L_max-1) file blocks, when n_ins is large the CSP needs to reconstruct the original MHT into multiple new MHTs to accommodate all newly inserted leaf nodes. To reduce computational overhead, the CSP keeps the number of reconstructed MHTs as small as possible, replaces the pre-update MHT with these new MHTs, and computes their root nodes, as shown in FIG. 5. The remaining steps are similar to those described above and are not repeated here.
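A back-of-the-envelope sketch of the tree count, under the stated assumption that a height-L_max MHT (root counted as level 1) holds at most 2^(L_max-1) leaves; the function name is hypothetical and the exact capacity convention may differ in a given embodiment:

```python
import math

def mhts_needed(total_leaves: int, l_max: int) -> int:
    # Minimum number of MHTs of maximum height l_max required to hold
    # total_leaves leaf nodes, assuming each holds at most 2**(l_max - 1).
    capacity = 2 ** (l_max - 1)
    return math.ceil(total_leaves / capacity)

# e.g., 20 leaves with L_max = 4 (capacity 8 per tree) need 3 trees.
assert mhts_needed(20, 4) == 3
assert mhts_needed(8, 4) == 1
```

Minimizing this count is what keeps the linked list of root nodes, and hence the number of chameleon-hash collisions the data owner must produce, short after a large insertion.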
Furthermore, in another embodiment of the present invention, the related functions of the TPA may be performed by the data owner itself.
The above embodiments are only for illustrating the technical solution of the present invention and not for limiting the same, and those skilled in the art may modify or substitute the technical solution of the present invention without departing from the spirit and scope of the present invention, and the protection scope of the present invention shall be defined by the claims.