CN103218574A - Hash tree-based data dynamic operation verifiability method - Google Patents


Info

Publication number
CN103218574A
Authority
CN
China
Prior art keywords
data
hash tree
file
cdc
data block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2013101325650A
Other languages
Chinese (zh)
Inventor
邢建川
韩帅
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN2013101325650A priority Critical patent/CN103218574A/en
Publication of CN103218574A publication Critical patent/CN103218574A/en
Pending legal-status Critical Current

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a hash tree-based method for verifying dynamic operations on data. The system consists of three parties connected by a communication network: a user (USER), a cloud data center (CDC), and a third-party auditor (TPA). The USER is the party requesting data storage service, wishes to store its own data files in the cloud storage space of the CDC, and may be either an individual or an enterprise user. The CDC responds to the USER's storage requests, stores the USER's data files in its vast data center according to certain rules, and manages and maintains them. The TPA is a reliable third-party auditor commissioned by the USER to check the integrity and consistency of the data files stored in the CDC's data center. The method solves the problem of verifying the integrity and consistency of user data files in a cloud computing environment.

Description

A hash tree-based method for verifying dynamic data operations
Technical field
The invention belongs to the field of computer technology and relates to a hash tree-based method for verifying dynamic operations on data.
Background technology
At present, although cloud computing service providers give users convenient and efficient remote data storage and access over fast and stable network connections, inherent characteristics of cloud computing such as virtualization, large scale, dynamic configuration, and scalability still create many security risks and challenges for mass data storage in the cloud. To improve storage efficiency and space utilization, providers usually split a large data file into many small data blocks before storing it; the geographic location and storage state of each block are unknown to the user, who therefore inevitably doubts the integrity and consistency of his own data files. Guaranteeing the integrity and consistency of user files stored in geographically opaque server clusters is a key measure of data storage service quality and has always been a hard problem facing cloud storage services. Especially after incidents such as the accidental service interruptions of Amazon's Simple Storage Service and Google Docs, users have become deeply distrustful of whether providers might conceal further security incidents in order to save resources and cut costs. Users want a complete mechanism that lets them check the integrity and consistency of their data files without spending excessive computing resources and time.
Related research began long ago and has achieved good results in the efficiency, verifiability, queryability, and recoverability of proposed schemes. At present the two common approaches to verifying data integrity and consistency are private auditing and public auditing, as shown in Fig. 1.
In private auditing, as the name suggests, the user performs the audit work on the data file himself; in public auditing, the audit is entrusted to a trusted third-party auditing organization. Although private auditing is logically simple and therefore efficient, public auditing not only provides the user with a safe and reliable data audit but also, to a great extent, saves the user substantial computing resources and time. In a cloud computing environment, a user is unlikely to have the time and energy to audit his own data files frequently, so handing this laborious task to a trusted third-party auditor equipped with a reliable audit protocol and a complete solution is an excellent choice. The present invention adopts a cloud data storage security architecture with a trusted third-party auditor and, based on the storage and operation characteristics of data files in the cloud, designs a verifiability scheme for dynamic data operations based on the Merkle Hash Tree.
Research on the verifiability of stored data attracted broad attention in industry early on, and the solutions proposed by scholars have achieved results in efficiency, verifiability, queryability, and recoverability. Unfortunately, most of this research is confined to operations on static data files, and many results cannot accommodate frequent dynamic operations on files. The related prior work is summarized below.
In remote data verification, Burns et al. first addressed guaranteeing the verifiability of data stored on untrusted storage media and proposed the model of provable data possession (PDP). Using RSA-based tags, they realized an audit function for outsourced data, but they did not fully consider dynamic storage. Subsequently, addressing security flaws that arise when a static-storage PDP scheme is converted to dynamic storage, Ateniese et al. proposed a new model on top of PDP that supports dynamic data operations; regrettably, it does not support all basic dynamic operations, and in particular the important insertion operation is absent. Wang Cong et al. of the Illinois Institute of Technology then proposed a scheme for distributed environments that can check file correctness and locate possible faulty nodes, but, like the model of Ateniese et al., it also fails to support all basic dynamic operations. In addition, Juels and Kaliski proposed the conceptual model of proof of retrievability (POR): in their work, check codes and random sampling guarantee that data remain retrievable and that integrity is preserved at the data center, corruption being detected by appending random additional information to data blocks. The POR model, however, limits the number of queries and does not support public auditing. Shacham and Waters later designed an improved POR model, but it provides data recovery only for static data. Papamanthou et al. were the first to explore building a fully dynamic PDP model; their improved PDP model possesses complete dynamic operation functionality and successfully avoids extra data tags. Devanbu et al. gave a Merkle-Hash-Tree-based solution for verified data queries, on which today's mainstream Merkle-tree query verification algorithms are based. The prior art also contains a POR-based block verification model built on the Merkle Hash Tree data structure, but it does not consider the effect that the geographic placement of data in a distributed environment has on computational efficiency.
Summary of the invention
To solve the above technical problems, the invention provides a Merkle-Hash-Tree-based verifiability scheme for dynamic data operations, formed by connecting three parties through a communication network: the user (USER), the cloud data center (CDC), and the third-party auditor (TPA). The USER is the party requesting data storage service and wishes to store its own data files in the cloud storage space of the CDC; it may be either an individual or an enterprise user. The CDC is responsible for responding to the user's storage requests, storing the user's data files in its vast data center according to certain rules, and managing and maintaining them. The TPA, as a reliable third-party auditor, is entrusted by the USER to check the integrity and consistency of the data files stored in the CDC's data center. Building on previous research, the scheme exploits the special tree structure of the Merkle Hash Tree to solve the verification of integrity and consistency of user data files in the cloud. It not only supports all basic dynamic operations in the cloud efficiently (including insertion, deletion, and modification of data) but also fully accounts for the effect that the geographic placement of stored data in a distributed environment has on computational efficiency, making a contribution to research on Merkle-Hash-Tree-based verifiability.
The technical scheme is as follows:
A hash tree-based method for verifying dynamic data operations comprises the following steps:
A. File preprocessing
Before file preprocessing, the USER submits a storage request for the data file to the CDC. The CDC authenticates the USER according to its predefined access control rules; only legitimate users that pass the CDC's authentication obtain the right to store files.
After the USER passes authentication, the CDC begins to receive the data file the USER needs to store and preprocesses it:
(1) First, the data file is split into several blocks of equal size: F → (f1, f2, f3, f4, f5, f6, ..., fn).
(2) Then the CDC hashes each block to obtain the hash values H(fi) (1 ≤ i ≤ n) of all blocks.
(3) After hashing, the data file can be written as F' → (f1+H(f1), f2+H(f2), f3+H(f3), ..., fn+H(fn)). The CDC temporarily keeps the hashes of all blocks in preparation for building the file's verification data structure, the Merkle Hash Tree.
(4) To mark each block with a unique storage location, the scheme attaches to each block a 5-byte location tag (LTag) composed of 2 bytes of order information, 1 byte of rack information, and 2 bytes of node information. The order field records the block's sequence number among all blocks; the rack field records the specific rack in the data center on which the block is stored; the node field records the specific node server storing the block. The CDC maintains an LTag list for each data file that records the LTag information of all its blocks.
(5) After tagging is finished, the CDC deposits all blocks in the data center, and the storage location of each block is recorded in its own LTag. Once the LTags of all blocks are recorded, the CDC creates for the data file an LTag list containing the location information of all blocks.
(6) The key data structure, the Merkle Hash Tree, is then constructed:
1) First, the CDC reads the rack value Rack(LTag(fi)) of every block of the file from the LTag list, partitions the blocks by the rack on which they are stored, and places them in the list ListRack[] of the corresponding rack.
2) Then, from the ListRack[] of each rack, the CDC reads the node information in the blocks' LTags and further partitions the blocks of a rack by the node on which they are stored.
3) Next, for the blocks stored on the same node of the same rack, the CDC reads the order fields of their LTags, arranges the blocks in sequence, builds the node Merkle Hash Tree with the block hashes as leaves, and computes its root value (if a node holds only one block, that block's hash is the root of the node's Merkle Hash Tree), denoted NRoot[i, j], where i is the rack number and j the node number.
4) After the node Merkle Hash Trees and their root values have been computed for all nodes storing the file, the CDC sorts the node roots of each rack by node order, builds each rack's Merkle Hash Tree with them as leaves, and computes its root value (if a rack has only one node storing the file, that node's root is the root of the rack's Merkle Hash Tree), denoted RRoot[i], where i is the rack number.
5) Finally, the root values of all rack Merkle Hash Trees are arranged in sequence, the file's Merkle Hash Tree is built with them as leaves, and its root value is computed (if only one rack stores the file, that rack's root is the root of the file's Merkle Hash Tree), denoted FRoot.
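The preprocessing and hierarchical construction above can be sketched in Python. This is a minimal illustration, not the patent's implementation: SHA-1 follows the preferred embodiment, while the big-endian LTag packing, the toy block size, the `placement` map, and the promotion of an unpaired leaf to the next level are all assumptions made for the sketch.

```python
import hashlib
import struct
from collections import defaultdict

BLOCK_SIZE = 4  # toy block size; the patent does not fix one

def sha1(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

def merkle_root(leaves):
    # A single leaf is its own root, matching the one-block-node,
    # one-node-rack and one-rack-file rules of the text; an unpaired
    # leaf is carried up to the next level (an assumption).
    level = list(leaves)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            if i + 1 < len(level):
                nxt.append(sha1(level[i] + level[i + 1]))
            else:
                nxt.append(level[i])
        level = nxt
    return level[0]

def make_ltag(order: int, rack: int, node: int) -> bytes:
    # 5-byte LTag: 2-byte order + 1-byte rack + 2-byte node.
    return struct.pack(">HBH", order, rack, node)

def preprocess(data: bytes, placement):
    # `placement` maps block order -> (rack, node); in a real CDC it
    # would come from the storage scheduler.
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    hashes = [sha1(b) for b in blocks]
    by_node = defaultdict(list)               # (rack, node) -> [(order, hash)]
    for order, h in enumerate(hashes):
        by_node[placement[order]].append((order, h))
    nroot, rack_leaves = {}, defaultdict(list)
    for (rack, node) in sorted(by_node):      # node trees -> NRoot[i, j]
        ordered = [h for _, h in sorted(by_node[(rack, node)])]
        nroot[(rack, node)] = merkle_root(ordered)
        rack_leaves[rack].append(nroot[(rack, node)])
    rroot = {rack: merkle_root(rack_leaves[rack])   # rack trees -> RRoot[i]
             for rack in sorted(rack_leaves)}
    froot = merkle_root([rroot[r] for r in sorted(rroot)])  # file -> FRoot
    return nroot, rroot, froot
```

Because each level of the hierarchy is a Merkle root of the level below, a change to one block only forces recomputation of one node tree, one rack tree, and the file tree, which is the efficiency argument the scheme makes.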
B. File verification
(1) First, the CDC completes the storage of the USER's data file and submits the file's verification information to the TPA. Until the file is deleted by the USER, whenever it changes the CDC is responsible for generating up-to-date verification information for it and updating the TPA in real time.
(2) Then the CDC notifies the USER that storage of the file and generation of the verification information are complete, and informs the USER that it can begin to commission the TPA to verify the integrity and consistency of the file.
(3) The USER may communicate with the TPA immediately or choose another time, commissioning the TPA to perform integrity and consistency verification of the data file.
(4) After receiving the USER's audit request, the TPA performs regular or irregular audit verification of the data file according to the USER's needs and keeps verification logs of all audit operations for the USER to inspect later. If during verification the data file fails to verify, the TPA informs the USER by means such as mail or SMS and requires the CDC to take remedial measures such as replica recovery of the file.
The verification process of the data file refers to the regular or irregular integrity and consistency checks that the USER commissions the TPA to perform. The TPA sends the CDC a request to verify the integrity and consistency of the data file, checks the verification information the CDC returns, and so completes the work entrusted by the USER. The details are as follows:
1) At the initial stage of verification, the TPA generates the verification message checkM(UInfo, FInfo, RInfo(FRoot, RRoot[i], NRoot[i, j])). The message has three parts: first, the user's personal information UInfo, which helps the CDC quickly and accurately locate the owner of the file to be checked; second, the information FInfo of the file to be checked, which identifies the file explicitly; third, the concrete Merkle Hash Tree root information RInfo, composed of the file root value FRoot, the specified rack root value RRoot[i], and the specified node root value NRoot[i, j].
2) After checkM is generated, the TPA sends it to the CDC over the communication network.
3) On receiving the message, the CDC first parses checkM and determines what the TPA wants to check.
4) Then the CDC regenerates and computes the corresponding Merkle Hash Trees and their root values using the generation algorithms introduced in section A.
5) After all verification information has been generated, the CDC produces the reply respondM(UInfo, FInfo, RInfo'(FRoot', RRoot'[i], NRoot'[i, j])). Its UInfo and FInfo are identical to the first two parts of checkM, while RInfo' returns the Merkle root values the TPA needs to check: the recomputed file root FRoot', the specified rack root RRoot'[i], and the specified node root NRoot'[i, j].
6) Finally, on receiving respondM, the TPA compares it with the corresponding Merkle root values it keeps. If they agree, verification succeeds and the data are intact and consistent with the previous version. Otherwise the data are inconsistent with the previous version: the TPA requires the CDC to perform exception handling such as replica recovery of the file and reports the result to the USER.
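The checkM/respondM exchange of steps 1) to 6) can be sketched as follows, reduced to the file-level root FRoot for brevity. The message fields follow the text, but the dictionary encoding and the function names are assumptions of this sketch.

```python
import hashlib

def sha1(b: bytes) -> bytes:
    return hashlib.sha1(b).digest()

def merkle_root(leaf_hashes):
    # Pairwise-hash upward; an unpaired value is carried up (an assumption).
    level = list(leaf_hashes)
    while len(level) > 1:
        level = [sha1(level[i] + level[i + 1]) if i + 1 < len(level)
                 else level[i] for i in range(0, len(level), 2)]
    return level[0]

def tpa_make_checkM(uinfo, finfo):
    # Step 1), file-root-only variant: name the owner and the file.
    return {"UInfo": uinfo, "FInfo": finfo}

def cdc_respond(checkM, stored_blocks):
    # Steps 3)-5): the CDC recomputes the requested root from the
    # blocks it actually holds and echoes UInfo/FInfo unchanged.
    froot = merkle_root([sha1(b) for b in stored_blocks])
    return {"UInfo": checkM["UInfo"], "FInfo": checkM["FInfo"],
            "FRoot": froot}

def tpa_verify(expected_froot, respondM):
    # Step 6): a match means the data are intact and consistent with
    # the previous version; a mismatch triggers replica recovery at
    # the CDC and a report to the USER.
    return respondM["FRoot"] == expected_froot

# Example: audit an intact copy, then a tampered one.
blocks = [b"f1", b"f2", b"f3"]
froot = merkle_root([sha1(b) for b in blocks])
checkM = tpa_make_checkM("user-001", "file-A")
ok = tpa_verify(froot, cdc_respond(checkM, blocks))
bad = tpa_verify(froot, cdc_respond(checkM, [b"f1", b"fX", b"f3"]))
```

The same pattern extends to RRoot[i] and NRoot[i, j]: the TPA simply names a rack or node in checkM and compares the recomputed root for that subtree alone.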
C. Dynamic data operations
C1. Insertion
Insertion is the most basic dynamic operation on a data file and, compared with modification and deletion, also the most complex one in the scheme. For example, to insert a block fi' after the i-th block fi of the file, the scheme proceeds as follows:
(1) The CDC computes the hash H(fi') of the new block fi' and creates the tag information LTag(fi') for the block to be inserted.
(2) The insertion function Insert(fi', H(fi'), LTag(fi)) is generated and executed; the insertion position is located through LTag(fi), and the block is inserted.
(3) The Merkle Hash Tree insertion function MerkleInsert(h(fi), h(fi'), LTag(fi)) is called. It concatenates h(fi) and h(fi') and hashes the concatenated value, and the CDC replaces the hash at the LTag(fi) position of the original node Merkle Hash Tree with this new result. Note that the hash of the concatenation is no longer a leaf of the Merkle Hash Tree but serves as an intermediate node generated from the two leaves h(fi) and h(fi').
(4) The Merkle Hash Tree generation function CreateMerkle() of the node holding fi' is called to generate the new node Merkle Hash Tree and compute its root value NRoot[i, j], where i is the rack and j the node holding the block.
(5) CreateMerkle() of the rack holding fi' is called to generate the new rack Merkle Hash Tree and compute its root value RRoot[i], where i is the rack holding the block.
(6) The file's CreateMerkle() is called again to generate the new file Merkle Hash Tree and compute its root value FRoot.
(7) The CDC reports the update to the TPA, sending the update request UpdateRequest(UInfo, FInfo, RInfo(FRoot, RRoot[i], NRoot[i, j])), where UInfo, FInfo, and RInfo are defined as in section B.
(8) The TPA updates the file's verification information according to the content of the update request, for use in later checks.
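The MerkleInsert step above can be illustrated on a flat list of leaf hashes standing in for the node Merkle Hash Tree; the function names and the list representation are assumptions of this sketch.

```python
import hashlib

def sha1(b: bytes) -> bytes:
    return hashlib.sha1(b).digest()

def merkle_root(level):
    # Pairwise-hash upward; an unpaired value is carried up (an assumption).
    level = list(level)
    while len(level) > 1:
        level = [sha1(level[i] + level[i + 1]) if i + 1 < len(level)
                 else level[i] for i in range(0, len(level), 2)]
    return level[0]

def merkle_insert(node_leaves, pos, new_block_hash):
    # Step (3): h(fi) at `pos` is replaced by h(h(fi) || h(fi')).
    # The result acts as an intermediate node covering both leaves,
    # so the node tree keeps its shape and only the roots above it
    # (NRoot, RRoot, FRoot) must then be recomputed.
    combined = sha1(node_leaves[pos] + new_block_hash)
    return node_leaves[:pos] + [combined] + node_leaves[pos + 1:]
```

Keeping the tree's shape unchanged is what lets the scheme avoid a full rebuild of the file tree after an insertion.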
C2. Modification
Modification is the most frequent dynamic operation in cloud computing applications. For example, to modify the i-th block fi of the file into fi', the scheme proceeds as follows:
(1) The CDC computes the hash H(fi') of the new block fi' and reuses fi's tag information LTag(fi) as the tag LTag(fi') of fi'.
(2) The update function Update(fi', H(fi'), LTag(fi)) is generated and executed; the block to be modified is located through LTag(fi), and the modification is performed.
(3) The Merkle Hash Tree modification function MerkleUpdate(fi', H(fi'), LTag(fi')) is called; the block's position is located through LTag(fi'), and H(fi) is replaced by H(fi').
(4) The Merkle Hash Tree generation function CreateMerkle() of the node holding fi' is called to generate the new node Merkle Hash Tree and compute its root value NRoot[i, j], where i is the rack and j the node holding the block.
(5) CreateMerkle() of the rack holding fi' is called to generate the new rack Merkle Hash Tree and compute its root value RRoot[i], where i is the rack holding the block.
(6) The file's CreateMerkle() is called to generate the new file Merkle Hash Tree and compute its root value FRoot.
(7) The CDC reports the update to the TPA, sending UpdateRequest(UInfo, FInfo, RInfo(FRoot, RRoot[i], NRoot[i, j])), where UInfo, FInfo, and RInfo are defined as in section B.
(8) The TPA updates the file's verification information according to the content of the update request, for use in later checks.
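The MerkleUpdate step can be sketched the same way, on a flat list of leaf hashes standing in for the node Merkle Hash Tree; names and representation are assumptions of this sketch.

```python
import hashlib

def sha1(b: bytes) -> bytes:
    return hashlib.sha1(b).digest()

def merkle_root(level):
    # Pairwise-hash upward; an unpaired value is carried up (an assumption).
    level = list(level)
    while len(level) > 1:
        level = [sha1(level[i] + level[i + 1]) if i + 1 < len(level)
                 else level[i] for i in range(0, len(level), 2)]
    return level[0]

def merkle_update(node_leaves, pos, new_block_hash):
    # Step (3): H(fi) at the located position is replaced by H(fi');
    # the tree's shape is unchanged, and NRoot, RRoot and FRoot are
    # then recomputed by the CreateMerkle() calls of steps (4)-(6).
    leaves = list(node_leaves)
    leaves[pos] = new_block_hash
    return leaves
```

Because only one leaf changes, applying the inverse update restores the original leaf list and hence the original root values.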
C3. Deletion
For example, to delete the block fi+1 that follows the i-th block fi of the file, the scheme proceeds as follows:
(1) The deletion function Delete(fi+1, H(fi+1), LTag(fi+1)) is generated and executed; the block to be deleted is located through LTag(fi+1), and the block is deleted.
(2) The Merkle Hash Tree deletion function MerkleDelete(LTag(fi+1), LTag(fi)) is called: h(fi+1) at the LTag(fi+1) position is deleted, and the intermediate node h(h(fi)+h(fi+1)) generated by hashing h(fi) and h(fi+1) is replaced by the value h(fi). Note that although deletion removes two leaves from the Merkle Hash Tree, its basic logical structure does not change.
(3) The Merkle Hash Tree generation function CreateMerkle() of the node holding fi is called to generate the new node Merkle Hash Tree and compute its root value NRoot[i, j], where i is the rack and j the node holding the block.
(4) CreateMerkle() of the rack holding fi is called to generate the new rack Merkle Hash Tree and compute its root value RRoot[i], where i is the rack holding the block.
(5) The file's CreateMerkle() is called to generate the new file Merkle Hash Tree and compute its root value FRoot.
(6) The CDC reports the update to the TPA, sending UpdateRequest(UInfo, FInfo, RInfo(FRoot, RRoot[i], NRoot[i, j])), where UInfo, FInfo, and RInfo are defined as in section B.
(7) The TPA updates the verification information of the USER's file according to the content of the update request, for use in later checks.
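The MerkleDelete step can be sketched on the same flat-list stand-in for the node Merkle Hash Tree; names and representation are assumptions of this sketch.

```python
import hashlib

def sha1(b: bytes) -> bytes:
    return hashlib.sha1(b).digest()

def merkle_root(level):
    # Pairwise-hash upward; an unpaired value is carried up (an assumption).
    level = list(level)
    while len(level) > 1:
        level = [sha1(level[i] + level[i + 1]) if i + 1 < len(level)
                 else level[i] for i in range(0, len(level), 2)]
    return level[0]

def merkle_delete(node_leaves, pos):
    # Step (2): h(fi+1) at pos+1 is removed; the parent node that
    # combined h(fi) and h(fi+1) collapses back to h(fi) alone, so the
    # tree drops from a leaf pair to a single leaf while keeping its
    # basic logical structure.
    return node_leaves[:pos + 1] + node_leaves[pos + 2:]
```

As with insertion and modification, only the roots on the path above the affected position (NRoot, RRoot, FRoot) then need recomputation.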
Further preferably, in step (2) of section A the CDC hashes each block to obtain the hashes of all blocks; the scheme selects SHA-1 as the hash function.
Further preferably, in step (6) of section A the key data structure, the Merkle Hash Tree, is built. The scheme stipulates that the hashes of all blocks of the file serve as the leaves of the Merkle Hash Tree. Unlike earlier work, which builds the file's Merkle Hash Tree solely from the blocks' order information, the scheme consults the LTag information of every block and constructs, node first, then rack, then file, the node Merkle Hash Trees, the rack Merkle Hash Trees, and the file Merkle Hash Tree in turn, computing their root values.
Further preferably, in step 1) of section B the TPA generates the verification message checkM(UInfo, FInfo, RInfo(FRoot, RRoot[i], NRoot[i, j])); the scheme allows the TPA to choose any one, two, or all three of the Merkle root values to check.
Further preferably, regarding insertion in C1: the scheme defines insertion as inserting a new block after a given block of the file and stipulates that the new block is stored on the same server node as the block before it, so insertion does not change the logical structure of the single server node's Merkle Hash Tree.
Further preferably, regarding modification in C2: the scheme defines the basic modification operation as replacing the block to be modified with a new block and stipulates that the modified block is stored on the same node as before, so modification does not change the logical structure of the single server node's Merkle Hash Tree.
Further preferably, regarding deletion in C3: the scheme defines deletion as deleting the block that follows a given block of the file. If the deleted block is the only block on its node, step 2 of C3 operates directly on the rack Merkle Hash Tree, steps 3 and 4 are skipped, and execution resumes at step 5.
Beneficial effects of the invention:
The invention rests on certain security assumptions (the communication channels are reliable, the TPA is trustworthy, the CDC is loyal, etc.). The USER can therefore confidently delegate the verification and auditing of its data files to the TPA, and need not maintain or evaluate the verification information itself. The scheme makes full use of the CDC's enormous computing resources and storage space under the cloud computing environment, exploiting to the greatest extent the scale and virtualization advantages of cloud computing and saving substantial user resources. Compared with earlier research, the scheme supports all basic dynamic operations in the cloud (insertion, deletion, modification, etc.). For integrity and consistency verification, the scheme uses the root values of the hierarchical Merkle Hash Tree as verification information, organically combining the strengths of this data structure for integrity and consistency checking with the distributed character of data storage in the cloud. In building the trees and computing the root values, the scheme cleverly exploits the LTag carrying each block's storage location: it effectively reduces the large amount of time spent on internal CDC communication during root computation, and after the USER performs a dynamic operation on a file the system need not rebuild the file's Merkle Hash Tree from the hashes of all blocks, which markedly accelerates the regeneration of verification information after dynamic operations. Moreover, the layered design of the LTag and the Merkle Hash Tree allows the TPA to audit a single storage node or rack at the CDC. Compared with schemes that do not build the Merkle Hash Tree in layers, this scheme can verify a particular storage node or rack through the checkM message and, when an inconsistency appears during verification, locate the faulty node quickly and accurately. In tree reconstruction, since the file's Merkle Hash Tree need not be rebuilt from all blocks of the file, the scheme also has a clear advantage in time. Compared with previous research, the scheme of the invention therefore has a definite efficiency advantage.
Brief description of the drawings
Fig. 1 shows the types of audit of cloud data files in the background art;
Fig. 2 is the flow chart of USER authentication by the CDC;
Fig. 3 is a schematic diagram of file blocking;
Fig. 4 is a schematic diagram of the LTag format;
Fig. 5 is a schematic diagram of the structure of the Merkle Hash Tree;
Fig. 6 is the flow chart of partitioning blocks by rack;
Fig. 7 is the flow chart of partitioning blocks by node;
Fig. 8 is the flow chart of building the node Merkle Hash Tree and computing its root value;
Fig. 9 is the flow chart of building the rack Merkle Hash Tree and computing its root value;
Fig. 10 is the flow chart of building the file Merkle Hash Tree and computing its root value;
Fig. 11 describes the basic tasks of USER, CDC, and TPA;
Fig. 12 shows the verification process of the data file;
Fig. 13 illustrates the insertion operation on the Merkle Hash Tree;
Fig. 14 illustrates the modification operation on the Merkle Hash Tree;
Fig. 15 illustrates the deletion operation on the Merkle Hash Tree;
Fig. 16 compares reconstruction times for the deletion operation;
Fig. 17 compares reconstruction times for the modification operation;
Fig. 18 compares reconstruction times for the insertion operation.
Embodiment
The technical scheme of the present invention is described in further detail below in conjunction with the drawings and specific embodiments.
Security assumptions
The scheme assumes that none of the communication channels (between USER and CDC, between CDC and TPA, and between TPA and USER) suffers large-scale packet loss. The TPA in the scheme is an impartial, fully trusted third-party auditing body that faithfully completes all tasks entrusted by USER. The security assumption about the CDC differs slightly from previous research: the CDC is no longer completely untrusted, but is honest-but-curious, faithfully completing all tasks. The CDC guarantees the correctness, honesty and non-repudiation of all parameter computations it undertakes, and unconditionally responds to audit requests for any user data file at any time. After preprocessing, the data blocks of the user data files in the scheme cannot interfere with one another. The focus of the scheme is how to support integrity and consistency verification of user data files stored in the CDC, together with dynamic operations on the data.
File preprocessing
Before preprocessing begins, USER submits a storage request for the data file to the CDC, which authenticates USER according to its predefined access control rules. Only a legitimate user who passes CDC authentication obtains the right to store files, as shown in Fig. 2.
Once USER passes authentication, the CDC begins to receive the data file USER needs to store and preprocesses it as follows:
First, the data file is divided into several data blocks of identical size, F → (f_1, f_2, f_3, f_4, f_5, f_6, ..., f_n), as shown in Fig. 3.
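As a concrete illustration of the blocking step, the following sketch splits a byte string into fixed-size blocks. The function name and the sizes used are illustrative, not taken from the patent:

```python
def split_file(data: bytes, block_size: int) -> list:
    """Divide a file's bytes into equal-sized blocks; the last block may be shorter."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

# 32 bytes with an 8-byte block size yields 4 blocks f_1..f_4.
blocks = split_file(b"abcdefgh" * 4, 8)
```

In the scheme the block size is fixed in advance (0.5 MB in the simulation below), so apart from the final block all blocks come out equal in size.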
Next, the CDC performs a hash operation on each data block to obtain the hash values H(f_i) (1 ≤ i ≤ n) of all data blocks. The scheme selects SHA-1 as the hash function for the blocks. After hashing, the data file can be written as F′ → (f_1 + H(f_1), f_2 + H(f_2), f_3 + H(f_3), ..., f_n + H(f_n)). The CDC temporarily keeps the hash values of all data blocks in preparation for constructing the file's verification data structure, the Merkle Hash Tree.
To give each data block a unique location mark, the scheme attaches a 5-byte location tag (LTag) to every data block, consisting of 2 bytes of order information, 1 byte of rack information and 2 bytes of node information. The order information records the sequence number of the data block among all blocks; the rack information records the specific rack in the data center on which the block is stored; and the node information identifies the specific node server storing the block. The CDC maintains an LTag list for each data file, recording the LTag information of all its data blocks. The concrete format of the label is shown in Fig. 4.
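The 5-byte layout can be sketched as follows. The big-endian encoding and the function names are assumptions for illustration; only the field widths (2 + 1 + 2 bytes) come from the format in Fig. 4:

```python
import struct

# Hypothetical byte layout for the 5-byte LTag: 2-byte block order,
# 1-byte rack number, 2-byte node number (big-endian).
def pack_ltag(order: int, rack: int, node: int) -> bytes:
    return struct.pack(">HBH", order, rack, node)

def unpack_ltag(tag: bytes) -> tuple:
    return struct.unpack(">HBH", tag)

tag = pack_ltag(1234, 7, 42)
```

With these widths a file can have up to 65536 blocks spread over 256 racks of 65536 nodes each, which matches the example sizes used later in this section.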
The LTag label plays a crucial role in the subsequent operations of the scheme. Speed-first block placement strategies are currently increasingly favored in the field: the speed-first principle lets data blocks be stored quickly on idle server nodes inside the data center, effectively avoiding queuing and waiting during storage. For example, when a data center with 10 racks of 100 nodes each stores a file of 10000 data blocks, the blocks are not spread evenly at 10 per node; instead, the actual location of each block depends on the state of the server nodes in each rack at storage time. Adding LTag labels makes the concrete node location of every data block explicit, providing essential position information for the construction of the Merkle Hash Tree, the calculation of root node values and the localization of faulty nodes described below, and thereby improving the efficiency of the scheme.
After the labels have been added, the CDC stores all data blocks in the data center and records each block's storage location in its own LTag label. When the LTag records of all data blocks are complete, the CDC creates for the data file an LTag list containing the location information of all its blocks.
Subsequently, the key data structure, the Merkle Hash Tree, is constructed. The scheme stipulates that the hash values of all data blocks of the file serve as the leaf nodes of the Merkle Hash Tree. Unlike previous work, which builds the file Merkle Hash Tree simply by arranging the blocks in sequence order, the scheme consults the LTag information of all data blocks and constructs, first by node, then by rack, then for the whole file, the node Merkle Hash Trees, the rack Merkle Hash Trees and finally the file Merkle Hash Tree of the data file, computing the root node value of each. The construction algorithm is illustrated in Fig. 5:
First, the CDC takes the rack values Rack(LTag(f_i)) of all the file's data blocks from the LTag list, partitions the blocks by the rack on which they are stored, and places them in the ListRack[] list of the corresponding rack. Then, for each rack in turn, the CDC takes the node information from the LTag labels of all blocks in ListRack[] and partitions the blocks of the same rack again by node. Next, for the blocks stored on the same node of the same rack, the CDC takes the order information from their LTag labels, arranges them in sequence, constructs a node Merkle Hash Tree with the block hash values as leaves, and computes its root node value (if a node holds only one data block, that block's hash value is itself the root node value of the node Merkle Hash Tree), denoted NRoot[i, j] (where i is the rack number and j is the node number). After the node Merkle Hash Trees of all storage nodes and their root node values have been computed, the CDC sorts the root node values of all node Merkle Hash Trees within the same rack by node order, constructs each rack's Merkle Hash Tree with them as leaves, and computes its root node value (if only one node of a rack stores file data, the root node value of that node's Merkle Hash Tree is itself the root node value of the rack Merkle Hash Tree), denoted RRoot[i] (where i is the rack number). Finally, the root node values of all rack Merkle Hash Trees are arranged in order and used as leaves to construct the file's Merkle Hash Tree, whose root node value is computed (if only one rack stores file data, the root node value of that rack's Merkle Hash Tree is itself the root node value of the file Merkle Hash Tree), denoted FRoot.
Each step of the Merkle Hash Tree construction algorithm in the scheme is described below with code and flowcharts.
In the rack-division stage, the scheme stipulates that all data blocks composing the data file are traversed and classified according to the rack information of each block, separating out the blocks that belong to different racks, as shown in Fig. 6.
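The rack and node division passes can be sketched as follows. Here the LTag is represented as an (order, rack, node) tuple and all names are illustrative, not taken from the patent's listings:

```python
from collections import defaultdict

def partition_blocks(tags):
    """Group block tags first by rack, then by node, preserving block order."""
    racks = defaultdict(lambda: defaultdict(list))
    for order, rack, node in sorted(tags):  # sorted() orders by block sequence number
        racks[rack][node].append(order)
    return racks

# Four blocks spread over two racks; rack 0 uses two nodes.
tags = [(0, 0, 0), (1, 0, 1), (2, 1, 0), (3, 0, 0)]
grouped = partition_blocks(tags)
```

The result maps each rack to its nodes and each node to the ordered list of block numbers it holds, which is exactly the grouping the per-node tree construction below starts from.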
After the rack division is finished, the scheme performs node division on the blocks belonging to each rack: the CDC traverses all data blocks of the same rack and partitions them again according to the node on which they reside, as shown in Fig. 7.
Once the division of the data blocks is complete, the key data structure of the scheme, the Merkle Hash Tree, can be constructed. The CDC first builds the Merkle Hash Trees of all nodes: it arranges the hash values of the blocks belonging to the same node in order and then calls the tree-building function. After the node Merkle Hash Trees are built, the CDC also computes the root node value of each, as shown in Fig. 8.
The rack Merkle Hash Tree is then built using the root node values of the corresponding node Merkle Hash Trees as its leaves. The CDC first arranges the node Merkle Hash Tree root values belonging to the same rack in order, then calls the tree-building function to construct the rack Merkle Hash Tree. After all rack Merkle Hash Trees are built, the CDC computes their root node values, as shown in Fig. 9.
The construction of the file Merkle Hash Tree is similar to that of the rack Merkle Hash Tree, except that the CDC uses the root node values of all the rack Merkle Hash Trees. The CDC arranges these root values in order, calls the tree-building function to build the file's Merkle Hash Tree, and finally computes the file Merkle Hash Tree root node value of the data file, as shown in Fig. 10.
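The three-level construction can be sketched as below. SHA-1 is the hash the scheme specifies, and a single leaf is its own root, as the text stipulates; the handling of an odd leaf count (promoting the unpaired value) is an assumption, since the patent does not fix a pairing rule:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()  # 160-bit digests, as in the scheme

def merkle_root(leaves):
    """Root of a Merkle tree over an ordered list of leaves."""
    level = list(leaves)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            if i + 1 < len(level):
                nxt.append(h(level[i] + level[i + 1]))
            else:
                nxt.append(level[i])  # odd leftover promoted unchanged (assumed rule)
        level = nxt
    return level[0]

def file_root(racks):
    """racks: ordered racks -> ordered nodes -> ordered block hashes (leaf level).
    Builds node roots, then rack roots, then the file root, as in Figs. 8-10."""
    rack_roots = [merkle_root([merkle_root(node) for node in nodes])
                  for nodes in racks]
    return merkle_root(rack_roots)

froot = file_root([[[h(b"f1"), h(b"f2")], [h(b"f3")]],  # rack 0: two nodes
                   [[h(b"f4")]]])                       # rack 1: one node
```

Note how the single-block node and the single-node rack collapse to their own hashes, matching the special cases named in the construction description above.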
Verification of the data file
During file preprocessing the scheme constructs the node, rack and file Merkle Hash Trees and computes their corresponding root node values; all of these root values become the main verification information of the scheme. The scheme stipulates that all verification information of a data file is kept in the CDC and also backed up at the TPA. The CDC is responsible for updating the latest verification information of the data file with the TPA in real time over the communication network, so that the TPA can carry out the integrity and consistency checks entrusted by USER. Specifically, the TPA keeps up to date the latest LTag list of the data file, all node Merkle Hash Tree root values of the stored file, all rack Merkle Hash Tree root values of the stored file, and the file Merkle Hash Tree root value of the stored file. In addition, the TPA knows the generation rules and methods of all verification information of the data file.
The interoperation of USER, CDC and TPA in the scheme is described below, as shown in Fig. 11:
First, the CDC completes the storage of USER's data file and submits the relevant verification information of the stored file to the TPA. Until the file is deleted by USER, whenever it changes the CDC is responsible for generating the latest verification information for it and updating the TPA in real time. The CDC then notifies USER that the storage of the data file and the generation of its verification information are complete, and that USER may begin entrusting the TPA with integrity and consistency verification of the file. USER can contact the TPA immediately or at any later time to entrust this verification. After receiving USER's audit request, the TPA audits the data file periodically or on demand according to USER's requirements, and keeps verification logs of all audit operations for later inspection by USER. If verification of the data file fails during this process, the TPA informs USER by mail, SMS or other means, and at the same time requires the CDC to take remedial measures such as restoring the file from a replica.
The verification process of a data file refers to the periodic or occasional integrity and consistency audits that USER entrusts to the TPA. The TPA issues an integrity and consistency verification request to the CDC, examines the verification information the CDC returns, and thereby completes the verification work entrusted by USER, as shown in Fig. 12. The process is as follows:
At the start of verification, the TPA generates the verification message checkM(UInfo, FInfo, RInfo(FRoot, RRoot[i], NRoot[i, j])). The checkM message consists of three parts: first, the user information UInfo, which helps the CDC quickly and accurately locate the owner of the file to be checked; second, the file information FInfo, which identifies the data file to be checked; and third, the Merkle Hash Tree root information RInfo, composed of the file Merkle Hash Tree root value FRoot, a designated rack Merkle Hash Tree root value RRoot[i] and a designated node Merkle Hash Tree root value NRoot[i, j]. The scheme allows the TPA to audit any one, two or all three of these root values. After checkM is generated, the TPA sends it to the CDC over the communication network. On receiving the verification message, the CDC first parses checkM to determine what the TPA wants to audit. The CDC then uses the relevant Merkle Hash Tree generation algorithms to regenerate the corresponding Merkle Hash Trees and compute their root values. When all verification information has been generated, the CDC produces the verification response respondM(UInfo, FInfo, RInfo′(FRoot′, RRoot′[i], NRoot′[i, j])). Here UInfo and FInfo are identical to the first two parts of checkM, while RInfo′ returns the Merkle Hash Tree root values the TPA asked to audit: the file root value FRoot′, the designated rack root value RRoot′[i] and the designated node root value NRoot′[i, j]. Finally, on receiving the respondM response, the TPA compares the returned values with the corresponding root values it holds. If they agree, the audit passes and the data is complete and consistent with the previous version. Otherwise the data is inconsistent with the previous version; the TPA requires the CDC to perform exception handling such as restoring the file from a replica, and reports the audit result to USER.
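A minimal sketch of the TPA-side comparison at the end of this exchange. The message fields are reduced here to a dict of root values, and all names are illustrative assumptions:

```python
def audit_roots(tpa_roots: dict, respond_roots: dict) -> bool:
    """Compare the root values returned in respondM against the TPA's own copies.
    Only the roots the TPA chose to audit in checkM appear in respond_roots."""
    return all(tpa_roots.get(key) == value for key, value in respond_roots.items())

# The TPA holds the file root and one rack root; keys mimic FRoot and RRoot[3].
tpa_roots = {"FRoot": b"\x01\x02", ("RRoot", 3): b"\x03\x04"}
passed = audit_roots(tpa_roots, {"FRoot": b"\x01\x02"})
failed = audit_roots(tpa_roots, {("RRoot", 3): b"\xff\xff"})
```

Because the TPA may audit any subset of the three root levels, the comparison iterates only over the keys the CDC was asked to return; a single mismatch fails the audit.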
Dynamic operations on data
In a cloud computing environment, the three basic dynamic operations most frequently performed on data files are the insertion of data, the modification of data and the deletion of data. In this scheme, after any of these three operations the CDC must regenerate all verification information related to the data file. The following sections describe in detail the three reconstruction algorithms of the Merkle Hash Tree data structure after a dynamic operation on a data file.
The insertion operation of data
Insertion is the most fundamental dynamic operation on a data file, and also the most complex one in the scheme compared with modification and deletion. The scheme defines insertion as inserting a new data block after a given data block of the data file, and stipulates that the newly inserted block is stored on the same server node as the block preceding it. Insertion therefore does not change the logical structure of a single server node's Merkle Hash Tree in the scheme. For example, to insert a data block f_i′ after the i-th data block f_i of the file, the scheme proceeds as follows:
1. The CDC computes the hash value H(f_i′) of the new data block f_i′, and creates the label LTag(f_i′) for the block f_i′ to be inserted.
2. The CDC generates and runs the insertion function Insert(f_i′, H(f_i′), LTag(f_i)), which locates the insertion position through LTag(f_i) and performs the insertion of the data block.
3. The CDC calls the Merkle Hash Tree insertion function MerkleInsert(h(f_i), h(f_i′), LTag(f_i)), which concatenates h(f_i) and h(f_i′) and hashes the concatenated value again. The CDC then uses the new hash result to replace the value at the position LTag(f_i) in the original node Merkle Hash Tree. Note that the hash of the concatenation is no longer a leaf of the Merkle Hash Tree, but an internal node generated from the two leaves h(f_i) and h(f_i′). Fig. 13 depicts this change to the node Merkle Hash Tree in detail.
4. The CDC calls the Merkle Hash Tree generation function CreateMerkle() for the node where f_i′ resides, generates the new node Merkle Hash Tree, and computes its root node value NRoot[i, j], where i denotes the rack and j the node on which the block resides.
5. The CDC calls CreateMerkle() for the rack where f_i′ resides, generates the new rack Merkle Hash Tree, and computes its root node value RRoot[i], where i denotes the rack on which the block resides.
6. The CDC calls the file Merkle Hash Tree generation function CreateMerkle() again, generates the new file Merkle Hash Tree, and computes its root node value FRoot.
7. The CDC reports the update to the TPA, sending the update request UpdateRequest(UInfo, FInfo, RInfo(FRoot, RRoot[i], NRoot[i, j])).
8. The TPA updates the verification information of the data file according to the content of the update request, for use in later audits.
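Step 3 of the insertion can be sketched as follows, with the node tree reduced to its ordered leaf list; the function names are illustrative, not the patent's:

```python
import hashlib

def sha1(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

def merkle_insert(node_leaves: list, i: int, new_block: bytes) -> bytes:
    """Insert h(f_i') directly after leaf i; return the internal node
    h(h(f_i) + h(f_i')) that takes over h(f_i)'s old position in the tree."""
    new_leaf = sha1(new_block)
    node_leaves.insert(i + 1, new_leaf)
    return sha1(node_leaves[i] + new_leaf)

leaves = [sha1(b"f1"), sha1(b"f2")]
internal = merkle_insert(leaves, 0, b"f1'")
```

Only the affected node's tree (and then its rack and file trees) is rebuilt from this leaf list in steps 4 to 6; the leaves of every other node are untouched.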
The modification operation of data
Modification is the most frequent dynamic operation in cloud computing applications. The scheme defines the basic modification operation as replacing the data block to be modified with a new data block, and stipulates that the modified block is stored on the same node as before the modification. Modification therefore does not change the logical structure of a single server node's Merkle Hash Tree in the scheme. For example, to modify the i-th data block f_i of the file, which becomes f_i′ after modification, the scheme proceeds as follows:
1. The CDC computes the hash value H(f_i′) of the new block f_i′, and uses f_i's label LTag(f_i) as the label LTag(f_i′) of f_i′.
2. The CDC generates and runs the update function Update(f_i′, H(f_i′), LTag(f_i)), which locates the block to be modified through LTag(f_i) and performs the modification of the data block.
3. The CDC calls the Merkle Hash Tree modification function MerkleUpdate(f_i′, H(f_i′), LTag(f_i′)), which locates the position of the block to be modified through LTag(f_i′) and replaces H(f_i) with H(f_i′). Fig. 14 depicts this change to the node Merkle Hash Tree in detail.
4. The CDC calls the Merkle Hash Tree generation function CreateMerkle() for the node where f_i′ resides, generates the new node Merkle Hash Tree, and computes its root node value NRoot[i, j], where i denotes the rack and j the node on which the block resides.
5. The CDC calls CreateMerkle() for the rack where f_i′ resides, generates the new rack Merkle Hash Tree, and computes its corresponding root node value RRoot[i], where i denotes the rack on which the block resides.
6. The CDC calls the file Merkle Hash Tree generation function CreateMerkle(), generates the new file Merkle Hash Tree, and computes its root node value FRoot.
7. The CDC reports the update to the TPA, sending the update request UpdateRequest(UInfo, FInfo, RInfo(FRoot, RRoot[i], NRoot[i, j])), where UInfo, FInfo and RInfo are defined as in the verification process above.
8. The TPA updates the verification information of the data file according to the content of the update request, for use in later audits.
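Step 3 of the modification, sketched at the leaf level with illustrative names:

```python
import hashlib

def sha1(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

def merkle_update(node_leaves: list, i: int, new_block: bytes) -> None:
    """Replace H(f_i) with H(f_i') at the same leaf position; because the
    modified block keeps its node, the tree's shape is unchanged."""
    node_leaves[i] = sha1(new_block)

leaves = [sha1(b"f1"), sha1(b"f2")]
merkle_update(leaves, 1, b"f2'")
```

Modification is the cheapest of the three operations structurally: only one leaf changes, after which the node, rack and file roots are recomputed along a single path.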
The deletion operation of data
The scheme defines the deletion operation as deleting the data block that follows a given data block of the file. For example, to delete the block f_{i+1} that follows the i-th data block f_i of the file, the scheme proceeds as follows:
1. The CDC generates and runs the deletion function Delete(f_{i+1}, H(f_{i+1}), LTag(f_{i+1})), which locates the block to be deleted through LTag(f_{i+1}) and performs the deletion of the data block.
2. The CDC calls the Merkle Hash Tree deletion function MerkleDelete(LTag(f_{i+1}), LTag(f_i)), which removes h(f_{i+1}) at the position LTag(f_{i+1}) and replaces the internal Merkle Hash Tree node h(h(f_i) + h(f_{i+1})), generated from h(f_i) and h(f_{i+1}), with the value h(f_i). Note that although deletion removes two leaf nodes from the Merkle Hash Tree, its basic logical structure does not change. Fig. 15 depicts this change to the node Merkle Hash Tree in detail.
3. The CDC calls the Merkle Hash Tree generation function CreateMerkle() for the node where f_i resides, generates the new node Merkle Hash Tree, and computes its root node value NRoot[i, j], where i denotes the rack and j the node on which the block resides.
4. The CDC calls CreateMerkle() for the rack where f_i resides, generates the new rack Merkle Hash Tree, and computes its corresponding root node value RRoot[i], where i denotes the rack on which the block resides.
5. The CDC calls the file Merkle Hash Tree generation function CreateMerkle(), generates the new file Merkle Hash Tree, and computes its root node value FRoot.
6. The CDC reports the update to the TPA, sending the update request UpdateRequest(UInfo, FInfo, RInfo(FRoot, RRoot[i], NRoot[i, j])), where UInfo, FInfo and RInfo are defined as in the verification process above.
7. The TPA updates the verification information of USER's data file according to the content of the update request, for use in later audits.
In addition, the scheme stipulates that if the deleted block is the only data block on its node, step 2 operates directly on the rack Merkle Hash Tree, steps 3 and 4 are skipped, and execution resumes from step 5.
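Step 2 of the deletion, sketched at the leaf level: removing h(f_{i+1}) leaves h(f_i) to stand in for the former internal node h(h(f_i) + h(f_{i+1})), which at the leaf-list level is simply a pop. Names are illustrative:

```python
import hashlib

def sha1(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

def merkle_delete(node_leaves: list, i: int) -> None:
    """Remove leaf i+1; leaf i takes over the position of their former parent."""
    node_leaves.pop(i + 1)

leaves = [sha1(b"f1"), sha1(b"f2"), sha1(b"f3")]
merkle_delete(leaves, 0)
```

Although the tree loses two nodes (the deleted leaf and the parent it shared with h(f_i)), the rest of the structure and the subsequent node/rack/file rebuild are the same as for insertion.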
Analysis of the scheme
Because the scheme rests on certain security assumptions (reliable communication channels, a trustworthy TPA, an honest CDC, and so on), USER can confidently hand the verification and audit of its data files over to the TPA, and no longer needs to maintain and check the verification information itself. The scheme makes full use of the enormous computing resources and storage space of the CDC in a cloud computing environment, exploits the large-scale, virtualized advantages of cloud computing to the greatest extent, and saves considerable user resources. Compared with earlier research results, the scheme supports all of the basic dynamic data operations in a cloud computing environment (insertion, deletion, modification, and so on). For the integrity and consistency verification of data, it uses the root node values of the layered Merkle Hash Tree as verification information, organically combining the verification strengths of this data structure with the distributed character of data storage in a cloud computing environment. For the construction of the Merkle Hash Tree and the calculation of its root node value, it makes clever use of the LTag label recording each block's storage location, which both cuts the time spent on internal CDC communication when computing root values and means that, after USER performs a dynamic operation on a data file, the file Merkle Hash Tree and its root need not be recomputed from the hashes of all data blocks, markedly speeding up the regeneration of verification information after dynamic operations. Furthermore, the LTag label and the layered design of the Merkle Hash Tree allow the TPA to audit a single server node or rack that stores data blocks in the CDC. Unlike schemes that do not layer the Merkle Hash Tree, this scheme can verify a specific storage node or rack through the checkM verification message and can locate a faulty node quickly and accurately when an inconsistency appears during verification. At the same time, since Merkle Hash Tree reconstruction does not require all data blocks of the file, the scheme also holds a clear advantage in time. Compared with earlier research results, the scheme of the present invention therefore offers definite efficiency advantages.
Table 1 compares the scheme described herein with previous research in terms of support for dynamic operations, support for public auditing, the time complexity of data integrity verification, and support for locating the server nodes holding inconsistent data blocks.
Table 1: Comparison of schemes
Among the prior-art entries in Table 1, scheme 2 supports only a limited number of integrity verifications, does not support the insertion of data, and its literature gives no clear design of the implementation process. As the comparison in the table shows, the present scheme is no weaker than previous research in its support for dynamic operations, its support for public auditing, and the time complexity of data integrity verification, and it has an advantage no other result offers in locating inconsistent server nodes.
Let the hash time of a single data block be T_h, the construction time of a Merkle Hash Tree with LeafA leaf nodes be (LeafA - 1) T_m, the unit communication time between nodes within the same rack be T_n, and the unit communication time between racks be T_r. The time overheads of constructing the file Merkle Hash Tree and of the insertion, modification and deletion operations on the data file can then be formulated as follows.
The construction time T_CreateMHT of the file Merkle Hash Tree during file preprocessing is computed as shown in formula (1):
T_{CreateMHT} = \sum_{i=1}^{RackA} \sum_{j=1}^{RackNA_i} T_m \times (Rack_iN_jBA - 1) + \sum_{i=1}^{RackA} T_m \times (RackNA_i - 1) + T_m \times (RackA - 1) + \sum_{i=1}^{RackA} T_n \times RackNA_i + T_r \times RackA + T_h \times FileBA \qquad (1)
where RackA is the total number of racks over which the data blocks are distributed, RackNA_i is the number of nodes in rack i, Rack_iN_jBA is the number of data blocks on node j of rack i, and FileBA is the total number of data blocks of the file.
The time overhead T_Insert of the insertion operation on the data file is computed as shown in formula (2):

T_{Insert} = T_m \times (Rack_iN_jBA - 1) + T_m \times (RackNA_i - 1) + T_m \times (RackA - 1) + T_n \times RackNA_i + T_r \times RackA + T_h \qquad (2)
The time overhead T_Delete of the deletion operation on the data file is computed as shown in formula (3):

T_{Delete} = T_m \times (Rack_iN_jBA - 1) + T_m \times (RackNA_i - 1) + T_m \times (RackA - 1) + T_n \times RackNA_i + T_r \times RackA \qquad (3)
The time overhead T_Update of the modification operation on the data file is computed as shown in formula (4):

T_{Update} = T_m \times (Rack_iN_jBA - 1) + T_m \times (RackNA_i - 1) + T_m \times (RackA - 1) + T_n \times RackNA_i + T_r \times RackA + T_h \qquad (4)
In addition, the simulation code for the scheme was compiled against the OpenSSL 0.9.81 library. The experimental environment is Ubuntu 10.04 running in a VMware Workstation 6.0 virtual machine hosted on Windows XP, with a Pentium Dual-Core E5300 processor and 0.5 GB of allocated memory. The file size is 1 GB and the block size is 0.5 MB.
The simulation compares, for different numbers of nodes over which the data blocks are distributed, the reconstruction time of the layered Merkle Hash Tree described herein against that of an unlayered Merkle Hash Tree after the data file undergoes deletion, modification and insertion operations respectively.
Fig. 16 shows the comparison of Merkle Hash Tree reconstruction times after a deletion operation on the data file.
Fig. 17 shows the comparison of Merkle Hash Tree reconstruction times after a modification operation on the data file.
Fig. 18 shows the comparison of Merkle Hash Tree reconstruction times after an insertion operation on the data file.
For making contrast effect more obvious, all data blocks of experimental hypothesis data file on average are stored among the different nodes of same frame.Therefore, when number of nodes is 1, with different levels Merkle Hash Tree structure also can regard as all data blocks of data file are carried out the file Merkle Hash Tree that the Hash operation makes up data file, promptly with not carry out with different levels Merkle HashTree structure identical.By above simulation result as can be known, no matter be that the data file has been carried out deletion action, retouching operation or inserted operation, with different levels Merkle Hash Tree structure is obviously saved time than the Merkle Hash Tree structure of not layering.And in certain number of nodes interval range, the structure time of Merkle Hash Tree also can reduce along with increasing of number of nodes by different level.Therefore, with different levels Merkle Hash Tree structural scheme that this paper adopted is compared with previous scheme on the expense of time has remarkable advantages.
In terms of storage overhead, the scheme adds a 5-byte LTag label to each data block at the initial storage stage and generates an LTag label information list for every data file; the storage space occupied by the labels and the list is determined by the number of blocks in the file. The verification information of the scheme consists of three parts: the node Merkle Hash Tree root values, the rack Merkle Hash Tree root values, and the file Merkle Hash Tree root value. Because the scheme uses SHA-1 as the hash function for block hashing, each Merkle Hash Tree root value is 160 bits, i.e., 20 bytes. The space occupied by a file's verification information is determined by the number of nodes and racks across which its blocks are distributed. In addition, during the initialization phase the CDC temporarily retains the hash values of all data blocks; during Merkle Hash Tree reconstruction after a dynamic operation, it temporarily retains the hash values of all blocks on the server node holding the operated block; and during generation of the respondM verification response, it temporarily retains the hash values of all blocks of the node, rack, or file to be verified. All temporarily stored intermediate records are destroyed once the Merkle Hash Tree reconstruction and verification-information generation are complete.
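As a rough illustration of this accounting (a sketch, not part of the patent text; the function name and the chosen block/node/rack counts are assumptions), the per-file metadata sizes stated above can be tallied as follows:

```python
# Illustrative tally of the scheme's verification metadata, using the sizes
# stated in the text: a 5-byte LTag per data block, and a 20-byte SHA-1
# root value per node tree, per rack tree, plus one file-tree root.
LTAG_BYTES = 5
ROOT_BYTES = 20  # SHA-1 digest is 160 bits

def metadata_overhead(num_blocks: int, num_nodes: int, num_racks: int) -> int:
    ltag_total = num_blocks * LTAG_BYTES
    root_total = (num_nodes + num_racks + 1) * ROOT_BYTES
    return ltag_total + root_total

# 1 GB file split into 0.5 MB blocks -> 2048 blocks (the paper's settings);
# 8 nodes in 2 racks is an assumed example distribution.
print(metadata_overhead(2048, num_nodes=8, num_racks=2))  # 10240 + 220 = 10460 bytes
```

The label overhead grows linearly with the number of blocks, while the root-value overhead grows only with the number of nodes and racks, matching the text's claim that block count dominates.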
The above is only a preferred embodiment of the invention. Any obvious simple change to, or equivalent substitution of, the technical solution that a person skilled in the art can derive within the technical scope disclosed by the invention falls within the protection scope of the invention.

Claims (7)

1. A hash tree-based data dynamic operation verifiability method, characterized in that it comprises the following steps:
A. File preprocessing
Before file preprocessing, the USER submits a data file storage request to the CDC; the CDC authenticates the USER according to its predefined access control rules, and only a legitimate user that passes CDC authentication obtains the right to store files;
After the USER passes CDC authentication, the CDC begins receiving the data file the USER needs to store and preprocesses it:
(1) First, the data file is divided into several data blocks of equal size: F → (f1, f2, f3, f4, f5, f6, ..., fn);
(2) Then, the CDC performs a hash operation on each data block to obtain the hash values H(fi) (1 ≤ i ≤ n) of all the blocks;
(3) After the hash operation, the data file can be denoted F' → (f1+H(f1), f2+H(f2), f3+H(f3), ..., fn+H(fn)); the CDC temporarily keeps the hash values of all the blocks in preparation for constructing the Merkle Hash Tree verification data structure of the file in the next step;
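Purely as an illustration (a sketch, not the patent's implementation; the helper name and the use of Python are assumptions), steps (1)-(3) amount to fixed-size chunking followed by per-block SHA-1 hashing, SHA-1 being the hash function named in claim 2:

```python
import hashlib

BLOCK_SIZE = 512 * 1024  # 0.5 MB, the block size used in the experiments

def split_and_hash(data: bytes, block_size: int = BLOCK_SIZE):
    """Split a file into fixed-size blocks and hash each one, mirroring
    steps A(1)-(3): F -> (f1, ..., fn), then H(fi) = SHA-1(fi).
    (How a short final block is padded is not specified by the text.)"""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    digests = [hashlib.sha1(b).digest() for b in blocks]
    return blocks, digests

blocks, digests = split_and_hash(b"\x00" * (3 * 512 * 1024))  # 1.5 MB toy file
print(len(blocks), len(digests[0]))  # 3 blocks, 20-byte SHA-1 digests
```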
(4) To give each data block a unique geographic location mark, the scheme adds a 5-byte location tag (LTag) to every block. The tag consists of 2 bytes of order information, 1 byte of rack information, and 2 bytes of node information: the order information records the block's sequence number among all blocks, the rack information records the specific rack of the data center in which the block is stored, and the node information indicates the specific number of the node server storing the block; the CDC maintains an LTag label list for each data file, recording the LTag information of all of that file's blocks;
(5) After labeling is finished, the CDC deposits all data blocks into the data center, and the storage location of each block is recorded in the block's own LTag; once the LTags of all blocks have been recorded, the CDC creates for the data file an LTag information list containing the geographic location information of all its blocks;
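One possible encoding of the 5-byte LTag described in step (4) (an assumption for illustration only: the patent fixes the field widths but not the byte order or packing):

```python
import struct

# Hypothetical packing of the 5-byte LTag: 2-byte block order, 1-byte rack
# number, 2-byte node number (big-endian chosen arbitrarily here).
def pack_ltag(order: int, rack: int, node: int) -> bytes:
    return struct.pack(">HBH", order, rack, node)

def unpack_ltag(tag: bytes):
    return struct.unpack(">HBH", tag)

tag = pack_ltag(order=7, rack=2, node=13)
print(len(tag), unpack_ltag(tag))  # 5 (7, 2, 13)
```

With these widths a file can have up to 65536 blocks, a data center up to 256 racks, and each rack up to 65536 nodes, which is consistent with the stated field sizes.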
(6) Subsequently, the key data structure, the Merkle Hash Tree, is constructed:
1) First, the CDC takes the rack values Rack(LTag(fi)) of all the file's blocks from the LTag list, partitions the blocks stored in different racks, and deposits them into the ListRack[] list of the corresponding rack;
2) Then, rack by rack, the CDC takes the node information from the LTags of all blocks in each ListRack[] list and further partitions the blocks stored on different nodes of the same rack;
3) Next, for the blocks stored on the same node of the same rack, the CDC takes the order information from their LTags, arranges them in sequence, constructs the node Merkle Hash Tree with the blocks' hash values as leaf nodes, and computes its root value; if the node holds only one block, that block's hash value is itself the root value of the node Merkle Hash Tree. The root is denoted NRoot[i, j], where i is the rack number and j is the node number;
4) After the node Merkle Hash Tree construction and root-value computation have been completed for all nodes storing the file, the CDC arranges the root values of all node Merkle Hash Trees within the same rack in order, constructs each rack's Merkle Hash Tree with them as leaf nodes, and computes its root value; if only one node of a rack stores the file's data, that node's Merkle Hash Tree root value is itself the rack Merkle Hash Tree root value. The root is denoted RRoot[i], where i is the rack number;
5) Finally, the root values of all rack Merkle Hash Trees are arranged in order, the file's Merkle Hash Tree is constructed with them as leaf nodes, and its root value is computed (if only one rack stores the data file, that rack's Merkle Hash Tree root value is itself the root value of the file Merkle Hash Tree). The root is denoted FRoot;
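The three-level construction of steps 1)-5) can be sketched as follows (an illustration under assumptions, not the patent's code: the patent does not specify how an unpaired leaf is handled, so this sketch promotes it unchanged; function names and the use of Python are also assumptions):

```python
import hashlib

def sha1(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

def merkle_root(leaves):
    """Root of a binary Merkle tree. Per the scheme's rule, a single leaf
    is itself the root (one-block node, one-node rack, one-rack file)."""
    level = list(leaves)
    while len(level) > 1:
        nxt = [sha1(level[i] + level[i + 1]) for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:           # assumed rule: promote an unpaired leaf
            nxt.append(level[-1])
        level = nxt
    return level[0]

def hierarchical_roots(digests_by_location):
    """digests_by_location: {(rack, node): [H(fi), ...]} in block order, as
    recovered from the LTag list. Returns (NRoot, RRoot, FRoot) following
    steps 1)-5): node trees first, then rack trees, then the file tree."""
    nroot = {loc: merkle_root(hs) for loc, hs in digests_by_location.items()}
    per_rack = {}
    for (rack, node) in sorted(nroot):            # node roots in order
        per_rack.setdefault(rack, []).append(nroot[(rack, node)])
    rroot = {rack: merkle_root(roots) for rack, roots in per_rack.items()}
    froot = merkle_root([rroot[r] for r in sorted(rroot)])  # rack roots in order
    return nroot, rroot, froot

nroot, rroot, froot = hierarchical_roots({
    (0, 0): [sha1(b"f1"), sha1(b"f2")],
    (0, 1): [sha1(b"f3")],
    (1, 0): [sha1(b"f4")],
})
print(len(froot))  # 20-byte SHA-1 root value
```

Because each node tree covers only the blocks on that node, a dynamic operation only forces the affected node tree, its rack tree, and the small file tree to be rebuilt, which is the source of the time savings reported in the experiments.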
B. Data file verification
(1) First, the CDC completes the storage of the USER's data file and submits the relevant verification information of the stored file to the TPA; until the file is deleted by the USER, whenever it changes, the CDC is responsible for generating the latest verification information for it and updating the TPA in real time;
(2) Then, the CDC notifies the USER that file storage and verification-information generation are complete, and informs the USER that it may begin entrusting the TPA to verify the integrity and consistency of the data file;
(3) The USER may communicate with the TPA immediately, or at any other chosen time, to entrust the TPA with verifying the file's integrity and consistency;
(4) After receiving the USER's audit request, the TPA performs periodic or aperiodic audit verification of the data file according to the USER's requirements, and keeps logs of all audit operations for later inspection by the USER; if during verification the data file fails to verify, the TPA informs the USER by mail, text message, or other means, and at the same time requires the CDC to take remedial measures such as copy-based recovery of the file;
In this scheme, the verification process refers to the periodic or aperiodic integrity and consistency examination of the data file that the USER entrusts to the TPA: the TPA issues an integrity and consistency verification request for the file to the CDC and examines the verification information the CDC returns, thereby completing the verification work entrusted by the USER. The specific description is as follows:
1) At the initial stage of verification, the TPA generates the verification message checkM(UInfo, FInfo, RInfo(FRoot, RRoot[i], NRoot[i, j])). The checkM message consists of three parts: first, the user's personal information UInfo, which helps the CDC quickly and accurately locate the owner of the file to be checked; second, the information FInfo of the data file to be examined, which unambiguously identifies the file; and third, the concrete Merkle Hash Tree root information RInfo, composed of the file Merkle Hash Tree root value FRoot, the specified rack Merkle Hash Tree root values RRoot[i], and the specified node Merkle Hash Tree root values NRoot[i, j];
2) After the verification message checkM has been generated, the TPA sends it to the CDC over the communication network;
3) Upon receiving the verification message, the CDC first decomposes checkM to determine what content the TPA wants examined;
4) Then, using the Merkle Hash Tree generation algorithms introduced in section A above, the CDC regenerates and computes the corresponding Merkle Hash Trees and their root values;
5) After all verification information has been generated, the CDC produces the verification response respondM(UInfo, FInfo, RInfo'(FRoot', RRoot'[i], NRoot'[i, j])), in which UInfo and FInfo are identical to the first two parts of the checkM message, and RInfo' returns the Merkle Hash Tree root values the TPA needs to examine, comprising the specified file Merkle Hash Tree root value FRoot', the specified rack Merkle Hash Tree root values RRoot'[i], and the specified node Merkle Hash Tree root values NRoot'[i, j];
6) Finally, after receiving the respondM verification response, the TPA compares it with the Merkle Hash Tree root values it has stored; if they are consistent, the examination succeeds and the data are complete and consistent with the previous version; otherwise, the data are inconsistent with the previous version, in which case the TPA requires the CDC to perform exception handling such as copy-based recovery of the file and informs the USER of the examination result;
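The TPA-side comparison in step 6) reduces to matching each requested root against the value the TPA holds. A minimal sketch (the function name and dictionary layout are assumptions, not the patent's interface):

```python
import hmac

def audit_response(stored_roots: dict, respond_roots: dict) -> bool:
    """Compare the root values the TPA holds (kept up to date via the CDC's
    UpdateRequest messages) against those returned in respondM. Returns True
    only if every requested root is present and matches; compare_digest is
    used so the comparison time does not leak the mismatch position."""
    return all(
        respond_roots.get(name) is not None
        and hmac.compare_digest(value, respond_roots[name])
        for name, value in stored_roots.items()
    )

stored = {"FRoot": b"\x11" * 20, "NRoot[0,0]": b"\x22" * 20}
print(audit_response(stored, dict(stored)))             # True
print(audit_response(stored, {"FRoot": b"\x11" * 20}))  # False: NRoot missing
```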
C. Dynamic data operations
C1. Data insertion
Insertion is the most basic dynamic operation on a data file and, relative to modification and deletion, also the most complex dynamic operation in the scheme. To insert a data block fi' after the i-th block fi of the data file, the steps of the scheme are as follows:
(1) The CDC computes the hash value H(fi') of the new block fi' and creates label information LTag(fi') for the block fi' to be inserted;
(2) The CDC generates and executes the insertion function Insert(fi', H(fi'), LTag(fi)), locates the exact insertion position via LTag(fi), and performs the block insertion;
(3) The CDC calls the Merkle Hash Tree insertion function MerkleInsert(h(fi), h(fi'), LTag(fi)), which concatenates h(fi) and h(fi') and hashes the concatenated value again; the CDC then substitutes the new hash result for the hash value at the position LTag(fi) in the original node Merkle Hash Tree. The result of hashing the concatenation is no longer a leaf of the Merkle Hash Tree but an internal node generated from the two leaves h(fi) and h(fi');
(4) The CDC calls the Merkle Hash Tree generation function CreateMerkle() of the node holding fi', generates the new node Merkle Hash Tree, and computes its root value NRoot[i, j], where i is the rack and j is the node holding the block;
(5) The CDC calls the Merkle Hash Tree generation function CreateMerkle() of the rack holding fi', generates the new rack Merkle Hash Tree, and computes its root value RRoot[i], where i is the rack holding the block;
(6) The CDC then calls the file Merkle Hash Tree generation function CreateMerkle(), generates the new file Merkle Hash Tree, and computes its root value FRoot;
(7) The CDC reports the update to the TPA by sending the update request UpdateRequest(UInfo, FInfo, RInfo(FRoot, RRoot[i], NRoot[i, j])), where UInfo, FInfo, and RInfo are as defined in section B;
(8) The TPA updates the verification information of the data file according to the content of the update request, for use in later examinations;
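The hash-level effect of step (3) can be modeled at the leaf list of the affected node tree (an illustrative sketch only; the function name is an assumption and the subsequent tree rebuilds of steps (4)-(6) are left to a CreateMerkle()-style routine as in the sketch after step A(6)):

```python
import hashlib

def sha1(b: bytes) -> bytes:
    return hashlib.sha1(b).digest()

def merkle_insert(leaf_digests, i, new_digest):
    """Sketch of MerkleInsert in C1(3): after block fi' is inserted behind fi,
    the leaf h(fi) is replaced by h(h(fi) || h(fi')), which now stands for an
    internal node over the two leaves h(fi) and h(fi'); the node tree is then
    regenerated in step C1(4). This models the leaf level only."""
    out = list(leaf_digests)
    out[i] = sha1(out[i] + new_digest)
    return out

leaves = [sha1(b"f1"), sha1(b"f2"), sha1(b"f3")]
updated = merkle_insert(leaves, 1, sha1(b"f2'"))
print(updated[1] == sha1(leaves[1] + sha1(b"f2'")))  # True
```

Note how the leaf list keeps its length, which is why claim 5 can state that insertion does not change the logical structure of the node Merkle Hash Tree.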
C2. Data modification
Modification is the most frequent dynamic operation in cloud computing applications. The i-th data block fi of the file is modified and becomes fi'; the concrete steps of the scheme are as follows:
(1) The CDC computes the hash value H(fi') of the new block fi' and uses fi's label information LTag(fi) as the label information LTag(fi') of fi';
(2) The CDC generates and executes the update function Update(fi', H(fi'), LTag(fi)), locates the block to be modified via LTag(fi), and performs the modification;
(3) The CDC calls the Merkle Hash Tree modification function MerkleUpdate(fi', H(fi'), LTag(fi')), locates the modified block via LTag(fi'), and substitutes H(fi') for H(fi);
(4) The CDC calls the Merkle Hash Tree generation function CreateMerkle() of the node holding fi', generates the new node Merkle Hash Tree, and computes its root value NRoot[i, j], where i is the rack and j is the node holding the block;
(5) The CDC calls the Merkle Hash Tree generation function CreateMerkle() of the rack holding fi', generates the new rack Merkle Hash Tree, and computes its corresponding root value RRoot[i], where i is the rack holding the block;
(6) The CDC calls the file Merkle Hash Tree generation function CreateMerkle(), generates the new file Merkle Hash Tree, and computes its root value FRoot;
(7) The CDC reports the update to the TPA by sending the update request UpdateRequest(UInfo, FInfo, RInfo(FRoot, RRoot[i], NRoot[i, j])), where UInfo, FInfo, and RInfo are as defined in section B;
(8) The TPA updates the verification information of the data file according to the content of the update request, for use in later examinations;
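At the leaf level, the MerkleUpdate of step C2(3) is a simple in-place replacement (a sketch under assumptions, not the patent's code; the trees are then regenerated as in steps (4)-(6)):

```python
def merkle_update(leaf_digests, i, new_digest):
    """Sketch of MerkleUpdate in C2(3): H(fi') simply replaces H(fi) at the
    position named by LTag(fi'); the node, rack, and file Merkle Hash Trees
    are regenerated afterwards. Leaf-list model, names illustrative."""
    out = list(leaf_digests)
    out[i] = new_digest
    return out
```

Since the leaf count is unchanged, the tree shape is preserved exactly, consistent with claim 6.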
C3. Data deletion
Taking as an example the deletion of the data block fi+1 that follows the i-th block fi of the data file, the concrete steps of the scheme are as follows:
(1) The CDC generates and executes the deletion function Delete(fi+1, H(fi+1), LTag(fi+1)), locates the block to be deleted via LTag(fi+1), and performs the block deletion;
(2) The CDC calls the Merkle Hash Tree deletion function MerkleDelete(LTag(fi+1), LTag(fi)), deletes h(fi+1) at the position LTag(fi+1), and substitutes the value h(fi) for the intermediate Merkle Hash Tree node h(h(fi)+h(fi+1)) generated by hashing h(fi) and h(fi+1); although the deletion removes two leaf nodes from the Merkle Hash Tree, the basic logical structure of the Merkle Hash Tree does not change;
(3) The CDC calls the Merkle Hash Tree generation function CreateMerkle() of the node holding fi, generates the new node Merkle Hash Tree, and computes its root value NRoot[i, j], where i is the rack and j is the node holding the block;
(4) The CDC calls the Merkle Hash Tree generation function CreateMerkle() of the rack holding fi, generates the new rack Merkle Hash Tree, and computes its corresponding root value RRoot[i], where i is the rack holding the block;
(5) The CDC calls the file Merkle Hash Tree generation function CreateMerkle(), generates the new file Merkle Hash Tree, and computes its root value FRoot;
(6) The CDC reports the update to the TPA by sending the update request UpdateRequest(UInfo, FInfo, RInfo(FRoot, RRoot[i], NRoot[i, j])), where UInfo, FInfo, and RInfo are as defined in section B;
(7) The TPA updates the verification information of the USER's data file according to the content of the update request, for use in later examinations.
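At the leaf level, the MerkleDelete of step C3(2) collapses the internal node h(h(fi)+h(fi+1)) back to h(fi), which amounts to dropping the deleted leaf (an illustrative sketch only; the function name is an assumption):

```python
def merkle_delete(leaf_digests, i):
    """Sketch of MerkleDelete in C3(2): h(fi+1) is removed and the internal
    node h(h(fi) || h(fi+1)) collapses back to h(fi); at the leaf-list level
    the entry at position i+1 simply disappears. The node, rack, and file
    trees are then regenerated as in steps C3(3)-(5)."""
    return leaf_digests[:i + 1] + leaf_digests[i + 2:]
```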
2. The hash tree-based data dynamic operation verifiability method according to claim 1, characterized in that, in step (2) of said A, in which the CDC performs a hash operation on each data block to obtain the hash values of all blocks, the scheme selects SHA-1 as the hash function for hashing the data blocks.
3. The hash tree-based data dynamic operation verifiability method according to claim 1, characterized in that, in step (6) of said A, in which the key Merkle Hash Tree data structure is constructed, the scheme stipulates that the hash values of all the data blocks of the file serve as the leaf nodes of the Merkle Hash Tree; unlike previous schemes, which construct only a file Merkle Hash Tree by arranging the blocks according to their order information, this scheme refers to the LTag information of all blocks and constructs, in the order node first, then rack, then file, the node Merkle Hash Trees, rack Merkle Hash Trees, and file Merkle Hash Tree of the data file, computing their root values.
4. The hash tree-based data dynamic operation verifiability method according to claim 1, characterized in that, in step 1) of the verification process in said B (4), at the initial stage of verification the TPA generates the verification message checkM(UInfo, FInfo, RInfo(FRoot, RRoot[i], NRoot[i, j])), and the scheme allows the TPA to select any one, two, or all three of the Merkle Hash Tree root information items for examination.
5. The hash tree-based data dynamic operation verifiability method according to claim 1, characterized in that, for the data insertion in said C1, the scheme defines insertion as inserting a new data block after a given block of the data file and stipulates that the newly inserted block and the block preceding it are stored on the same server node; in this scheme, the data insertion operation does not change the logical structure of a single server node's Merkle Hash Tree.
6. The hash tree-based data dynamic operation verifiability method according to claim 1, characterized in that, for the data modification in said C2, the scheme defines the basic data modification operation as replacing the block to be modified with a new data block and stipulates that the modified block is stored on the same node as before modification; in this scheme, the data modification operation does not change the logical structure of a single server node's Merkle Hash Tree.
7. The hash tree-based data dynamic operation verifiability method according to claim 1, characterized in that, for the data deletion in said C3, the scheme defines deletion as deleting the data block that follows a given block of the data file, and stipulates that if the deleted block is the only block on its node, step (2) of C3 operates directly on the rack Merkle Hash Tree, steps (3) and (4) are skipped, and execution then continues directly from step (5).
CN2013101325650A 2013-04-09 2013-04-09 Hash tree-based data dynamic operation verifiability method Pending CN103218574A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2013101325650A CN103218574A (en) 2013-04-09 2013-04-09 Hash tree-based data dynamic operation verifiability method


Publications (1)

Publication Number Publication Date
CN103218574A true CN103218574A (en) 2013-07-24

Family

ID=48816346

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2013101325650A Pending CN103218574A (en) 2013-04-09 2013-04-09 Hash tree-based data dynamic operation verifiability method

Country Status (1)

Country Link
CN (1) CN103218574A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101997929A (en) * 2010-11-29 2011-03-30 北京卓微天成科技咨询有限公司 Data access method, device and system for cloud storage
CN102546755A (en) * 2011-12-12 2012-07-04 华中科技大学 Data storage method of cloud storage system
US20130083926A1 (en) * 2011-09-30 2013-04-04 Los Alamos National Security, Llc Quantum key management


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SHUAI HAN ET AL: "Ensuring data storage security through a novel third party auditor scheme in cloud computing", 2011 IEEE International Conference on Cloud Computing and Intelligence Systems *
HAN, Shuai: "Research on Key Technologies of Data Security Based on Cloud Computing", China Master's Theses Full-text Database *
YAN, Xiangtao et al.: "Hash Tree-Based Integrity Detection Algorithm for Cloud Storage", Computer Science *

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104424404A (en) * 2013-09-07 2015-03-18 镇江金软计算机科技有限责任公司 Implementation method for realizing third-party escrow system through authorization management
WO2015131394A1 (en) * 2014-03-07 2015-09-11 Nokia Technologies Oy Method and apparatus for verifying processed data
US9589153B2 (en) 2014-08-15 2017-03-07 International Business Machines Corporation Securing integrity and consistency of a cloud storage service with efficient client operations
US10496313B2 (en) 2014-09-22 2019-12-03 Hewlett Packard Enterprise Development Lp Identification of content-defined chunk boundaries
CN104899525A (en) * 2015-06-12 2015-09-09 电子科技大学 Cloud data integrity proving scheme with improved dynamic operations
CN106372075A (en) * 2015-07-21 2017-02-01 杭州华为数字技术有限公司 Tree structure based data comparison and processing methods and apparatuses
CN106372075B (en) * 2015-07-21 2020-01-10 杭州华为数字技术有限公司 Data comparison and processing method and device based on tree structure
US10303887B2 (en) 2015-09-14 2019-05-28 T0.Com, Inc. Data verification methods and systems using a hash tree, such as a time-centric merkle hash tree
US10831902B2 (en) 2015-09-14 2020-11-10 tZERO Group, Inc. Data verification methods and systems using a hash tree, such as a time-centric Merkle hash tree
CN105303122A (en) * 2015-10-13 2016-02-03 北京大学 Method for realizing cloud locking of sensitive data on basis of reconstruction technology
WO2017063323A1 (en) * 2015-10-13 2017-04-20 北京大学 Method for implementing cloud locking of sensitive data based on reconstruction technology
CN105303122B (en) * 2015-10-13 2018-02-09 北京大学 The method that the locking of sensitive data high in the clouds is realized based on reconfiguration technique
CN105787389B (en) * 2016-03-02 2018-07-27 四川师范大学 Cloud file integrality public audit evidence generation method and public audit method
CN105787389A (en) * 2016-03-02 2016-07-20 四川师范大学 Cloud file integrity public audit evidence generating method and public auditing method
CN105812363A (en) * 2016-03-09 2016-07-27 成都爆米花信息技术有限公司 Data secure modification method for cloud storage space
CN106055597B (en) * 2016-05-24 2022-05-20 布比(北京)网络技术有限公司 Digital transaction system and account information query method used for same
CN106055597A (en) * 2016-05-24 2016-10-26 布比(北京)网络技术有限公司 Digital transaction system, and account information query method therefor
CN106230880A (en) * 2016-07-12 2016-12-14 何晓行 A kind of storage method of data and application server
CN106301789A (en) * 2016-08-16 2017-01-04 电子科技大学 Apply the dynamic verification method of the cloud storage data that linear homomorphism based on lattice signs
CN106301789B (en) * 2016-08-16 2019-07-09 电子科技大学 Using the dynamic verification method of the cloud storage data of the linear homomorphism signature based on lattice
CN106650503B (en) * 2016-12-09 2019-10-18 南京理工大学 Cloud data integrity validation and restoration methods based on IDA
CN106650503A (en) * 2016-12-09 2017-05-10 南京理工大学 Cloud side data integrity verification and restoration method based on IDA
CN107172071B (en) * 2017-06-19 2020-06-23 陕西师范大学 Attribute-based cloud data auditing method and system
CN107172071A (en) * 2017-06-19 2017-09-15 陕西师范大学 A kind of cloud Data Audit method and system based on attribute
US11948182B2 (en) 2017-07-03 2024-04-02 Tzero Ip, Llc Decentralized trading system for fair ordering and matching of trades received at multiple network nodes and matched by multiple network nodes within decentralized trading system
US10937083B2 (en) 2017-07-03 2021-03-02 Medici Ventures, Inc. Decentralized trading system for fair ordering and matching of trades received at multiple network nodes and matched by multiple network nodes within decentralized trading system
CN111164934A (en) * 2017-08-07 2020-05-15 西门子股份公司 Pruning of authentication trees
CN108270790A * 2018-01-29 2018-07-10 Affiliated Hospital of Jiamusi University A kind of radiotherapy information management system and management method
CN108270790B (en) * 2018-01-29 2020-07-10 佳木斯大学附属第一医院 Radiotherapy information management system and management method
CN109088850A (en) * 2018-06-22 2018-12-25 陕西师范大学 Batch cloud auditing method based on Lucas sequence positioning wrong file
CN109088850B (en) * 2018-06-22 2021-06-15 陕西师范大学 Lot cloud auditing method for positioning error files based on Lucas sequence
CN108923932B (en) * 2018-07-10 2020-12-11 东北大学 Decentralized collaborative verification system and verification method
CN108923932A (en) * 2018-07-10 2018-11-30 东北大学 A kind of decentralization co-verification model and verification algorithm
CN109801066A (en) * 2018-12-13 2019-05-24 中国农业大学 The implementation method and device of long-range storage service
WO2021007863A1 (en) 2019-07-18 2021-01-21 Nokia Technologies Oy Integrity auditing for multi-copy storage
EP3999989A4 (en) * 2019-07-18 2023-03-29 Nokia Technologies Oy Integrity auditing for multi-copy storage
CN114223233A (en) * 2019-08-13 2022-03-22 上海诺基亚贝尔股份有限公司 Data security for network slice management
US10992459B2 (en) 2019-08-30 2021-04-27 Advanced New Technologies Co., Ltd. Updating a state Merkle tree
CN110688377B (en) * 2019-08-30 2020-07-17 阿里巴巴集团控股有限公司 Method and device for updating state Merck tree
CN110688377A (en) * 2019-08-30 2020-01-14 阿里巴巴集团控股有限公司 Method and device for updating state Merck tree
CN111596862A (en) * 2020-05-20 2020-08-28 南京如般量子科技有限公司 Independent optimization method and system for block chain historical transaction data
CN111596862B (en) * 2020-05-20 2022-11-01 南京如般量子科技有限公司 Independent optimization method and system for block chain historical transaction data
CN111625258A (en) * 2020-05-22 2020-09-04 深圳前海微众银行股份有限公司 Mercker tree updating method, device, equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN103218574A (en) Hash tree-based data dynamic operation verifiability method
CN112685385B (en) Big data platform for smart city construction
CN107577427B (en) data migration method, device and storage medium for blockchain system
CA2984142C (en) Automatic scaling of resource instance groups within compute clusters
CN105190623B (en) Log record management
CN112270550B (en) New energy power tracing method and system based on blockchain
CN110990150A (en) Tenant management method and system of container cloud platform, electronic device and storage medium
Tsai et al. Towards a scalable and robust multi-tenancy SaaS
CN110572281A (en) Credible log recording method and system based on block chain
CN104160381A (en) Managing tenant-specific data sets in a multi-tenant environment
CN103959264A (en) Managing redundant immutable files using deduplication in storage clouds
CN108694189A (en) The management of the Database Systems of co-ownership
Jovanovic et al. Conceptual data vault model
CN112835977B (en) Database management method and system based on block chain
Zhang et al. Towards building a multi‐datacenter infrastructure for massive remote sensing image processing
CN109150964B (en) Migratable data management method and service migration method
CN114329096A (en) Method and system for processing native map database
CN113407626B (en) Planning management and control method based on blockchain, storage medium and terminal equipment
CN103365740A (en) Data cold standby method and device
Rooney et al. Experiences with managing data ingestion into a corporate datalake
CN112269839B (en) Data storage method and device in blockchain, electronic equipment and storage medium
EP3682390A1 (en) Techniques for coordinating codes for infrastructure modeling
CN112541711A (en) Model construction method and device, computer equipment and storage medium
CN111737655A (en) User authority management method, system and storage medium of cloud management platform
CN116303789A (en) Parallel synchronization method and device for multi-fragment multi-copy database and readable medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130724