CN111625258B - Merkle tree updating method, device, equipment and readable storage medium

Info

Publication number
CN111625258B (application CN202010453608.5A)
Authority
CN
China
Prior art keywords
target
data
preset
training
hash
Prior art date
Legal status
Active
Application number
CN202010453608.5A
Other languages
Chinese (zh)
Other versions
CN111625258A (en)
Inventor
范力欣
吴锦和
张天豫
Current Assignee
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Application filed by WeBank Co Ltd
Priority to CN202010453608.5A
Publication of CN111625258A
Priority to PCT/CN2021/093397
Application granted
Publication of CN111625258B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/60Software deployment
    • G06F8/65Updates
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/06Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols the encryption apparatus using shift registers or memories for block-wise or stream coding, e.g. DES systems or RC4; Hash functions; Pseudorandom sequence generators
    • H04L9/0643Hash functions, e.g. MD5, SHA, HMAC or f9 MAC

Abstract

The application discloses a Merkle tree updating method, device, equipment and readable storage medium, wherein the Merkle tree updating method includes the following steps: obtaining update data and a to-be-updated Merkle tree; determining an original data set and a preset hash coding model corresponding to the to-be-updated Merkle tree; training and updating the preset hash coding model based on the update data and the original data set to obtain a target hash coding model; and updating a random leaf node layer of the to-be-updated Merkle tree based on the target hash coding model and the update data to obtain the target Merkle tree. The method and the device solve the technical problem of low computational efficiency in Merkle tree updating.

Description

Merkle tree updating method, device, equipment and readable storage medium
Technical Field
The present application relates to the field of artificial intelligence in financial technology (Fintech), and in particular to a Merkle tree updating method, apparatus, device, and readable storage medium.
Background
With the continuous development of financial technologies, especially internet technology and finance, more and more technologies (such as distributed computing, blockchain, and artificial intelligence) are applied to the financial field, but the financial industry also places higher requirements on these technologies.
With the continuous development of computer software and artificial intelligence, the application field of artificial intelligence is becoming ever more extensive. A Merkle (merkel) tree is a data structure for rapidly verifying data integrity. Its principle is that, through grouped hashing, a hash value can be matched and traced quickly from a leaf node up to the root node, which reduces the computational complexity of data queries. However, when a random leaf node of a Merkle tree needs to be updated, all the nodes of the tree branch where that random leaf node sits must be updated in addition to the random leaf node itself. The amount of computation in a Merkle tree update is therefore too large and its computational complexity too high, so the computational efficiency of Merkle tree updating is too low; the prior art thus suffers from the technical problem of low computational efficiency in Merkle tree updating.
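For reference, a minimal Python sketch (illustrative only, not taken from the patent) of the conventional update described above; the `levels` layout, with level 0 holding the leaf hashes and the last level holding the single root, is an assumption made for the example:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def update_leaf(levels: list, index: int, new_block: bytes) -> None:
    """Conventional Merkle update: rewrite one leaf hash, then recompute
    every ancestor on its branch up to the root (O(log n) hashes)."""
    levels[0][index] = sha256(new_block)
    for depth in range(1, len(levels)):
        index //= 2
        left = levels[depth - 1][2 * index]
        right = levels[depth - 1][2 * index + 1]
        levels[depth][index] = sha256(left + right)  # whole branch is rehashed
```

It is precisely this per-update branch rehash that the method below avoids.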
Disclosure of Invention
The present application mainly aims to provide a Merkle tree updating method, apparatus, device, and readable storage medium, so as to solve the technical problem of low computational efficiency in Merkle tree updating in the prior art.
To achieve the above object, the present application provides a Merkle tree updating method, including:
acquiring update data and a to-be-updated Merkle tree, and determining an original data set and a preset hash coding model corresponding to the to-be-updated Merkle tree;
training and updating the preset hash coding model based on the update data and the original data set to obtain a target hash coding model;
and updating the random leaf node layer of the to-be-updated Merkle tree based on the target hash coding model and the update data to obtain the target Merkle tree.
Optionally, the update data comprises a newly added data block,
the step of updating the random leaf node layer of the to-be-updated Merkle tree based on the target hash coding model and the update data to obtain the target Merkle tree includes the following steps:
generating a target random leaf node corresponding to the newly added data block, and inputting the newly added data block into the target hash coding model to obtain an output hash coding value corresponding to the newly added data block;
matching a target parent node corresponding to the target random leaf node based on the output hash code value;
and updating the random leaf node layer based on the target random leaf node and the target parent node to obtain the target Merkle tree.
Optionally, the step of training and updating the preset hash coding model based on the update data and the original data set to obtain the target hash coding model includes:
determining a target data set based on the update data and the original data set, and determining a data category set corresponding to the target data set;
acquiring a target hash code value set corresponding to the data category set, and determining training data corresponding to the target data set and a target hash code value corresponding to the target hash code value set;
and performing iterative training on the preset hash coding model based on the training data and the target hash code value to optimize a polarization loss function corresponding to the preset hash coding model, until the preset hash coding model reaches a preset iteration ending condition, to obtain the target hash coding model.
Optionally, the step of performing iterative training on the preset hash coding model based on the training data and the target hash code value to optimize a polarization loss function corresponding to the preset hash coding model until the preset hash coding model reaches a preset iteration ending condition, to obtain the target hash coding model, includes:
inputting the training data into the preset Hash coding model to carry out Hash coding on the training data based on the polarization loss function to obtain an initial Hash coding value;
calculating a training Hamming distance between the initial Hash code value and the target Hash code value, and comparing the training Hamming distance with a preset Hamming distance threshold value;
if the training Hamming distance is larger than the preset Hamming distance threshold value, judging that the preset Hash coding model does not reach the preset iteration ending condition, and optimizing the polarization loss function based on the initial Hash coding value;
retraining the preset Hash coding model based on the optimized polarization loss function until the training Hamming distance is smaller than or equal to the preset Hamming distance threshold value;
and if the training Hamming distance is smaller than or equal to the preset Hamming distance threshold value, judging that the preset Hash coding model reaches the preset iteration ending condition, and taking the preset Hash coding model as the target Hash coding model.
Optionally, the step of inputting the training data into the preset hash coding model to perform hash coding on the training data based on the polarization loss function, and obtaining an initial hash coding value includes:
inputting the training data into the preset Hash coding model, and carrying out Hash on the training data to obtain a training Hash result;
polarizing the training hash result based on the polarization loss function to obtain a polarization result;
determining the initial hash-code value based on the polarization result.
Optionally, the original data set includes one or more original data blocks and a target hash code value corresponding to each original data block, and the to-be-updated Merkle tree includes one or more random leaf nodes, one or more intermediate nodes, and a root node,
and before the step of acquiring the update data and the to-be-updated Merkle tree and determining the original data set and the preset hash coding model corresponding to the to-be-updated Merkle tree, the Merkle tree updating method includes the following steps:
generating random leaf nodes corresponding to the original data blocks, wherein one original data block corresponds to one random leaf node;
and generating each intermediate node and the root node based on each target hash code value and the preset hash code model.
Optionally, each of the intermediate nodes includes one or more first-layer intermediate nodes and one or more upper-layer intermediate nodes;
the step of generating each intermediate node and the root node based on each target hash code value and the preset hash coding model includes:
generating a first layer intermediate node corresponding to each target hash code value, wherein one target hash code value corresponds to one first layer intermediate node;
and circularly generating upper-layer intermediate nodes corresponding to the first-layer intermediate nodes based on the preset hash coding model until the root node is obtained.
The present application further provides a Merkle tree updating apparatus, which is a virtual apparatus applied to a Merkle tree updating device, and the Merkle tree updating apparatus includes:
the determining module is used for acquiring the update data and the to-be-updated Merkle tree, and determining an original data set and a preset hash coding model corresponding to the to-be-updated Merkle tree;
the training module is used for training and updating the preset hash coding model based on the update data and the original data set to obtain a target hash coding model;
and the updating module is used for updating the random leaf node layer of the to-be-updated Merkle tree based on the target hash coding model and the update data to obtain the target Merkle tree.
Optionally, the update module includes:
the hash coding submodule is used for generating a target random leaf node corresponding to the newly added data block, inputting the newly added data block into the target hash coding model and obtaining an output hash coding value corresponding to the newly added data block;
the matching submodule is used for matching a target parent node corresponding to the target random leaf node based on the output hash code value;
and the updating submodule is used for updating the random leaf node layer based on the target random leaf node and the target parent node to obtain the target Merkle tree.
Optionally, the training module comprises:
the extraction submodule is used for determining a target data set based on the update data and the original data set, and determining a data category set corresponding to the target data set;
and the iterative training submodule is used for performing iterative training on the preset Hash coding model based on the training data and the target Hash coding value so as to optimize a polarization loss function corresponding to the preset Hash coding model until the preset Hash coding model reaches a preset iteration ending condition, and obtaining the target Hash coding model.
Optionally, the iterative training submodule includes:
a hash coding unit, configured to input the training data into the preset hash coding model, so as to perform hash coding on the training data based on the polarization loss function, and obtain an initial hash coding value;
the comparison unit is used for calculating a training Hamming distance between the initial Hash code value and the target Hash code value and comparing the training Hamming distance with a preset Hamming distance threshold value;
a first determination unit, configured to determine that the preset hash coding model does not reach the preset iteration end condition if the training hamming distance is greater than the preset hamming distance threshold, and optimize the polarization loss function based on the initial hash coding value;
a retraining unit, configured to retrain the preset hash coding model based on the optimized polarization loss function until the training hamming distance is less than or equal to the preset hamming distance threshold;
and the second judging unit is used for judging that the preset Hash coding model reaches the preset iteration ending condition if the training Hamming distance is smaller than or equal to the preset Hamming distance threshold value, and taking the preset Hash coding model as the target Hash coding model.
Optionally, the hash encoding unit includes:
the Hash subunit is used for inputting the training data into the preset Hash coding model, and carrying out Hash on the training data to obtain a training Hash result;
the polarising subunit is used for polarising the training hash result based on the polarization loss function to obtain a polarising result;
a determining subunit, configured to determine the initial hash code value based on the polarization result.
Optionally, the Merkle tree updating apparatus further includes:
a first generation module, configured to generate a random leaf node corresponding to each original data block, where one original data block corresponds to one random leaf node;
and the second generation module is used for generating each intermediate node and the root node based on each target hash code value and the preset hash code model.
Optionally, the second generating module includes:
the generation submodule is used for generating first-layer intermediate nodes corresponding to the target Hash code values, wherein one target Hash code value corresponds to one first-layer intermediate node;
and the cyclic generation submodule is used for cyclically generating upper-layer intermediate nodes corresponding to the first-layer intermediate nodes based on the preset hash coding model until the root node is obtained.
The present application also provides a Merkle tree updating device, where the Merkle tree updating device is an entity device, and the Merkle tree updating device includes: a memory, a processor, and a program of the Merkle tree updating method stored on the memory and executable on the processor, which program, when executed by the processor, implements the steps of the Merkle tree updating method as described above.
The present application also provides a readable storage medium having stored thereon a program for implementing the Merkle tree updating method, which program, when executed by a processor, implements the steps of the Merkle tree updating method as described above.
The method comprises the steps of obtaining update data and a to-be-updated Merkle tree, determining an original data set and a preset hash coding model corresponding to the to-be-updated Merkle tree, training and updating the preset hash coding model based on the update data and the original data set to obtain a target hash coding model, and updating a random leaf node layer of the to-be-updated Merkle tree based on the target hash coding model and the update data to obtain the target Merkle tree. That is, the present application provides a to-be-updated Merkle tree constructed based on a preset hash coding model. When the to-be-updated Merkle tree needs to be updated, the update data and the original data set corresponding to the to-be-updated Merkle tree are obtained; the preset hash coding model is then trained and updated based on the update data and the original data set to obtain a target hash coding model; and the random leaf node layer of the to-be-updated Merkle tree is updated based on the target hash coding model and the update data to obtain the target Merkle tree. In other words, when updating the to-be-updated Merkle tree, no matter how much data on the random leaf nodes is updated, the preset hash coding model only needs to be trained and updated once, after which updating the random leaf node layer based on the trained hash coding model completes the update of the to-be-updated Merkle tree. Compared with the current Merkle tree updating method, when a random leaf node of the Merkle tree needs to be updated, only the random leaf node layer of the Merkle tree has to be updated, and all the nodes of the tree branch where the random leaf node sits do not need to be updated; this reduces the amount of computation during a Merkle tree update and thereby its computational complexity, solving the technical problem of low computational efficiency in Merkle tree updating.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of a first embodiment of the Merkle tree updating method of the present application;
FIG. 2 is a schematic diagram of a binary Merkle tree in the Merkle tree updating method of the present application;
FIG. 3 is a schematic flow chart of a second embodiment of the Merkle tree updating method of the present application;
fig. 4 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application.
The objectives, features, and advantages of the present application will be further described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In a first embodiment of the Merkle tree updating method of the present application, referring to fig. 1, the Merkle tree updating method includes:
step S10, acquiring update data and a to-be-updated Merkle tree, and determining an original data set and a preset hash coding model corresponding to the to-be-updated Merkle tree;
in this embodiment, it should be noted that the original data set includes one or more original data blocks, the original data blocks fall into one or more data categories, and each data category corresponds to a target hash code value. The target hash code value is obtained by hash-coding the original data blocks of the corresponding data category based on a preset hash coding method, where the preset hash coding method includes random target hash coding, adaptive target hash coding, and the like. The Hamming distance between any two original data blocks in the same data category is smaller than or equal to a preset first Hamming distance threshold. For example, if the preset first Hamming distance threshold is 1, original data block A is 0101010111 and original data block B is 0101010101, then the Hamming distance between original data block A and original data block B is 1, and it is determined that original data block A and original data block B belong to the same data category.
Additionally, it should be noted that the update data is newly added data to be added to the to-be-updated Merkle tree, where the newly added data includes one or more newly added data blocks, and the to-be-updated Merkle tree is a data structure storing the original data set. The to-be-updated Merkle tree includes a random leaf node layer, intermediate node layers, and a root node, where the random leaf node layer includes one or more random leaf nodes and each intermediate node layer includes one or more intermediate nodes. Fig. 2 shows a binary Merkle tree diagram: the single node at the top layer is the root node; nodes 1, 2, 3, 4, 5, 6, 7, 8 at the bottom layer are all random leaf nodes; and the nodes between the top layer and the bottom layer are the intermediate nodes. A random leaf node is a node storing an original data block, one random leaf node corresponding to one original data block; the intermediate nodes and the root node are nodes storing target model output values of the preset hash coding model, where a target model output value is a hash code value obtained based on the preset hash coding model and a model input value. In the to-be-updated Merkle tree, the model input values corresponding to the model output values stored in each node layer are the hash code values stored in the previous layer, and the model input value corresponding to the model output value stored in the root node is the hash code value corresponding to the last intermediate node layer. It should be noted that, for the preset hash coding model, original data blocks belonging to the same data category are mapped to the same output hash code value, namely the target hash code value corresponding to that data category; similarly, model input values belonging to the same category all yield the same model output value. For example, assume that in the to-be-updated Merkle tree a node layer M includes 4 nodes (A, B, C, D) and the previous node layer N corresponding to node layer M includes 2 nodes (E, F), where node A and node B point to node E and node C and node D point to node F; if the preset hash coding model is H, then H(A) = E, H(B) = E, H(C) = F, and H(D) = F are satisfied.
Update data and a to-be-updated Merkle tree are obtained, and an original data set and a preset hash coding model corresponding to the to-be-updated Merkle tree are determined. Specifically, each newly added data block and the to-be-updated Merkle tree are extracted from a preset storage database, the original data block corresponding to each random leaf node in the to-be-updated Merkle tree and the target hash code value corresponding to each original data block are determined, and the preset hash coding model used for constructing the to-be-updated Merkle tree is obtained. The to-be-updated Merkle tree is constructed based on the preset hash coding model, each original data block, and each target hash code value: first, a random leaf node layer corresponding to the original data blocks is generated, and a first intermediate node layer corresponding to the target hash code values is generated, where the first intermediate node layer is the adjacent previous node layer of the random leaf node layer; then each target hash code value is used as input of the preset hash coding model to obtain the corresponding model output values, and a second intermediate node layer corresponding to these model output values is generated, where the second intermediate node layer is the adjacent previous node layer of the first intermediate node layer; the model output values are in turn used as input of the preset hash coding model, and the generation of intermediate node layers is performed cyclically until the root node is obtained, whereupon the construction of the to-be-updated Merkle tree is completed.
Step S20, training and updating the preset hash coding model based on the update data and the original data set to obtain a target hash coding model;
in this embodiment, the preset hash coding model is trained and updated based on the update data and the original data set to obtain a target hash coding model. Specifically, each newly added data block is added to the original data set, and a corresponding target hash code value is matched for each newly added data block from the existing target hash code values to obtain the target data set. Training data and a target hash code value set corresponding to the training data are then selected from the target data set, where the training data includes one or more training data blocks of the target data set and the target hash code value set includes the target hash code value corresponding to each training data block. The preset hash coding model is then iteratively trained and updated based on the target hash code value set and the training data until a preset iteration ending condition is met, whereupon the target hash coding model is obtained.
The step of training and updating the preset hash coding model based on the update data and the original data set to obtain a target hash coding model includes the following steps:
Step S21, determining a target data set based on the update data and the original data set, and determining a data category set corresponding to the target data set;
in this embodiment, it should be noted that the target data set includes one or more training data blocks, and the data category set includes data categories corresponding to the training data blocks.
Step S22, acquiring a target Hash code value set corresponding to the data category set, and determining training data corresponding to the target data set and a target Hash code value corresponding to the target Hash code value set;
in this embodiment, it should be noted that the target hash code value set includes target hash code values corresponding to the data categories.
A target hash code value set corresponding to the data category set is acquired, and training data corresponding to the target data set and a target hash code value corresponding to the target hash code value set are determined. Specifically, the target hash code value corresponding to each data category is acquired, the training data is extracted from the target data set, and the target hash code value corresponding to the training data is extracted from the target hash code value set.
Preferably, a preset number of data blocks are extracted from each data category respectively to serve as training data blocks, each training data block is used as the training data, and a target hash code value corresponding to the data category of each training data block is determined in the target hash code value set.
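As a concrete illustration of this sampling step, a short Python sketch follows; the container names (`blocks_by_category`, `target_codes`) and the per-category sample count are assumptions made for the example, not terms from the patent:

```python
import random

def build_training_set(blocks_by_category: dict, target_codes: dict, per_category: int = 8):
    """Extract a preset number of data blocks from every data category and
    pair each training data block with the target hash code value of its
    data category."""
    training_pairs = []
    for category, blocks in blocks_by_category.items():
        sampled = random.sample(blocks, min(per_category, len(blocks)))
        training_pairs.extend((block, target_codes[category]) for block in sampled)
    return training_pairs
```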
Step S23, performing iterative training on the preset hash coding model based on the training data and the target hash coding value to optimize a polarization loss function corresponding to the preset hash coding model until the preset hash coding model reaches a preset iteration end condition, and obtaining the target hash coding model.
In this embodiment, it should be noted that the preset iteration ending condition includes reaching a preset iteration number threshold, convergence of the polarization loss function, and the like, and the iterative training includes one or more rounds of training.
Iterative training is performed on the preset hash coding model based on the training data and the target hash code value to optimize a polarization loss function corresponding to the preset hash coding model until the preset hash coding model reaches a preset iteration ending condition, whereupon the target hash coding model is obtained. Specifically, a training data block is extracted from the training data and input into the preset hash coding model, and the preset hash coding model is trained and updated based on the target hash code value corresponding to that training data block. Whether the trained and updated preset hash coding model meets the preset iteration ending condition is then judged: if it does, the trained and updated preset hash coding model is used as the target hash coding model; if it does not, the initial hash code value of the current round of training is acquired, the polarization loss function is optimized based on the initial hash code value and the target hash code value, and the preset hash coding model with the optimized loss function is trained and updated again until the trained and updated preset hash coding model meets the preset iteration ending condition.
The step of performing iterative training on the preset hash coding model based on the training data and the target hash coding value to optimize a polarization loss function corresponding to the preset hash coding model until the preset hash coding model reaches a preset iteration ending condition includes:
step S221, inputting the training data into the preset Hash coding model, and carrying out Hash coding on the training data based on the polarization loss function to obtain an initial Hash coding value;
in this embodiment, the training data is input into the preset hash coding model so as to hash-code the training data based on the polarization loss function and obtain an initial hash code value. Specifically, a training data block is extracted from the training data, and a to-be-processed training matrix corresponding to the training data block is input into the preset hash coding model, where the to-be-processed training matrix is the matrix representation of the training data block. The to-be-processed training matrix is hashed to obtain a hash vector; each bit of the hash vector is then forcibly polarized based on the polarization loss function to obtain a polarization vector corresponding to the hash vector, and the initial hash code value is generated based on the polarization identifier corresponding to each bit in the polarization vector, where the polarization loss function is as follows:
L(v, t^c) = max(m - v * t^c, 0)
where L is the polarization loss function, m is a preset forced polarization parameter, v is the value at each hash vector bit of the hash vector (the absolute value of v is required to exceed m), and t^c is the target hash value corresponding to that hash vector bit, namely the bit value at the corresponding position of the target hash code value of the training data block, with t^c ∈ {-1, +1}, and the polarization loss function converges to 0. For example, assume m = 1, t^c = 1 and v = -1; then L = 2, and for the polarization loss function to converge to 0, v must be forcibly polarized so that v = 1, at which point L = 0. Thus, when t^c = 1, the value at the hash vector bit gradually moves away from 0 in the positive direction, and when t^c = -1, the value at the hash vector bit gradually moves away from 0 in the negative direction. After successful polarization, the polarization identifier of each bit in the obtained polarization vector is consistent with the corresponding target hash value. Further, because the target hash code values of the same data category are identical, the polarization identifiers on each bit of the polarization vectors corresponding to all training data blocks of the same data category are consistent, and the model output values obtained based on those polarization identifiers are consistent; that is, for model input data belonging to the same data category, the trained preset hash coding model outputs the same model output value.
Additionally, it should be noted that each bit in the hash vector corresponds to one polarization output channel of the preset hash coding model, and the preset forced polarization parameter corresponding to each polarization output channel is obtained by training the preset hash coding model; the preset forced polarization parameters of different polarization output channels may therefore be the same or different. A polarization output channel is configured to forcibly polarize the value on its bit of the hash vector through the corresponding polarization loss function based on the preset forced polarization parameter, and to output the coded value of the corresponding bit of the initial hash code value.
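The polarization loss itself is simple enough to state directly in code. A minimal NumPy sketch, applying L(v, t^c) = max(m - v * t^c, 0) bitwise with a per-channel forced polarization parameter:

```python
import numpy as np

def polarization_loss(v: np.ndarray, t: np.ndarray, m: np.ndarray) -> np.ndarray:
    """Bitwise polarization loss L(v, t^c) = max(m - v * t^c, 0).

    v: the value on each hash vector bit (one polarization output channel each)
    t: the target hash values, each in {-1, +1}
    m: the preset forced polarization parameter of each channel
    The loss on a bit is 0 exactly when v has the same sign as t and |v| >= m,
    i.e. the value has been pushed away from 0 in the target direction.
    """
    return np.maximum(m - v * t, 0.0)

# The example from the description: m = 1, t^c = 1, v = -1 gives L = 2;
# forcing v to 1 drives the loss to 0.
print(polarization_loss(np.array([-1.0]), np.array([1.0]), np.array([1.0])))  # [2.]
print(polarization_loss(np.array([1.0]), np.array([1.0]), np.array([1.0])))   # [0.]
```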
Wherein the step of inputting the training data into the preset hash coding model to perform hash coding on the training data based on the polarization loss function to obtain an initial hash coding value includes:
step A10, inputting the training data into the preset Hash coding model, and carrying out Hash on the training data to obtain a training Hash result;
in this embodiment, it should be noted that the preset hash coding model includes a hidden layer and a hash layer, where the hidden layer is one or more layers of neural networks for performing data processing, the data processing including convolution, pooling, and the like, and the hash layer is one or more layers of neural networks for hashing.
The training data is input into the preset hash coding model and hashed to obtain a training hash result. Specifically, a training data block in the training data is extracted, the to-be-processed training matrix corresponding to the training data block is input into the hidden layer, and convolution and pooling are performed on the to-be-processed training matrix a preset number of times to obtain a feature representation matrix; the feature representation matrix is then input into the hash layer and fully connected to obtain a hash vector, and the hash vector is used as the training hash result.
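A PyTorch-style sketch of such a model is given below. The layer sizes and depth are illustrative assumptions; the description above only fixes the structure of a convolution-and-pooling hidden layer followed by a fully connected hash layer:

```python
import torch
import torch.nn as nn

class HashCodingModel(nn.Module):
    """Sketch of the preset hash coding model: a hidden layer performing
    convolution and pooling, then a fully connected hash layer emitting one
    real value per hash bit (per polarization output channel)."""

    def __init__(self, in_channels: int = 1, code_bits: int = 16):
        super().__init__()
        self.hidden = nn.Sequential(                 # hidden layer
            nn.Conv2d(in_channels, 8, kernel_size=3, padding=1),  # convolution
            nn.ReLU(),
            nn.MaxPool2d(2),                         # pooling
        )
        self.hash_layer = nn.LazyLinear(code_bits)   # fully connected hash layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.hidden(x)                    # feature representation matrix
        hash_vector = self.hash_layer(features.flatten(1))
        return hash_vector                           # the training hash result
```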
Step A20, polarizing the training hash result based on the polarization loss function to obtain a polarized result;
in this embodiment, the training hash result is polarized based on the polarization loss function to obtain a polarization result. Specifically, each bit in the hash vector is polarized based on the polarization loss function to obtain a polarization vector, and the polarization vector is used as the polarization result. For example, assuming the hash vector is (-1, -8) and the target hash values of the two bits are (+1, -1), the polarization vector (1, -8) is obtained.
Step a30, determining the initial hash code value based on the polarization result.
In this embodiment, the initial hash code value is determined based on the polarization result, specifically, a polarization identifier corresponding to each bit in the polarization result is extracted, where the polarization identifier is a positive or negative sign of the bit, and then the initial hash code value is determined based on each polarization identifier, for example, if the polarization result is (1, -8, -7, 0.9), the initial hash code value is 1001.
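A one-line Python sketch of this read-out, reproducing the (1, -8, -7, 0.9) → 1001 example from the description:

```python
import numpy as np

def to_hash_code(polarization_result: np.ndarray) -> str:
    """Read the polarization identifier (the sign) of every bit:
    a positive value becomes '1', a negative value becomes '0'."""
    return "".join("1" if value > 0 else "0" for value in polarization_result)

print(to_hash_code(np.array([1.0, -8.0, -7.0, 0.9])))  # 1001
```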
Step S222, calculating a training Hamming distance between the initial Hash code value and the target Hash code value, and comparing the training Hamming distance with a preset Hamming distance threshold value;
in this embodiment, a training Hamming distance between the initial hash code value and the target hash code value is calculated, and the training Hamming distance is compared with a preset Hamming distance threshold. Specifically, the value on each bit of the initial hash code value is compared with the value on the corresponding bit of the target hash code value, the number of bits on which the initial hash code value and the target hash code value differ is determined and used as the training Hamming distance, and the training Hamming distance is compared with the preset Hamming distance threshold. For example, if the initial hash code value is the vector (1, 1, 1, 1) and the target hash code value is the vector (-1, 1, 1, -1), the number of differing bits is 2, and the training Hamming distance is 2.
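In code, the training Hamming distance is a straightforward bit count; the sketch below treats the hash code values as bit strings, which matches the worked example above:

```python
def hamming_distance(initial_code: str, target_code: str) -> int:
    """Count the bit positions where the initial hash code value and the
    target hash code value differ."""
    assert len(initial_code) == len(target_code)
    return sum(a != b for a, b in zip(initial_code, target_code))

# The example from the description: (1, 1, 1, 1) vs (-1, 1, 1, -1) differ on
# two bits, so the training Hamming distance is 2.
print(hamming_distance("1111", "0110"))  # 2
```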
Step S223, if the training hamming distance is greater than the preset hamming distance threshold, determining that the preset hash coding model does not reach the preset iteration end condition, and optimizing the polarization loss function based on the initial hash coding value;
in this embodiment, if the training Hamming distance is greater than the preset Hamming distance threshold, it is determined that the preset hash coding model has not reached the preset iteration ending condition, and the polarization loss function is optimized based on the initial hash code value. Specifically, if the training Hamming distance is greater than the preset Hamming distance threshold, it is determined that the polarization loss function has not converged on all bits of the to-be-hashed training matrix, that is, the polarization loss function has not converged, and the preset hash coding model has therefore not reached the preset iteration ending condition. The one or more bits on which the initial hash code value and the target hash code value differ are then determined, the non-converged polarization output channels corresponding to those differing bits are identified, and the preset forced polarization parameters in the polarization loss functions corresponding to the non-converged polarization output channels are adjusted, where a non-converged polarization output channel is a polarization output channel corresponding to a non-converged polarization loss function. The preset hash coding model includes one or more polarization output channels, and the number of polarization output channels is related to the number of bits of the to-be-hashed training matrix, that is, one bit corresponds to one polarization output channel.
Step S224, retraining the preset Hash coding model based on the optimized polarization loss function until the training Hamming distance is smaller than or equal to the preset Hamming distance threshold value;
in this embodiment, based on the optimized polarization loss function, the training of the preset hash coding model is performed again until the training hamming distance is less than or equal to the preset hamming distance threshold, specifically, the to-be-hashed training matrix corresponding to the training data is obtained again, and based on the to-be-hashed training matrix obtained again, the iterative training is performed again on the optimized preset hash coding model corresponding to the polarization loss function, so as to continuously optimize the polarization loss function until the training hamming distance is less than or equal to the preset hamming distance threshold.
Step S225, if the training hamming distance is less than or equal to the preset hamming distance threshold, determining that the preset hash coding model reaches the preset iteration end condition, and using the preset hash coding model as the target hash coding model.
In this embodiment, if the training hamming distance is less than or equal to the preset hamming distance threshold, it is determined that the preset hash coding model reaches the preset iteration end condition, and the preset hash coding model is used as the target hash coding model, specifically, if the training hamming distance is less than or equal to the preset hamming distance threshold, it is determined that the preset hash coding model reaches the preset iteration end condition, that is, a polarization loss function corresponding to each polarization output channel in the preset hash coding model converges, and the preset hash coding model is used as the target hash coding model.
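Putting steps S221 through S225 together, the sketch below shows one plausible shape for the iteration; `model.encode` and `adjust_margin` are hypothetical helpers standing in for the forward pass and the per-channel parameter adjustment, which the description does not pin down at this level of detail:

```python
def train_hash_model(model, training_pairs, m, threshold=0, max_rounds=100):
    """Illustrative iteration for steps S221-S225 (details assumed): encode
    each training data block, measure the training Hamming distance against
    its target hash code value, adjust the forced polarization parameter of
    every non-converged channel, and stop once every distance is at or
    below the preset Hamming distance threshold."""
    for _ in range(max_rounds):
        worst = 0
        for block, target_code in training_pairs:
            initial_code = model.encode(block, m)            # step S221 (hypothetical helper)
            d = hamming_distance(initial_code, target_code)  # step S222
            worst = max(worst, d)
            if d > threshold:                                # step S223
                for i, (a, b) in enumerate(zip(initial_code, target_code)):
                    if a != b:                               # non-converged channel
                        m[i] = adjust_margin(m[i])           # hypothetical helper
        if worst <= threshold:                               # step S225: condition reached
            break
    return model  # the target hash coding model
```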
Step S30, updating the random leaf node layer of the to-be-updated Merkle tree based on the target hash coding model and the update data, to obtain the target Merkle tree.
In this embodiment, the random leaf node layer of the to-be-updated Merkle tree is updated based on the target hash coding model and the update data to obtain the target Merkle tree. Specifically, new random leaf nodes corresponding to the newly added data blocks are generated, the newly added data blocks are input into the target hash coding model, and output hash code values are obtained; based on these output hash code values, the target parent nodes of the newly added data blocks are determined, and the new random leaf nodes are connected to the corresponding target parent nodes to obtain the target Merkle tree.
Further, the model parameters of the target hash coding model are sent to each Merkle tree user, and each Merkle tree user can construct the target Merkle tree based on the model parameters.
Wherein the update data comprises a newly added data block,
the step of updating the random leaf node layer of the to-be-updated Merkle tree based on the target hash coding model and the update data to obtain the target Merkle tree includes the following steps:
step S31, generating a target random leaf node corresponding to the newly added data block, and inputting the newly added data block into the target hash coding model to obtain an output hash coding value corresponding to the newly added data block;
in this embodiment, it should be noted that the target random leaf node is a Merkle tree node that stores the newly added data block, and the update data includes at least one newly added data block.
Generating target random leaf nodes corresponding to the newly added data blocks, inputting the newly added data blocks into the target hash coding model, and obtaining output hash coding values corresponding to the newly added data blocks, specifically, generating target random leaf nodes corresponding to each newly added data block, inputting each newly added data block into the target hash coding model, and performing hash coding on each newly added data block respectively, so as to obtain output hash coding values corresponding to each newly added data block, wherein newly added data blocks belonging to the same data category correspond to the same output hash coding value.
Step S32, matching a target parent node corresponding to the target random leaf node based on the output hash code value;
in this embodiment, it should be noted that one target parent node corresponds to one target hash code value.
Target parent nodes corresponding to the target random leaf nodes are matched based on the output hash code values. Specifically, each output hash code value is compared with each target hash code value, the selected target hash code value identical to each output hash code value is determined among the target hash code values, and the Merkle tree node corresponding to each selected target hash code value is used as the target parent node of the corresponding target random leaf node.
Step S33, updating the random leaf node layer based on the target random leaf node and the target parent node, to obtain the target Merkle tree.
In this embodiment, the random leaf node layer is updated based on the target random leaf node and the target parent node to obtain the target Merkle tree. Specifically, each target random leaf node is connected to its corresponding target parent node, thereby updating the random leaf node layer and obtaining the target Merkle tree.
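A compact sketch of this update path follows. The data structures (a `node_by_code` index from target hash code value to first-layer intermediate node, and the `new_leaf` and `children` helpers) are assumptions made for the example; the point is that no node above the matched parent is touched:

```python
def update_leaf_layer(tree, new_blocks, target_model):
    """Steps S31-S33 under assumed data structures: encode each newly added
    data block with the target hash coding model, match the first-layer
    intermediate node storing the identical target hash code value, and
    connect a new target random leaf node to it."""
    for block in new_blocks:
        output_code = target_model.encode(block)  # step S31: output hash code value
        parent = tree.node_by_code[output_code]   # step S32: match target parent node
        leaf = tree.new_leaf(block)               # target random leaf node
        parent.children.append(leaf)              # step S33: only the leaf layer changes
```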
In this embodiment, update data and the to-be-updated Merkle tree are obtained, an original data set and a preset hash coding model corresponding to the to-be-updated Merkle tree are determined, a target data set is determined based on the update data and the original data set, the preset hash coding model is trained and updated based on the target data set to obtain the target hash coding model, and the random leaf node layer of the to-be-updated Merkle tree is updated based on the target hash coding model and the update data, thereby obtaining the target Merkle tree. That is, this embodiment provides a to-be-updated Merkle tree constructed based on a preset hash coding model: when the to-be-updated Merkle tree needs to be updated, the update data and the original data set corresponding to the to-be-updated Merkle tree are obtained, the preset hash coding model is trained and updated based on the update data and the original data set to obtain a target hash coding model, and the random leaf node layer of the to-be-updated Merkle tree is updated based on the target hash coding model and the update data to obtain the target Merkle tree. In other words, in this embodiment, when updating the to-be-updated Merkle tree, no matter how much data on the random leaf nodes is updated, the preset hash coding model only needs to be trained and updated once, after which updating the random leaf node layer based on the trained hash coding model completes the update of the to-be-updated Merkle tree. Compared with the current Merkle tree updating method, when a random leaf node of the Merkle tree needs to be updated, only the random leaf node layer of the Merkle tree has to be updated, and all the nodes of the tree branch where the random leaf node sits do not need to be updated; this reduces the amount of computation during a Merkle tree update and thereby its computational complexity, solving the technical problem of low computational efficiency in Merkle tree updating.
Further, referring to fig. 3, based on the first embodiment in the present application, in another embodiment of the present application, the original data set includes one or more original data blocks and a target hash code value corresponding to each of the original data blocks, and the to-be-updated Merkle tree includes one or more random leaf nodes, one or more intermediate nodes, and a root node.
Before the step of acquiring the update data and the to-be-updated Merkle tree and determining the original data set and the preset hash coding model corresponding to the to-be-updated Merkle tree, the Merkle tree updating method includes the following steps:
step B10, generating random leaf nodes corresponding to the original data blocks, wherein one original data block corresponds to one random leaf node;
in this embodiment, it should be noted that a random leaf node is a Merkle tree node that stores an original data block.
Step B20, generating each intermediate node and the root node based on each target hash code value and the preset hash coding model.
In this embodiment, each of the intermediate nodes includes one or more first-layer intermediate nodes and one or more upper-layer intermediate nodes.
Each intermediate node and the root node are generated based on each target hash code value and the preset hash coding model. Specifically, the first-layer intermediate nodes corresponding to the target hash code values are generated, where a first-layer intermediate node is a Merkle tree node storing a target hash code value and one target hash code value corresponds to one first-layer intermediate node. Each target hash code value is then input into the preset hash coding model and hash-coded to obtain the second-layer hash code value corresponding to each target hash code value, where one second-layer hash code value corresponds to the one or more target hash code values belonging to the same data category. The second-layer intermediate nodes corresponding to the second-layer hash code values are then generated, the second-layer hash code values are in turn used as input of the preset hash coding model, and intermediate nodes of further layers are generated in the same way until the preset hash coding model outputs a single hash code value, whereupon the root node corresponding to that single hash code value is generated and the construction of the to-be-updated Merkle tree is completed.
Wherein each of the intermediate nodes includes one or more first-layer intermediate nodes and one or more upper-layer intermediate nodes;
the step of generating each intermediate node and the root node based on each target hash code value and the preset hash coding model includes:
step B21, generating a first-layer intermediate node corresponding to each target hash code value, where one target hash code value corresponds to one first-layer intermediate node;
in this embodiment, it should be noted that a first-layer intermediate node is a Merkle tree node that stores a target hash code value.
And step B22, cyclically generating upper-layer intermediate nodes corresponding to the first-layer intermediate nodes based on the preset hash coding model until the root node is obtained.
In this embodiment, upper-layer intermediate nodes corresponding to the first-layer intermediate nodes are cyclically generated based on the preset hash coding model until the root node is obtained. Specifically, the target hash code value corresponding to each first-layer intermediate node is input into the preset hash coding model, the second-layer hash code value corresponding to each target hash code value is output, and the second-layer intermediate node corresponding to each second-layer hash code value is generated; based on the preset hash coding model, intermediate nodes of further layers are then generated in the same way until the preset hash coding model outputs a single hash code value, whereupon the root node corresponding to that single hash code value is generated and the construction of the to-be-updated Merkle tree is completed.
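The layer-by-layer construction of steps B21 and B22 can be sketched as follows; the `Node` class and `preset_model.encode` are assumptions made for the example, and the loop relies on the property stated earlier that model inputs of the same data category share one model output value, so each pass shrinks the layer until a single root value remains:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.children = []

def build_merkle_tree(blocks, target_codes, preset_model):
    """Steps B10, B21 and B22 (helpers assumed): leaves store the original
    data blocks, first-layer intermediate nodes store the target hash code
    values, and upper layers come from feeding the previous layer's codes
    through the preset hash coding model until one value remains."""
    # B10 + B21: one first-layer intermediate node per distinct target hash
    # code value; every leaf of that data category points to it.
    first_layer = {}
    for block, code in zip(blocks, target_codes):
        first_layer.setdefault(code, Node(code)).children.append(Node(block))
    layer = list(first_layer.values())
    # B22: cycle upward; same-category inputs share one parent node.
    while len(layer) > 1:
        parents = {}
        for node in layer:
            parent_code = preset_model.encode(node.value)  # assumed model API
            parents.setdefault(parent_code, Node(parent_code)).children.append(node)
        layer = list(parents.values())
    return layer[0]  # the root node
```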
In this embodiment, first, a random leaf node corresponding to each original data block is generated, where one original data block corresponds to one random leaf node, and then each intermediate node and the root node are generated based on each target hash code value and the preset hash coding model. That is, this embodiment provides a method for constructing a Merkle tree based on a preset hash coding model: the random leaf nodes corresponding to the original data blocks are generated first, the intermediate nodes corresponding to the target hash code values are generated next, and the remaining intermediate nodes and the root node are generated based on the preset hash coding model and the target hash code values, completing the construction of the to-be-updated Merkle tree. Because this embodiment can directly generate the hash code value corresponding to an input data block based on the preset hash coding model, no complex hash transformation needs to be performed on each input data block, which improves the construction efficiency of the Merkle tree. Further, when updating the to-be-updated Merkle tree, only one training update of the preset hash coding model of the to-be-updated Merkle tree is needed to complete the update, avoiding the situation where all nodes of the tree branch on which a random leaf node sits must be updated; this reduces the amount of computation during a Merkle tree update and thereby its computational complexity, and so lays a foundation for solving the technical problem of low computational efficiency in Merkle tree updating.
Referring to fig. 4, fig. 4 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application.
As shown in fig. 4, the Merkle tree updating device may include: a processor 1001, such as a CPU, a memory 1005, and a communication bus 1002. The communication bus 1002 is used for realizing connection communication between the processor 1001 and the memory 1005. The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a memory device separate from the processor 1001 described above.
Optionally, the Merkle tree updating device may further include a rectangular user interface, a network interface, a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and the like. The rectangular user interface may comprise a display screen (Display) and an input sub-module such as a keyboard (Keyboard), and the optional rectangular user interface may also comprise a standard wired interface and a wireless interface. The network interface may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface).
Those skilled in the art will appreciate that the configuration of the Merkle tree updating device shown in fig. 4 does not constitute a limitation of the Merkle tree updating device, and it may include more or fewer components than those shown, or some components may be combined, or the components may be differently arranged.
As shown in fig. 4, the memory 1005, which is a type of computer storage medium, may include an operating system, a network communication module, and a Merkle tree update program. The operating system is a program that manages and controls the hardware and software resources of the Merkle tree updating device, supporting the running of the Merkle tree update program as well as other software and/or programs. The network communication module is used to enable communication between the various components within the memory 1005, as well as with other hardware and software in the Merkle tree update system.
In the Merkle tree updating device shown in fig. 4, the processor 1001 is configured to execute the Merkle tree update program stored in the memory 1005 and implement the steps of the Merkle tree updating method described in any one of the above.
The specific implementation of the Merkle tree updating device in the present application is substantially the same as the embodiments of the Merkle tree updating method described above, and will not be described herein again.
The embodiment of the present application further provides a Merkle tree updating apparatus, where the Merkle tree updating apparatus is applied to a Merkle tree updating device, and the Merkle tree updating apparatus includes:
the determining module is used for acquiring the update data and the to-be-updated Merkle tree, and determining an original data set and a preset hash coding model corresponding to the to-be-updated Merkle tree;
the training module is used for training and updating the preset hash coding model based on the update data and the original data set to obtain a target hash coding model;
and the updating module is used for updating the random leaf node layer of the to-be-updated Merkle tree based on the target hash coding model and the update data to obtain the target Merkle tree.
Optionally, the update module includes:
the hash coding submodule is used for generating a target random leaf node corresponding to the newly added data block, and inputting the newly added data block into the target hash coding model to obtain an output hash code value corresponding to the newly added data block;
the matching submodule is used for matching a target parent node corresponding to the target random leaf node based on the output hash code value;
and the updating submodule is used for updating the random leaf node layer based on the target random leaf node and the target parent node to obtain the target Merkle tree.
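By way of illustration only, the following Python sketch shows one possible realization of the submodule flow just described. The application specifies the steps rather than an interface, so every identifier below (encode, parent_for_code, recompute_path, and so on) is a hypothetical stand-in, not an API defined by this application.

```python
import hashlib
import os

def add_data_block(tree, target_encoder, new_block: bytes):
    # Generate a target random leaf node for the newly added data block.
    # Hashing a random 32-byte salt with SHA-256 is an assumption; the text
    # only requires the leaf node to be random.
    target_leaf = hashlib.sha256(os.urandom(32)).hexdigest()
    # Obtain the output hash code value of the block from the trained model,
    # e.g. a bit string such as "0110...".
    output_code = target_encoder.encode(new_block)
    # Match the target parent node associated with this output hash code
    # value, attach the leaf under it, and rehash the path up to the root.
    target_parent = tree.parent_for_code(output_code)  # assumed lookup helper
    target_parent.attach(target_leaf)
    tree.recompute_path(target_parent)
    return tree
```

Because only the affected leaf and its path to the root are recomputed, an insertion of this kind avoids rebuilding the remainder of the tree.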
Optionally, the training module comprises:
the extraction submodule is used for determining a target data set based on the update data and the original data set, and determining a data category set corresponding to the target data set;
the acquisition submodule is used for acquiring a target hash code value set corresponding to the data category set, and determining training data corresponding to the target data set and target hash code values corresponding to the target hash code value set;
and the iterative training submodule is used for performing iterative training on the preset hash coding model based on the training data and the target hash code values, so as to optimize a polarization loss function corresponding to the preset hash coding model, until the preset hash coding model reaches a preset iteration ending condition, to obtain the target hash coding model.
Optionally, the iterative training submodule includes:
a hash coding unit, configured to input the training data into the preset hash coding model, so as to hash-code the training data based on the polarization loss function and obtain an initial hash code value;
a comparison unit, configured to calculate a training Hamming distance between the initial hash code value and the target hash code value, and to compare the training Hamming distance with a preset Hamming distance threshold;
a first determination unit, configured to determine, if the training Hamming distance is greater than the preset Hamming distance threshold, that the preset hash coding model has not reached the preset iteration ending condition, and to optimize the polarization loss function based on the initial hash code value;
a retraining unit, configured to retrain the preset hash coding model based on the optimized polarization loss function until the training Hamming distance is less than or equal to the preset Hamming distance threshold;
and a second determination unit, configured to determine, if the training Hamming distance is less than or equal to the preset Hamming distance threshold, that the preset hash coding model has reached the preset iteration ending condition, and to take the preset hash coding model as the target hash coding model.
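Taken together, these five units form a training loop. The sketch below illustrates that control flow under stated assumptions: model.encode and model.step_polarization_loss are hypothetical method names, since the application defines the loop rather than the model interface. As a worked example of the comparison step, the 4-bit codes 0110 and 0100 differ in exactly one position, so their Hamming distance is 1; with a preset threshold of 1, the iteration ending condition would be met.

```python
import numpy as np

def train_until_converged(model, training_data, target_codes,
                          dist_threshold, max_rounds=1000):
    for _ in range(max_rounds):
        # Hash coding unit: forward pass yielding the initial hash code
        # values, one row of 0/1 bits per training data block.
        initial_codes = model.encode(training_data)
        # Comparison unit: mean training Hamming distance to the targets.
        hamming = np.mean(np.sum(initial_codes != target_codes, axis=1))
        if hamming <= dist_threshold:
            # Second determination unit: the preset iteration ending
            # condition is reached; the current model is the target model.
            return model
        # First determination unit and retraining unit: optimize the
        # polarization loss on the initial codes and train another round.
        model.step_polarization_loss(initial_codes, target_codes)
    return model
```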
Optionally, the hash coding unit includes:
the hash subunit is used for inputting the training data into the preset hash coding model, and hashing the training data to obtain a training hash result;
the polarizing subunit is used for polarizing the training hash result based on the polarization loss function to obtain a polarization result;
and the determining subunit is used for determining the initial hash code value based on the polarization result.
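For concreteness, the following minimal sketch shows the hash, polarize, and determine steps on a batch of real-valued hash outputs. The hinge-style polarization loss max(m - h*t, 0) is borrowed from common deep-hashing practice and is an assumption, not a formula quoted from this application.

```python
import numpy as np

def polarize_and_binarize(train_hash, target_bits, margin=1.0):
    # Map the target hash code bits {0, 1} to signs {-1, +1}.
    t = np.where(target_bits > 0, 1.0, -1.0)
    # Polarization loss: push each real-valued channel of the training hash
    # result past the margin on the side of its target bit.
    loss = np.maximum(margin - train_hash * t, 0.0).mean()
    # Determining subunit: read off the initial hash code value by sign.
    initial_code = (train_hash > 0).astype(np.int8)
    return initial_code, loss
```

Once every channel clears the margin, the loss is zero and the sign-based code matches the target bit pattern exactly.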
Optionally, the Merkle tree updating apparatus further includes:
a first generation module, configured to generate a random leaf node corresponding to each original data block, where one original data block corresponds to one random leaf node;
and a second generation module, configured to generate each intermediate node and the root node based on each target hash code value and the preset hash coding model.
Optionally, the second generation module includes:
the generation submodule is used for generating first-layer intermediate nodes corresponding to the target hash code values, wherein one target hash code value corresponds to one first-layer intermediate node;
and the cyclic generation submodule is used for cyclically generating upper-layer intermediate nodes corresponding to the first-layer intermediate nodes based on the preset hash coding model until the root node is obtained.
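The cyclic, layer-by-layer construction can be illustrated as follows. Pairwise SHA-256 over concatenated child values is used here as a conventional stand-in for the preset hash coding model, which this application leaves abstract.

```python
import hashlib

def build_upper_layers(first_layer_nodes):
    layer = list(first_layer_nodes)  # the first-layer intermediate nodes
    while len(layer) > 1:
        if len(layer) % 2:           # duplicate the last node on odd layers
            layer.append(layer[-1])
        # Cyclically generate the next upper layer from pairs of children.
        layer = [hashlib.sha256((layer[i] + layer[i + 1]).encode()).hexdigest()
                 for i in range(0, len(layer), 2)]
    return layer[0]                  # the root node

# Example: three first-layer nodes collapse into two upper-layer nodes
# (the last node is duplicated to complete the pair) and then a single root.
root = build_upper_layers(["a1", "b2", "c3"])
```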
The embodiments of the Merkle tree updating apparatus of the present application are substantially the same as the embodiments of the Merkle tree updating method described above and are not repeated here.
The above description covers only preferred embodiments of the present application and is not intended to limit its scope. Any equivalent structural or process modification made using the contents of the specification and the drawings, or any direct or indirect application in other related technical fields, falls likewise within the patent protection scope of the present application.

Claims (10)

1. A Merkle tree updating method, comprising:
acquiring update data and a Merkle tree to be updated, and determining an original data set and a preset hash coding model corresponding to the Merkle tree to be updated;
training and updating the preset hash coding model based on the update data and the original data set to obtain a target hash coding model;
updating a random leaf node layer of the Merkle tree to be updated based on the target hash coding model and the update data to obtain a target Merkle tree;
wherein the step of training and updating the preset hash coding model based on the update data and the original data set to obtain the target hash coding model comprises:
determining a target data set based on the update data and the original data set, and determining a data category set corresponding to the target data set, wherein the target data set comprises one or more training data blocks, and the data category set comprises the data category corresponding to each training data block;
acquiring a target hash code value set corresponding to the data category set, and determining training data corresponding to the target data set and target hash code values corresponding to the target hash code value set, wherein the target hash code value set comprises the target hash code value corresponding to each data category;
and performing iterative training and updating on the preset hash coding model based on the training data and the target hash code values until a preset iteration ending condition is met, to obtain the target hash coding model.
2. The Merkle tree updating method according to claim 1, wherein the update data includes a newly added data block,
and the step of updating the random leaf node layer of the Merkle tree to be updated based on the target hash coding model and the update data to obtain the target Merkle tree comprises:
generating a target random leaf node corresponding to the newly added data block, and inputting the newly added data block into the target hash coding model to obtain an output hash code value corresponding to the newly added data block;
matching a target parent node corresponding to the target random leaf node based on the output hash code value;
and updating the random leaf node layer based on the target random leaf node and the target parent node to obtain the target Merkle tree.
3. The Merkle tree updating method according to claim 1, wherein the step of training and updating the preset hash coding model based on the update data and the original data set to obtain the target hash coding model comprises:
determining a target data set based on the update data and the original data set, and determining a data category set corresponding to the target data set;
acquiring a target hash code value set corresponding to the data category set, and determining training data corresponding to the target data set and target hash code values corresponding to the target hash code value set;
and performing iterative training on the preset hash coding model based on the training data and the target hash code values to optimize a polarization loss function corresponding to the preset hash coding model, until the preset hash coding model reaches a preset iteration ending condition, to obtain the target hash coding model.
4. The Merkle tree updating method according to claim 3, wherein the step of performing iterative training on the preset hash coding model based on the training data and the target hash code values to optimize the polarization loss function corresponding to the preset hash coding model, until the preset hash coding model reaches the preset iteration ending condition, to obtain the target hash coding model comprises:
inputting the training data into the preset hash coding model, so as to hash-code the training data based on the polarization loss function and obtain an initial hash code value;
calculating a training Hamming distance between the initial hash code value and the target hash code value, and comparing the training Hamming distance with a preset Hamming distance threshold;
if the training Hamming distance is greater than the preset Hamming distance threshold, determining that the preset hash coding model has not reached the preset iteration ending condition, and optimizing the polarization loss function based on the initial hash code value;
retraining the preset hash coding model based on the optimized polarization loss function until the training Hamming distance is less than or equal to the preset Hamming distance threshold;
and if the training Hamming distance is less than or equal to the preset Hamming distance threshold, determining that the preset hash coding model has reached the preset iteration ending condition, and taking the preset hash coding model as the target hash coding model.
5. The Merkle tree updating method according to claim 4, wherein the step of inputting the training data into the preset hash coding model, so as to hash-code the training data based on the polarization loss function and obtain the initial hash code value comprises:
inputting the training data into the preset hash coding model, and hashing the training data to obtain a training hash result;
polarizing the training hash result based on the polarization loss function to obtain a polarization result;
and determining the initial hash code value based on the polarization result.
6. The Merkle tree updating method according to claim 1, wherein the original data set includes one or more original data blocks and a target hash code value corresponding to each of the original data blocks, and the Merkle tree to be updated includes one or more random leaf nodes, one or more intermediate nodes, and a root node;
and before the step of acquiring the update data and the Merkle tree to be updated, and determining the original data set and the preset hash coding model corresponding to the Merkle tree to be updated, the Merkle tree updating method further comprises:
generating a random leaf node corresponding to each of the original data blocks, wherein one original data block corresponds to one random leaf node;
and generating each intermediate node and the root node based on each target hash code value and the preset hash coding model.
7. The Merkle tree updating method according to claim 6, wherein each of the intermediate nodes comprises one or more first-layer intermediate nodes and one or more upper-layer intermediate nodes;
and the step of generating each intermediate node and the root node based on each target hash code value and the preset hash coding model comprises:
generating a first-layer intermediate node corresponding to each target hash code value, wherein one target hash code value corresponds to one first-layer intermediate node;
and cyclically generating upper-layer intermediate nodes corresponding to the first-layer intermediate nodes based on the preset hash coding model until the root node is obtained.
8. A Merkle tree updating apparatus, comprising:
the determining module is used for acquiring update data and a Merkle tree to be updated, and determining an original data set and a preset hash coding model corresponding to the Merkle tree to be updated;
the training module is used for training and updating the preset hash coding model based on the update data and the original data set to obtain a target hash coding model;
and the updating module is used for updating a random leaf node layer of the Merkle tree to be updated based on the target hash coding model and the update data to obtain a target Merkle tree;
wherein the training module is further configured to:
determine a target data set based on the update data and the original data set, and determine a data category set corresponding to the target data set, wherein the target data set comprises one or more training data blocks, and the data category set comprises the data category corresponding to each training data block;
acquire a target hash code value set corresponding to the data category set, and determine training data corresponding to the target data set and target hash code values corresponding to the target hash code value set, wherein the target hash code value set comprises the target hash code value corresponding to each data category;
and perform iterative training and updating on the preset hash coding model based on the training data and the target hash code values until a preset iteration ending condition is met, to obtain the target hash coding model.
9. Merkle tree updating equipment, comprising: a memory, a processor, and a program stored on the memory for implementing the Merkle tree updating method, wherein:
the memory is used for storing the program for implementing the Merkle tree updating method;
and the processor is configured to execute the program for implementing the Merkle tree updating method, so as to implement the steps of the Merkle tree updating method according to any one of claims 1 to 7.
10. A readable storage medium, characterized in that the readable storage medium has stored thereon a program for implementing a Merkle tree updating method, and the program is executed by a processor to implement the steps of the Merkle tree updating method according to any one of claims 1 to 7.
CN202010453608.5A 2020-05-22 2020-05-22 Mercker tree updating method, device, equipment and readable storage medium Active CN111625258B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010453608.5A CN111625258B (en) 2020-05-22 2020-05-22 Mercker tree updating method, device, equipment and readable storage medium
PCT/CN2021/093397 WO2021233182A1 (en) 2020-05-22 2021-05-12 Merkle tree updating method, apparatus and device, and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010453608.5A CN111625258B (en) 2020-05-22 2020-05-22 Mercker tree updating method, device, equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN111625258A CN111625258A (en) 2020-09-04
CN111625258B (en) 2021-08-27

Family

ID=72260731

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010453608.5A Active CN111625258B (en) 2020-05-22 2020-05-22 Mercker tree updating method, device, equipment and readable storage medium

Country Status (2)

Country Link
CN (1) CN111625258B (en)
WO (1) WO2021233182A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111625258B (en) * 2020-05-22 2021-08-27 WeBank Co., Ltd (深圳前海微众银行股份有限公司) Mercker tree updating method, device, equipment and readable storage medium
CN114281793A (en) * 2020-09-28 2022-04-05 华为技术有限公司 Data verification method, device and system
CN112380209B (en) * 2020-10-29 2021-08-20 华东师范大学 Block chain multi-channel state data-oriented structure tree aggregation method
CN112559518A (en) * 2020-12-10 2021-03-26 杭州趣链科技有限公司 Merck tree updating method, terminal device and storage medium
CN113377979B (en) * 2021-06-09 2023-09-19 中国国家铁路集团有限公司 Merck tree-based train running scheme comparison generation optimization method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105868369A (en) * 2016-03-30 2016-08-17 电子科技大学 Data model verification system and method based on Merkle tree structure
CN106897368A (en) * 2017-01-16 2017-06-27 西安电子科技大学 Database update operating method is set and its be can verify that in the summation of Merkle Hash
WO2018109260A1 (en) * 2016-12-16 2018-06-21 Nokia Technologies Oy Secure document management
CN109829549A (en) * 2019-01-30 2019-05-31 宁波大学 Hash learning method and its unsupervised online Hash learning method based on the tree that develops
CN110033264A (en) * 2019-01-31 2019-07-19 阿里巴巴集团控股有限公司 Construct the corresponding Mei Keer tree of block, simple payment verification method and device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103218574A (en) * 2013-04-09 2013-07-24 电子科技大学 Hash tree-based data dynamic operation verifiability method
CN105512273A (en) * 2015-12-03 2016-04-20 中山大学 Image retrieval method based on variable-length depth hash learning
CN106845280A (en) * 2017-03-14 2017-06-13 广东工业大学 A kind of Merkle Hash trees cloud data integrity auditing method and system
CN110134803B (en) * 2019-05-17 2020-12-11 哈尔滨工程大学 Image data quick retrieval method based on Hash learning
CN110704664B (en) * 2019-08-28 2022-04-05 宁波大学 Hash retrieval method
CN110688501B (en) * 2019-08-28 2022-04-05 宁波大学 Hash retrieval method of full convolution network based on deep learning
CN111625258B (en) * 2020-05-22 2021-08-27 WeBank Co., Ltd (深圳前海微众银行股份有限公司) Mercker tree updating method, device, equipment and readable storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105868369A (en) * 2016-03-30 2016-08-17 电子科技大学 Data model verification system and method based on Merkle tree structure
WO2018109260A1 (en) * 2016-12-16 2018-06-21 Nokia Technologies Oy Secure document management
CN106897368A (en) * 2017-01-16 2017-06-27 西安电子科技大学 Database update operating method is set and its be can verify that in the summation of Merkle Hash
CN109829549A (en) * 2019-01-30 2019-05-31 宁波大学 Hash learning method and its unsupervised online Hash learning method based on the tree that develops
CN110033264A (en) * 2019-01-31 2019-07-19 阿里巴巴集团控股有限公司 Construct the corresponding Mei Keer tree of block, simple payment verification method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Merkle Tree 学习 (Merkle Tree Study); 风之舞555; 博客园 (cnblogs): https://www.cnblogs.com/fengzhiwu/p/5524324.html; 2016-05-27; pp. 1-17 *

Also Published As

Publication number Publication date
CN111625258A (en) 2020-09-04
WO2021233182A1 (en) 2021-11-25

Similar Documents

Publication Publication Date Title
CN111625258B (en) Mercker tree updating method, device, equipment and readable storage medium
CN111967609B (en) Model parameter verification method, device and readable storage medium
CN110119474B (en) Recommendation model training method, prediction method and device based on recommendation model
US8538938B2 (en) Interactive proof to validate outsourced data stream processing
CN111626408B (en) Hash coding method, device and equipment and readable storage medium
Ghosh et al. Communication-efficient and byzantine-robust distributed learning with error feedback
CN111158912A (en) Task unloading decision method based on deep learning in cloud and mist collaborative computing environment
WO2021232747A1 (en) Data right determination method and device, and readable storage medium
WO2021233183A1 (en) Neural network verification method, apparatus and device, and readable storage medium
CN113011529A (en) Training method, device and equipment of text classification model and readable storage medium
CN106227881A (en) A kind of information processing method and server
CN114239237A (en) Power distribution network simulation scene generation system and method supporting digital twinning
JP2023123636A (en) Hyper parameter tuning method, device and program
CN116957112A (en) Training method, device, equipment and storage medium of joint model
WO2022148087A1 (en) Method and apparatus for training programming language translation model, device, and storage medium
CN108717444A (en) A kind of big data clustering method and device based on distributed frame
CN114298319A (en) Method and device for determining joint learning contribution value, electronic equipment and storage medium
CN115114442A (en) Knowledge graph updating method and device, storage medium and electronic equipment
CN111033532B (en) Training method and system for generating countermeasure network, electronic device and storage medium
CN113572679B (en) Account intimacy generation method and device, electronic equipment and storage medium
CN113824546B (en) Method and device for generating information
Nguyen et al. Optimize Coding and Node Selection for Coded Distributed Computing over Wireless Edge Networks
CN111445030A (en) Federal modeling method, device and readable storage medium based on stepwise regression method
CN114818955A (en) Information matching method and device, computer equipment and storage medium
CN116881122A (en) Test case generation method, device, equipment, storage medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant