CN112948875A - Model storage method, model using method, model storing device and electronic equipment - Google Patents
- Publication number
- CN112948875A (application CN202110213323.9A)
- Authority
- CN
- China
- Prior art keywords
- model
- transformation
- chain
- node
- nodes
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G — PHYSICS
- G06 — COMPUTING; CALCULATING OR COUNTING
- G06F — ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00 — Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60 — Protecting data
- G06F21/602 — Providing cryptographic facilities or services
- G06F21/62 — Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6218 — Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database
- G06F21/6227 — Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database, where protection concerns the structure of data, e.g. records, types, queries
Abstract
The invention provides a model storage method, a model using method, a model storage device and an electronic device. The model storage method comprises the following steps: splitting a target model into a plurality of model scripts according to a splitting rule; saving the plurality of model scripts to a blockchain; generating a reference chain of the plurality of model scripts, wherein the reference chain comprises the data operation sequence among the plurality of model scripts and the data operation logic corresponding to each model script; determining a transformation node from the blockchain, and transforming the reference chain through the transformation node based on a transformation parameter to obtain a transformation chain; and saving the transformation chain. In this method, the data operation sequence among the model scripts obtained by splitting the target model, and the data operation logic corresponding to each model script, are encrypted by changing the reference chain. Compared with encryption methods that introduce noise during homomorphic encryption, this method is simple to operate and improves the confidentiality of the model without introducing noise, while also reducing the computational overhead of model decryption, which facilitates use of the model.
Description
Technical Field
The invention relates to the technical field of data encryption, and in particular to a model storage method, a model using method, a model storage device and an electronic device.
Background
Statistical and mathematical models are technical achievements that are costly to produce, especially models that require large amounts of supporting data. In real-world production applications, a model provider does not want a model user to master the model, or its main logic and algorithms, by obtaining too much model information; the model therefore needs to be encrypted to prevent the model user from obtaining more model information than intended.
In the related art, the operation process and the algorithms of a model are usually split, and the split model is then encrypted with a homomorphic encryption algorithm. Because homomorphic encryption algorithms are still immature, noise must be added during encryption to improve the encryption effect. During decryption, however, the noise increases the required computing power and distorts information, which raises the computational overhead and hinders use of the model.
Disclosure of Invention
The invention aims to provide a model storage method, a model using method, a model storage device and an electronic device, so as to reduce the computational overhead of model encryption and model use.
In a first aspect, an embodiment of the present invention provides a model storage method, where the method includes: splitting a target model into a plurality of model scripts according to a preset splitting rule; saving the plurality of model scripts to a preset blockchain; generating a reference chain of the plurality of model scripts, wherein the reference chain comprises the data operation sequence among the plurality of model scripts and the data operation logic corresponding to each model script; determining a transformation node from the blockchain, and transforming the reference chain through the transformation node based on a preset transformation parameter to obtain a transformation chain; and saving the transformation chain.
In an optional embodiment, the step of splitting the target model into a plurality of model scripts according to a preset splitting rule includes: determining the variables contained in the target model according to the data operation logic of the target model; and splitting the target model according to those variables, so that each of the resulting model scripts corresponds to one variable. Each model script comprises a first operation sequence identifier and the data operation sequence of the variable corresponding to that model script.
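As a concrete illustration, the per-variable split described above can be sketched as follows. The toy linear model and all names here are assumptions chosen for illustration; they are not taken from the patent.

```python
# Hypothetical sketch of the per-variable split. The toy model
# score = 0.4*x + 0.6*y and all names are illustrative assumptions.

def split_model(model_terms):
    """Split a model into one script per variable.

    model_terms: ordered (variable, operation) pairs, e.g.
    [("x", "0.4*x"), ("y", "0.6*y")]. Each resulting script carries a
    first operation sequence identifier (its position in the data
    operation order) and the operation for its single variable.
    """
    return [
        {"seq_id": i, "variable": var, "operation": op}
        for i, (var, op) in enumerate(model_terms)
    ]


scripts = split_model([("x", "0.4*x"), ("y", "0.6*y")])
```

Splitting the input data into per-variable data blocks (the next optional embodiment) would follow the same pattern, with a second operation sequence identifier matching each block to its script.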
In an optional implementation, after the step of splitting the target model according to the variables it contains, so that each of the resulting model scripts corresponds to one variable, the method further includes: splitting the input data of the target model according to the same variables, so that each of the resulting data blocks corresponds to one variable, where each data block comprises a second operation sequence identifier and the input data of its corresponding variable; and, for each data block, encrypting the data block, determining a storage node from the blockchain, and storing the encrypted data block in that storage node.
In an optional embodiment, the step of saving the plurality of model scripts to the preset blockchain includes: encrypting each model script to obtain an encrypted packet corresponding to each model script; and determining target nodes from the blockchain and storing each encrypted packet to a target node, where each target node is a node randomly selected from the nodes of the blockchain.
In an optional embodiment, the step of encrypting each model script to obtain an encrypted packet corresponding to each model script includes: encrypting each model script with a homomorphic encryption algorithm to obtain the corresponding encrypted packet. The step of determining a target node from the blockchain and storing each encrypted packet to the target node includes: calling a preset scheduling node, and randomly determining as many target nodes as there are encrypted packets from the nodes of the blockchain, so that each target node stores one encrypted packet.
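A minimal sketch of encrypting the scripts and placing the packets on randomly chosen target nodes might look like this. The XOR cipher below is only a stand-in for the homomorphic encryption algorithm the patent names (which a real implementation would supply), and the node names are invented.

```python
import json
import random
import secrets

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # Stand-in for the homomorphic encryption algorithm named in the
    # patent; XOR is used here only to keep the sketch self-contained.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def store_scripts(model_scripts, chain_nodes, rng=random):
    """Encrypt each model script and place it on a random target node.

    One target node is drawn per encrypted packet, matching the claim
    that the number of target nodes equals the number of packets.
    """
    key = secrets.token_bytes(16)
    targets = rng.sample(chain_nodes, k=len(model_scripts))
    placement = {
        node: xor_encrypt(json.dumps(script).encode(), key)
        for node, script in zip(targets, model_scripts)
    }
    return key, placement

nodes = [f"node-{i}" for i in range(8)]  # invented node names
key, placement = store_scripts([{"seq_id": 0}, {"seq_id": 1}], nodes)
```

Because XOR with a fixed key is self-inverse, applying `xor_encrypt` again with the same key recovers the plaintext, which stands in for the decryption step an operation node would perform later.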
In an alternative embodiment, the reference chain comprises a plurality of base labels, each base label representing an algorithm. The step of generating the reference chain of the plurality of model scripts includes: for each model script, assigning base labels to the model script according to the data operation logic of that script; and connecting the base labels assigned to each model script in sequence, according to the data operation order among the plurality of model scripts, to obtain the reference chain.
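The reference-chain construction can be sketched as below. The four-letter label alphabet and the operation-to-label mapping are assumptions chosen for illustration; the patent specifies only that each base label represents an algorithm.

```python
# Assumed label alphabet and mapping; the patent specifies only that
# each base label represents an algorithm.
OP_TO_BASE = {"multiply": "A", "add": "T", "subtract": "C", "divide": "G"}

def build_reference_chain(model_scripts):
    """Assign one base label per script and connect the labels in the
    scripts' data operation order (by sequence identifier)."""
    ordered = sorted(model_scripts, key=lambda s: s["seq_id"])
    return [OP_TO_BASE[s["op_kind"]] for s in ordered]

reference_chain = build_reference_chain([
    {"seq_id": 1, "op_kind": "add"},
    {"seq_id": 0, "op_kind": "multiply"},
    {"seq_id": 2, "op_kind": "divide"},
])
```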
In an alternative embodiment, the transformation parameters include a transformation direction, a transformation speed, and a pairing transformation rule; the reference chain comprises a plurality of base labels, each representing an algorithm. The step of determining a transformation node from the blockchain and transforming the reference chain through the transformation node based on a preset transformation parameter to obtain the transformation chain includes: sending a transformation instruction to a preset scheduling node based on a preset number of transformations and the transformation parameters; randomly selecting, through the scheduling node, a plurality of transformation nodes from the nodes of the blockchain, where the number of transformation nodes equals the number of transformations; shifting the base labels in the reference chain according to the transformation speed and transformation direction through the transformation nodes, to obtain a shifted reference chain; and replacing the base labels in the shifted reference chain according to the pairing transformation rule, to obtain the transformation chain.
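One transformation round — shift the labels around the chain by the transformation speed in the given direction, then substitute them by the pairing rule — can be sketched as follows. Treating the chain as circular and using A/T and C/G pairing are assumptions suggested by the figures, not rules stated in the text.

```python
# Assumed pairing rule (an involution, so parsing can later undo it).
PAIRING = {"A": "T", "T": "A", "C": "G", "G": "C"}

def transform_round(chain, direction, speed, pairing=PAIRING):
    """Shift the base labels `speed` positions in `direction` around the
    circular chain, then replace each label by its paired label."""
    n = len(chain)
    s = speed % n
    if direction == "clockwise":
        shifted = chain[-s:] + chain[:-s] if s else list(chain)
    else:  # counter-clockwise
        shifted = chain[s:] + chain[:s]
    return [pairing[b] for b in shifted]

transformed = transform_round(["A", "T", "C", "G"], "clockwise", 1)
```

Because the element-wise pairing commutes with the circular shift and is its own inverse, one round is undone by the same round run in the opposite direction — which is what the analysis nodes exploit in the model using method below.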
In a second aspect, an embodiment of the present invention provides a model storage method applied to a scheduling node, the method including: receiving a transformation instruction, where the instruction carries transformation parameters, a number of transformations, and a reference chain; the transformation parameters include a transformation direction, a transformation speed, and a pairing transformation rule, and the number of sets of transformation parameters equals the number of transformations; the reference chain comprises a plurality of base labels, each representing an algorithm; randomly selecting a plurality of transformation nodes from the nodes of a preset blockchain, where the number of transformation nodes equals the number of transformations; sending the reference chain and the first transformation parameter to the first transformation node, so that it transforms the reference chain according to the first transformation parameter to obtain a first base chain; sending the first base chain and the second transformation parameter to the second transformation node, so that it transforms the first base chain according to the second transformation parameter to obtain a second base chain; continuing in this way (sending the second base chain and the third transformation parameter to the third transformation node, and so on) until all the transformation nodes have completed their transformations, and returning the final transformation chain; and saving the transformation chain.
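The scheduling flow above — draw as many random transformation nodes as there are rounds, then thread the chain through them one round at a time — can be sketched like this. Node names, the parameter encoding, and the pairing table are invented for illustration.

```python
import random

# Assumed pairing rule; see the single-round sketch for the rationale.
PAIRING = {"A": "T", "T": "A", "C": "G", "G": "C"}

def transform_round(chain, direction, speed):
    """One round: circular shift by `speed` in `direction`, then pair."""
    n = len(chain)
    s = speed % n
    if direction == "clockwise":
        shifted = chain[-s:] + chain[:-s] if s else list(chain)
    else:
        shifted = chain[s:] + chain[:s]
    return [PAIRING[b] for b in shifted]

def schedule_transformations(reference_chain, round_params, chain_nodes):
    """Pick one random transformation node per round and pass the chain
    through them in order, each node applying its own parameter set."""
    nodes = random.sample(chain_nodes, k=len(round_params))
    current = list(reference_chain)
    executed = []  # (node, params): each node keeps its own parameters
    for node, (direction, speed) in zip(nodes, round_params):
        current = transform_round(current, direction, speed)
        executed.append((node, (direction, speed)))
    return current, executed

chain, executed = schedule_transformations(
    ["A", "T", "C", "G"],
    [("clockwise", 3), ("counter-clockwise", 2)],
    [f"node-{i}" for i in range(6)],
)
```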
In a third aspect, an embodiment of the present invention provides a model using method, the method including: sending a model-use instruction to a preset scheduling node, and parsing the transformation chain corresponding to the target model through the scheduling node to obtain the parsed reference chain, where the transformation chain was obtained by transforming the reference chain through transformation nodes based on preset transformation parameters, and the reference chain comprises the data operation sequence among the plurality of model scripts into which the target model was split and the data operation logic corresponding to each model script; determining operation nodes from the blockchain, and computing, through the operation nodes, the operation result corresponding to each model script based on the parsed reference chain and the model input data; and combining the operation results corresponding to the model scripts to obtain the final operation result.
In an optional embodiment, each transformation node stores the transformation parameters it executed. The step of parsing the transformation chain corresponding to the target model through the scheduling node to obtain the parsed reference chain includes: randomly determining, through the scheduling node, a plurality of analysis nodes from the nodes of the blockchain, where the number of analysis nodes equals the number of transformation nodes; and sending the transformation chain to the analysis nodes through the scheduling node, and instructing each transformation node to send its transformation parameters to an analysis node, so that the analysis nodes parse the transformation chain according to the transformation parameters to obtain the parsed reference chain.
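Parsing can be sketched as the inverse walk: because the assumed pairing rule is self-inverse and commutes with the circular shift, each analysis node can undo one round by applying the same round with the direction flipped, in reverse round order. The pairing table and direction strings are illustrative assumptions.

```python
# Assumed pairing rule, self-inverse by construction.
PAIRING = {"A": "T", "T": "A", "C": "G", "G": "C"}

def transform_round(chain, direction, speed):
    """Forward round: circular shift by `speed` in `direction`, then pair."""
    n = len(chain)
    s = speed % n
    if direction == "clockwise":
        shifted = chain[-s:] + chain[:-s] if s else list(chain)
    else:
        shifted = chain[s:] + chain[:s]
    return [PAIRING[b] for b in shifted]

def parse_transformation_chain(transformation_chain, executed_params):
    """Undo each round in reverse order; undoing a round is the same
    round with the direction flipped (pairing is self-inverse and
    commutes with the circular shift)."""
    current = list(transformation_chain)
    for direction, speed in reversed(executed_params):
        opposite = ("counter-clockwise" if direction == "clockwise"
                    else "clockwise")
        current = transform_round(current, opposite, speed)
    return current

params = [("clockwise", 3), ("counter-clockwise", 2)]
forward = ["A", "T", "C", "G"]
for d, s in params:
    forward = transform_round(forward, d, s)
recovered = parse_transformation_chain(forward, params)
```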
In an optional implementation, after the step of sending the model-use instruction to the preset scheduling node and parsing the transformation chain corresponding to the target model through the scheduling node to obtain the parsed reference chain, the method further includes: verifying the parsed reference chain against a pre-stored reference chain of the target model; and, if the verification passes, executing the step of determining the operation nodes from the blockchain.
In an optional embodiment, the plurality of encrypted model scripts obtained by splitting the target model are stored in target nodes of the blockchain, and the plurality of encrypted data blocks obtained by splitting the model input data are stored in storage nodes of the blockchain; each model script comprises its operation sequence identifier and the data operation sequence of a variable in the target model. The step of computing, through the operation nodes, the operation result corresponding to each model script based on the parsed reference chain and the model input data includes: sending an operation instruction to the scheduling node, so that the scheduling node randomly selects operation nodes from the nodes of the blockchain; and decrypting the encrypted model scripts and encrypted data blocks through the operation nodes, substituting each decrypted data block into the variable of its corresponding model script according to the operation sequence identifiers to obtain intermediate results, and combining the intermediate results with the decrypted model scripts according to the parsed reference chain to obtain the operation result corresponding to each model script.
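The operation step — substitute each decrypted data block into its matching script's variable, then combine the intermediate results in the order given by the parsed reference chain — might be sketched as below for the toy linear model used earlier. Every structure here is an illustrative assumption, not the patent's own encoding.

```python
def run_model(scripts, data_blocks, reference_ops):
    """Evaluate a toy additive model.

    scripts:       {seq_id: coefficient} — each script multiplies its
                   variable by a coefficient (a stand-in for the
                   script's data operation logic).
    data_blocks:   {seq_id: input value for that script's variable},
                   matched to scripts by operation sequence identifier.
    reference_ops: ordered (op, seq_id) steps recovered from the parsed
                   reference chain; only "add" appears in this sketch.
    """
    intermediates = {
        sid: coeff * data_blocks[sid] for sid, coeff in scripts.items()
    }
    result = 0.0
    for op, sid in reference_ops:
        if op == "add":
            result += intermediates[sid]
    return result

final = run_model(
    scripts={0: 0.4, 1: 0.6},
    data_blocks={0: 10.0, 1: 5.0},
    reference_ops=[("add", 0), ("add", 1)],
)
```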
In a fourth aspect, an embodiment of the present invention provides a model storage apparatus, including: a model splitting module, configured to split a target model into a plurality of model scripts according to a preset splitting rule and save the plurality of model scripts to a preset blockchain; a reference chain generating module, configured to generate a reference chain of the plurality of model scripts, where the reference chain comprises the data operation sequence among the plurality of model scripts and the data operation logic corresponding to each model script; and a transformation module, configured to determine transformation nodes from the blockchain, transform the reference chain through the transformation nodes based on preset transformation parameters to obtain a transformation chain, and save the transformation chain.
In a fifth aspect, an embodiment of the present invention provides a model storage apparatus disposed at a scheduling node, the apparatus including: an instruction receiving module, configured to receive a transformation instruction, where the instruction carries transformation parameters, a number of transformations, and a reference chain; the transformation parameters include a transformation direction, a transformation speed, and a pairing transformation rule; the number of sets of transformation parameters equals the number of transformations; and the reference chain comprises a plurality of base labels, each representing an algorithm; a node selection module, configured to randomly select a plurality of transformation nodes from the nodes of a preset blockchain, where the number of transformation nodes equals the number of transformations; a first transformation module, configured to send the reference chain and the first transformation parameter to the first transformation node, so that it transforms the reference chain according to the first transformation parameter to obtain a first base chain; and a second transformation module, configured to send the first base chain and the second transformation parameter to the second transformation node, so that it transforms the first base chain according to the second transformation parameter to obtain a second base chain, continue in this way (sending the second base chain and the third transformation parameter to the third transformation node, and so on) until all the transformation nodes have completed their transformations, return the final transformation chain, and save the transformation chain.
In a sixth aspect, an embodiment of the present invention provides a model using apparatus, including: an instruction sending module, configured to send a model-use instruction to a preset scheduling node and parse, through the scheduling node, the transformation chain corresponding to the target model to obtain the parsed reference chain, where the transformation chain was obtained by transforming the reference chain through transformation nodes based on preset transformation parameters, and the reference chain comprises the data operation sequence among the plurality of model scripts into which the target model was split and the data operation logic corresponding to each model script; a data operation module, configured to determine operation nodes from the blockchain and compute, through the operation nodes, the operation result corresponding to each model script based on the parsed reference chain and the model input data; and a result combination module, configured to combine the operation results corresponding to the model scripts to obtain the final operation result.
In a seventh aspect, an embodiment of the present invention provides an electronic device, where the electronic device includes a processor and a memory, the memory stores machine-executable instructions that can be executed by the processor, and the processor executes the machine-executable instructions to implement the model storage method or the model using method described above.
In an eighth aspect, embodiments of the present invention provide a machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement the model storage method or the model using method described above.
The embodiment of the invention has the following beneficial effects:
according to the model storage method, the model using device and the electronic equipment, firstly, a target model is split into a plurality of model scripts according to a preset splitting rule; further storing the plurality of model scripts into a preset block chain; regenerating a reference chain of a plurality of model scripts, wherein the reference chain comprises a data operation sequence among the plurality of model scripts and a data operation logic corresponding to each model script; then, determining a transformation node from the block chain, and transforming the reference chain based on a preset transformation parameter through the transformation node to obtain a transformation chain; the transformation chain is saved. According to the method, the data operation sequence among a plurality of model scripts after the target model is split and the data operation logic corresponding to each model script are encrypted by changing the reference chain, compared with an encryption method introducing noise in a homomorphic encryption process, the method is simple to operate, the confidentiality of the model can be improved under the condition of not introducing noise, meanwhile, the calculation cost is reduced in the process of decrypting the model, and the use of the model is facilitated.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of a model storage method according to an embodiment of the present invention;
FIG. 2 is a flow chart of another model storage method provided by an embodiment of the present invention;
FIG. 3 is a flow chart of another model storage method provided by an embodiment of the present invention;
FIG. 4 is a flow chart of another model storage method provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of the chain structure of a reference chain according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the reference chain after its base labels are shifted 3 positions clockwise according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of the ring structure of a first base chain according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of the first base chain after its base labels are shifted 2 positions counter-clockwise according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of the ring structure of a second base chain according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of the second base chain after its base labels are shifted 4 positions clockwise according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of the chain structure of a transformation chain according to an embodiment of the present invention;
FIG. 12 is a flow chart of a method for using a model according to an embodiment of the present invention;
FIG. 13 is a flow chart of another method for using a model provided by an embodiment of the invention;
FIG. 14 is a schematic structural diagram of a model storage device according to an embodiment of the present invention;
FIG. 15 is a schematic structural diagram of another model storage device according to an embodiment of the present invention;
FIG. 16 is a schematic structural diagram of a model using apparatus according to an embodiment of the present invention;
fig. 17 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the development of the prior art, the focus has often been on data encryption and transmission, so that privacy and intellectual property are effectively protected in every link of data production, use, and storage. With the rise of technologies such as secure multi-party computation and federated learning, data can, to some extent, be shared safely while remaining "available but invisible", realizing the transfer of data value.
At the same time, however, the development of technology for the encrypted, secure transmission of models (e.g., statistical or mathematical models) has lagged somewhat. As a technical achievement, a model has a huge production cost; in particular, the cost of producing a model operation process that requires large-data support is even higher, and model production and maintenance usually require continuous investment, so model encryption is particularly important.
In real-world production applications, model production and use typically involve the following process:
1. Production of the model
The model provider (also called the model developer) combines the obtained data sources with existing analysis results, refines viewpoints through various methods to give a logic structure, finally obtains the model, and puts the model into the actual production link.
2. Model optimization and iteration
After the model provider finishes producing the model, the model is deployed in the production environment of a data provider (also called a model user); the model operation result is produced through data extraction and computation and, depending on the agreement, can be output to the model provider or provided directly to the model user. After a model goes into production, its operation results generally need to be continuously monitored, so that the model can be optimized and iterated based on the monitoring data and its accuracy ensured. For an evaluation model in particular, the model is continuously monitored through its separation capability and stability level to ensure that its operation remains controllable, and a better evaluation effect is achieved through repeated optimization and iteration.
3. Replacing old models with new ones
If a new data base (which can be understood as data of a different type from the data sources used to produce the model) enters the modeling scope, or the set of objects evaluated by the model changes greatly, the model needs to be completely updated; the new model (also called the challenger model) is optimized to achieve a better effect and gradually replace the old model (also called the champion model).
During the production and operation of a model, especially when a new data base is added or the population the model evaluates grows (including sample sets and subjects newly entering the evaluation scope), the stability and separation capability of the model decline noticeably; the model therefore needs to be continuously tested against data (both historical and newly generated) to check its accuracy.
In real production, however, most model providers do not fully possess the data, and the production of the model may even be completely decoupled from the data. A mechanism is therefore needed to ensure that the model and the data interact in a safe and trusted environment: on the one hand, the data provider must be assured that the data is used under security and privacy protection; on the other hand, the production process and the operation process of the model must be protected, avoiding the possibility of model leakage caused by a model user or other third party obtaining enough model information to master the model or its main logic and algorithms. It is therefore necessary to encrypt the model to prevent the model user from obtaining excessive model information.
In real-world production applications, model production and encryption typically suffer from the following problems:
1. There are data sharing issues in model development and operation.
In the process of model development and computation, a model provider often wants a larger data base in order to obtain better accuracy; but data, as a production factor, has become a key capability of the data provider, who is unwilling to provide it to the model provider unless its security can be guaranteed. As a precondition of model development and operation, the data provider often wants a mechanism that keeps the data secure and private even after it leaves the original data environment, together with assurance that, after the model user has finished using the data, no party can copy or obtain the entire original data set.
2. The existing model encryption and decryption process and technology are too simple.
Existing encryption technology mainly targets the computation process of a model and the split encryption of its rules, usually with a homomorphic encryption algorithm. Because homomorphic encryption is still immature and lags in development, encrypting a model this way is computationally heavy and the encryption effect is poor. To improve the effect, background noise is usually added during encryption, which greatly increases both the difficulty of later analysis and the computational overhead. The balance point between security and efficiency is therefore hard to find, which hinders the use of the model.
3. The existing model encryption and decryption technology has few customized components and cannot meet the requirements of multiple scenes.
Existing model encryption and decryption technologies are usually standardized products and cannot satisfy different customers' requirements for different encryption and decryption levels. For example, although encryption schemes can be plugged in, the available choices are few; once several schemes are mixed, the analysis process becomes too lengthy and the computational overhead grows exponentially, so it is difficult to raise the security level while keeping the implementation simple.
Based on the above problems, embodiments of the present invention provide a model storage method, a model using method, an apparatus, and an electronic device, which may be applied to the storage and use of various models, and especially to the encryption, decryption, and use of mathematical models. To facilitate understanding, a model storage method disclosed in an embodiment of the present invention is first described in detail. The method is applied to an electronic device corresponding to a model provider and, as shown in fig. 1, includes the following steps:
step S102, splitting a target model into a plurality of model scripts according to a preset splitting rule; and saving a plurality of model scripts to a preset block chain.
The target model may be a mathematical model or a statistical model. A mathematical model is a mathematical structure that exactly or approximately expresses, in mathematical language, the characteristics or quantitative dependencies of an object system, the structure being a pure relational structure of the system described with mathematical symbols. A statistical model is generally a model established by mathematical-statistical methods on the basis of probability theory.
The preset splitting rule may split according to the number of variables in the target model, for example one variable corresponding to one model script; or according to the data operation logic in the target model; or according to the data operation sequence or the algorithms in the target model. The splitting rule is set by the model provider and is not specifically limited herein.
After the target model is split into a plurality of model scripts, the split model scripts are stored in a preset blockchain. In a specific implementation, the model scripts may be stored on the same node of the blockchain; to improve the security of the model, the model scripts may also be stored on different nodes, e.g., each model script on its own node. The blockchain may be a consortium chain, a private chain, a public chain, or the like.
Step S104, generating a plurality of reference chains of model scripts; the reference chain comprises a data operation sequence among a plurality of model scripts and data operation logic corresponding to each model script.
During specific implementation, a reference chain of the plurality of model scripts can be generated according to the data operation logic and the operation sequence of the plurality of model scripts after the target model is split, and the reference chain is used for representing the data operation sequence among the plurality of model scripts and the data operation logic corresponding to each model script.
Step S106, determining a transformation node from the block chain, and transforming the reference chain based on a preset transformation parameter through the transformation node to obtain a transformation chain; the transformation chain is saved.
The transformation node may be a node randomly determined from the blockchain. This node transforms the reference chain according to the transformation parameters to obtain a transformation chain, and stores the transformation chain in the blockchain for later use when the target model is parsed. The transformation parameters may include transformation rules, the number of transformations, and the like, and are set by the model provider.
According to the model storage method provided by the embodiment of the invention, a target model is first split into a plurality of model scripts according to a preset splitting rule, and the model scripts are stored in a preset blockchain. A reference chain of the model scripts is then generated, the reference chain comprising the data operation sequence among the model scripts and the data operation logic corresponding to each model script. Finally, a transformation node is determined from the blockchain, the reference chain is transformed by that node based on preset transformation parameters to obtain a transformation chain, and the transformation chain is saved. The method encrypts the data operation sequence among the split model scripts and the data operation logic of each script by transforming the reference chain. Compared with encryption methods that introduce noise into a homomorphic encryption process, this is simple to operate, improves the confidentiality of the model without introducing noise, and reduces the computational cost of decrypting the model, which facilitates its use.
The embodiment of the invention also provides another model storage method, which is realized on the basis of the method of the embodiment; the method mainly describes a specific process of splitting a target model into a plurality of model scripts according to a preset splitting rule and storing the plurality of model scripts into a preset block chain (realized by the following steps S202-S208); as shown in fig. 2, the method comprises the steps of:
step S202, determining variables contained in the target model according to the data operation logic of the target model.
The data expression of the target model usually contains variables and constants. A constant is an invariant quantity in the target model; a variable changes with the input data and may be either an independent variable or an intermediate variable.
Step S204, splitting the target model according to variables contained in the target model, so that one model script corresponds to one variable in a plurality of split model scripts; each model script comprises a first operation sequence identifier and a data operation sequence of a variable corresponding to the model script.
The first operation sequence identifier represents the position of each model script in the data operation logic of the target model, and the data operation sequence of the variable corresponding to a model script represents the operation logic of that variable. For example, if the operation logic corresponding to a variable x in the target model is 2x, the data operation sequence of the variable x is 2.
In a specific implementation, model splitting has a certain complexity, since the algorithms involve different independent variables and intermediate variables. For simplicity, the embodiment of the invention considers only the four operations of addition, subtraction, multiplication, and division, ignores intermediate variables, and assumes that each split step produces a model script for exactly one independent variable, i.e., one model script corresponds to one independent variable, the independent variable being contained in the data provided by the data provider.
For example, assume the n model scripts formed after splitting the target model M are m1, m2, ..., mn. If n is 5, i.e., the target model contains 5 variables, the split model consists of 5 model scripts m1, m2, m3, m4, and m5, where 1 to 5 are the first operation sequence identifiers of the scripts, representing the order in which each script's operation is performed.
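The per-variable splitting above can be sketched as follows. This is a minimal illustration under the embodiment's simplifying assumption that the model reduces to an ordered sequence of add/subtract/multiply/divide operations, one per independent variable; the names `ModelScript` and `split_model` are illustrative, not from the patent.

```python
from dataclasses import dataclass

@dataclass
class ModelScript:
    order_id: int   # first operation sequence identifier (1..n)
    variable: str   # the independent variable this script handles
    op: str         # one of "+", "-", "*", "/"

def split_model(variable_ops):
    """Split a model, given as an ordered list of (variable, operator)
    pairs, into one model script per variable, numbered 1..n."""
    return [ModelScript(i + 1, var, op)
            for i, (var, op) in enumerate(variable_ops)]

# A target model M with 5 variables splits into scripts m1..m5:
scripts = split_model([("x1", "+"), ("x2", "-"), ("x3", "*"),
                       ("x4", "/"), ("x5", "+")])
```

Each resulting script carries only its own variable and operation, so no single script reveals the whole model.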
And S206, encrypting each model script to obtain an encrypted packet corresponding to each model script.
The encryption method for the model scripts may be any method set by the model provider, for example homomorphic encryption or MD5 (Message-Digest Algorithm) based schemes. In a specific implementation, a homomorphic encryption algorithm may be used to encrypt each model script, yielding an encrypted packet corresponding to each model script.
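The packet-per-script step can be sketched with a deliberately simple stand-in cipher. The XOR scheme below is only a placeholder to show the packaging flow; it is not the patent's method, and a real deployment would substitute the provider's chosen scheme (e.g. homomorphic encryption).

```python
import os

def encrypt_script(script_bytes: bytes, key: bytes) -> bytes:
    """Stand-in cipher: XOR with a repeating key. Placeholder only; a
    real deployment would use the model provider's chosen scheme."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(script_bytes))

key = os.urandom(16)
packet = encrypt_script(b"m1: acc = acc + x1", key)   # encrypted packet
restored = encrypt_script(packet, key)                # XOR is self-inverse
```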
Step S208, determining a target node from the block chain, and storing each encrypted packet to the target node; wherein the target node is a node randomly selected from the nodes of the blockchain.
In a specific implementation, one or more target nodes may be randomly determined from the blockchain, and the randomly determined target nodes are used to store the encryption packets, that is, multiple encryption packets may be stored in one target node, or each encryption packet may be stored in a different target node.
In a specific implementation, the step S208 may include: and calling preset scheduling nodes, and randomly determining target nodes with the same number as the encrypted packets from the nodes of the block chain so that one target node stores one encrypted packet.
The scheduling node is a node on the blockchain. On receiving an encrypted-packet storage instruction (which carries the number of encrypted packets), it starts a random algorithm and randomly selects from the blockchain as many target nodes as there are encrypted packets, so that each target node stores one encrypted packet.
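The scheduling node's selection step can be sketched as distinct random sampling, so that the number of target nodes equals the number of packets and no node receives two packets. The function name is illustrative.

```python
import random

def assign_target_nodes(node_ids, num_packets, rng=None):
    """Randomly choose as many distinct target nodes as there are
    encrypted packets, so each packet is stored on its own node."""
    rng = rng or random.Random()
    return rng.sample(node_ids, num_packets)   # sampling without replacement

nodes = [f"node-{i}" for i in range(20)]
targets = assign_target_nodes(nodes, 5)        # one target node per packet
```

`random.sample` draws without replacement, which directly enforces the one-packet-per-node property described above.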
In a specific implementation, the split model scripts are encrypted, nodes in the blockchain are then randomly selected to store them, and the scripts are retrieved through task scheduling when the model is used.
In a specific implementation, after splitting a target model, splitting and storing model input data are required, and the steps are as follows:
step 10, splitting input data of a target model according to variables contained in the target model, so that one data block corresponds to one variable in a plurality of split data blocks; and the data block comprises a second operation sequence identifier and input data of a variable corresponding to the data block.
Corresponding to the splitting of the target model, the model input data is split according to the variables contained in the target model. For example, if the target model contains 5 variables, the data is split into 5 data blocks d1, d2, d3, d4, and d5, where 1 to 5 are the second operation sequence identifiers of the data blocks, representing the order in which the operation of each block's variable is performed.
And 11, encrypting the data blocks aiming at each data block, determining a storage node from the block chain, and storing the encrypted data blocks into the storage node.
A random algorithm is started by the scheduling node, and as many storage nodes as there are data blocks are randomly selected from the blockchain, so that each storage node stores one encrypted data block.
In a specific implementation, through a client DApp (decentralized application), the data provider can conveniently segment the whole input data into a number of independent data units (equivalent to the data blocks above), and nodes in the blockchain are randomly selected to store them. The random node selection is completed automatically in the background, so neither the data provider nor anyone else knows where each data block is stored; the data is deleted immediately after use, leaving no trace, which guarantees data security. The data provider can thus segment the model input data into data blocks, encrypt them, and store them on randomly selected blockchain nodes; the data units are called out of the nodes only when a computation is to be performed, only the storage nodes cache them, and the data can be deleted after use. Likewise, the model provider can split the model into model scripts, encrypt them, store them randomly in a distributed manner, and destroy them after use, which guarantees the security of model storage.
Step S210, generating a plurality of reference chains of model scripts; the reference chain comprises a data operation sequence among a plurality of model scripts and data operation logic corresponding to each model script.
Step S212, determining a transformation node from the block chain, and transforming the reference chain based on a preset transformation parameter through the transformation node to obtain a transformation chain; the transformation chain is saved.
According to the model storage method, the target model is divided into the plurality of model scripts, the divided model scripts are encrypted to obtain the plurality of encryption packets, and then each encryption packet is randomly stored in different nodes on the block chain, so that the safety of model storage is guaranteed.
The embodiment of the invention also provides another model storage method, which is realized on the basis of the method of the embodiment; the method mainly describes a specific process of generating a reference chain of a plurality of model scripts (realized through steps S304-S306), determines a transformation node from a block chain, and transforms the reference chain through the transformation node based on preset transformation parameters to obtain the specific process of the transformation chain (realized through steps S308-S312); as shown in fig. 3, the method comprises the steps of:
step S302, splitting the target model into a plurality of model scripts according to a preset splitting rule; and encrypting and saving a plurality of model scripts to a preset block chain.
Step S304, aiming at each model script, allocating a base label to each model script according to the data arithmetic logic of the model script; wherein each base label represents an algorithm.
The number of base labels corresponds to the number of model scripts; usually there are several base labels, each representing a different algorithm, for example the base labels "A, T, C, G" corresponding to "add, subtract, multiply, divide", respectively. In a specific implementation, if the data operation logic of a model script and its variable is an addition, the base label A is assigned to that model script.
And step S306, sequentially connecting the base labels distributed by each model script according to the data operation sequence among the plurality of model scripts to obtain a reference chain.
For example, suppose the target model includes 3 model scripts whose data operations are as follows: the base label of model script 1 is A (addition), the base label of model script 2 is G (division), and the base label of model script 3 is T (subtraction); the reference chain is then A-G-T.
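The chain construction in this example can be sketched directly from the operator-to-base mapping A, T, C, G = add, subtract, multiply, divide given above; the dictionary and function names are illustrative.

```python
# Operator-to-base mapping from the example: A/T/C/G = + - * /
BASE_FOR_OP = {"+": "A", "-": "T", "*": "C", "/": "G"}

def build_reference_chain(script_ops):
    """Assign each model script the base label of its operation and
    join the labels in data operation order."""
    return "-".join(BASE_FOR_OP[op] for op in script_ops)

chain = build_reference_chain(["+", "/", "-"])   # scripts 1, 2, 3
# chain == "A-G-T"
```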
Step S308, based on the preset conversion times and conversion parameters, a conversion instruction is sent to a preset scheduling node; the transformation parameters include transformation direction, transformation speed, and pairing transformation rules.
The model provider may set the number of transformations; its main purpose is to determine how many transformation nodes participate in transforming the reference chain. Generally, the more transformations, the better the encryption of the reference chain. For example, if the model provider sets the number of transformations to 3, then 3 transformation nodes take part in the encryption process.
The transformation parameters comprise a transformation direction, a transformation speed, and a pairing transformation rule. The transformation direction is clockwise or counterclockwise; the transformation speed is the number of base labels the chain is moved in the transformation direction in the current transformation; the pairing transformation rule specifies, after the movement is completed, which base label each base label on the reference chain is replaced with. A base label may be paired with itself, e.g., "A-A", meaning base label A stays A; or it may be paired with a different label from the given set, e.g., "A-T", meaning base label A is replaced with base label T.
Step S310, a plurality of transformation nodes are randomly selected from the nodes of the block chain through the scheduling nodes; the number of transform nodes is the same as the number of transforms.
After the model provider has set the number of transformations and the transformation parameters, it notifies the scheduling node (i.e., sends it a transformation instruction) to start a random algorithm and randomly select from the blockchain as many nodes as there are transformations to act as transformation nodes in the encryption process. The selected transformation nodes are not ordered in advance; which node starts and which node finishes is random.
Step S312, base labels in the reference chain are changed according to the changing direction through a plurality of changing nodes according to the changing speed to obtain a changed reference chain; replacing base labels in the transformed reference chain according to a pairing transformation rule to obtain a transformed chain; the transformation chain is saved.
During specific implementation, each transformation node is provided with a corresponding transformation parameter, one transformation node can transform the reference chain according to the transformation parameters and send a transformation result to the next transformation node, and the next transformation node can transform the received transformation result according to the transformation parameters stored by the next transformation node until all the transformation nodes are transformed, so that a transformation chain is obtained.
According to the above model storage method, the model scripts obtained by splitting the target model can be encrypted and stored in the blockchain, and the data operation sequence among them and the data operation logic of each script are then encrypted by transforming the reference chain, further improving security during model transmission. Moreover, the method uses only two encryption mechanisms: one encrypts the split model scripts, the other encrypts the reference chain through the transformation parameters. No additional encryption algorithm is needed, and the transformation parameters can be set randomly, which improves confidentiality. Compared with other encryption algorithms, the method is simple to implement, has low computational cost, and occupies few system resources.
The embodiment of the invention also provides another model storage method, which is applied to the scheduling node on the block chain, wherein the scheduling node is in communication connection with the electronic equipment corresponding to the model provider; as shown in fig. 4, the method includes the steps of:
step S402, receiving a conversion instruction; the conversion instruction carries conversion parameters, conversion times and a reference chain, wherein the conversion parameters comprise a conversion direction, a conversion speed and a pairing conversion rule; the number of the transformation parameters is the same as the transformation times; the reference strand comprises a plurality of base labels, each base label representing an algorithm.
Step S404, randomly selecting a plurality of transformation nodes from nodes of a preset block chain; wherein the number of the transformation nodes is the same as the transformation times.
Step S406, the reference chain and the first transformation parameter are sent to the first transformation node, so that the first transformation node transforms the reference chain according to the first transformation parameter to obtain a first base chain.
The first transformation parameter is one of the transformation parameters (whose number equals the number of transformations), and the first transformation node is any one of the randomly selected transformation nodes. In a specific implementation, the first transformation parameter comprises a transformation direction, a transformation speed, and a pairing transformation rule; the first transformation node moves the base labels of the reference chain in the transformation direction at the transformation speed to obtain a moved chain, and then replaces its base labels according to the pairing transformation rule to obtain the first base chain.
Step S408, the first base chain and the second transformation parameter are sent to a second transformation node, so that the second transformation node transforms the first base chain according to the second transformation parameter to obtain a second base chain, the second base chain and the third transformation parameter are continuously sent to a third transformation node until the transformation nodes with the same number of times as the transformation times complete transformation, and the final transformation chain is returned; the transformation chain is saved.
The second transformation parameter is a transformation parameter other than the first transformation parameter among the transformation parameters with the same number as the transformation times, and the second transformation node is a transformation node other than the first transformation node among a plurality of randomly selected transformation nodes. In a specific implementation, the second transformation parameters include transformation direction, transformation speed and pairing transformation rules; and the second transformation node transforms the base labels in the first base chain according to the transformation speed of the second transformation parameter and the transformation direction to obtain a transformed first base chain, and replaces the base labels in the transformed first base chain according to the pairing transformation rule to obtain a second base chain.
The third transformation parameter is the transformation parameter other than the first and second among the transformation parameters (whose number equals the number of transformations), and the third transformation node is the transformation node other than the first and second among the randomly selected transformation nodes. The third transformation node transforms the second base chain according to the third transformation parameter to obtain a third base chain. If there are three transformation nodes, the third base chain is the final transformation chain; if there are more than three, the transformation continues in the same manner until all transformation nodes have finished, yielding the final transformation chain.
To facilitate understanding of the embodiment of the present invention, the transformation of the reference chain is described in detail below for a transformation count of 3, i.e., the scheduling node randomly selects 3 transformation nodes from the blockchain. First, the model provider determines the base pairing rules of the target model, e.g., base labels "A, T, C, G" corresponding to addition, subtraction, multiplication, and division, and assigns a base to each model script in sequence. Assume the reference chain generated after pairing is "A-A-C-T-G". The reference chain is connected end to end (clockwise) into a loop, as shown in FIG. 5; the base label of the first base in the reference chain, A, is the vertex base label, and subsequent transformation nodes transform and encrypt the chain starting from this vertex.
Then, the model provider sets the transformation parameters, and sets the first transformation parameter, the second transformation parameter, and the third transformation parameter, which are required to perform the transformation 3 times. Wherein, assuming that a first transformation parameter is assigned to a first transformation node, a second transformation parameter is assigned to a second transformation node, and a third transformation parameter is assigned to a third transformation node, each transformation node may generate a token from the assigned transformation parameters and save the token.
When the first transformation is performed by the first transformation node, assume the transformation direction in the first transformation parameter is clockwise, the rotation speed is 3, and the base pairing rule is "A-T, C-G" (i.e., A is exchanged with T and C with G); a token 1 is generated. The first transformation node rotates the reference chain "A-A-C-T-G" clockwise by three base labels; the resulting loop is shown in FIG. 6, with the vertex base label changed from A to C. The base labels of the chain in FIG. 6 are then substituted according to the base pairing rule "A-T, C-G" to obtain the first base chain, which, read from the vertex base label, is "G-C-A-T-T", as shown in FIG. 7.
When the second transformation is performed by the second transformation node, assume the transformation direction in the second transformation parameter is counterclockwise, the rotation speed is 2, and the base pairing rule is "A-C, G-T"; a token 2 is generated. After the second transformation node rotates the first base chain "G-C-A-T-T" counterclockwise by 2 base labels, the resulting loop is shown in FIG. 8, with the vertex base label changed from C to A. The base labels of the chain in FIG. 8 are then substituted according to the base pairing rule "A-C, G-T" to obtain the second base chain, which, read from the vertex base label, is "C-G-G-T-A", as shown in FIG. 9.
When the third transformation is performed by the third transformation node, assume the transformation direction in the third transformation parameter is clockwise, the rotation speed is 4, and the base pairing rule is "A-A, C-C, G-T"; a token 3 is generated. The third transformation node rotates the second base chain "C-G-G-T-A" clockwise by 4 base labels; the resulting loop is shown in FIG. 10, with the vertex base label changed to G. The base labels of the chain in FIG. 10 are then substituted according to the base pairing rule "A-A, C-C, G-T" to obtain the transformation chain, which, read from the vertex base label, is "T-T-G-A-C", as shown in FIG. 11, and is stored in the scheduling node.
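The rotate-then-substitute step performed by each transformation node can be sketched as follows. This is a minimal illustration, not the patented implementation; the direction convention (clockwise moves the vertex back by `speed` positions, counterclockwise moves it forward) is inferred from the example, and with that assumption the second and third transformations reproduce the chains "C-G-G-T-A" and "T-T-G-A-C" given in the text.

```python
def transform(chain, direction, speed, pairing):
    """One transformation node step: rotate the circular chain so a new
    base becomes the vertex, then substitute every base label via the
    pairing rule. Inferred convention: "cw" moves the vertex back by
    `speed` positions, "ccw" moves it forward."""
    n = len(chain)
    start = (-speed) % n if direction == "cw" else speed % n
    rotated = chain[start:] + chain[:start]   # re-read from the new vertex
    return "".join(pairing[b] for b in rotated)

# Second transformation of the example: counterclockwise, speed 2,
# pairing rule A-C, G-T, applied to the first base chain "GCATT":
second = transform("GCATT", "ccw", 2, {"A": "C", "C": "A", "G": "T", "T": "G"})
# second == "CGGTA"

# Third transformation: clockwise, speed 4, pairing rule A-A, C-C, G-T:
final = transform(second, "cw", 4, {"A": "A", "C": "C", "G": "T", "T": "G"})
# final == "TTGAC"
```

Because each step is a rotation plus a bijective substitution, each is individually invertible given its token, which is what allows later parsing while keeping the composed result opaque without all tokens.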
In some embodiments, a random algorithm may also be started by the scheduling node, and 3 token nodes are selected from the nodes of the blockchain, and store token 1, token 2, and token 3, respectively.
According to the above model storage method, the operation sequence of the model is encrypted through base pairing and the transformation of a reference chain. Each additional base label or transformation increases the difficulty of cracking geometrically, while legitimate decryption remains simple. Meanwhile, the number of transformations and the transformation parameters can be configured as required, so the encryption strength and decryption difficulty are flexible, the computational cost is configurable, and the system resource usage can be customized.
Corresponding to the embodiment of the model storage method, an embodiment of the present invention provides a model using method, which is applied to an electronic device corresponding to a model provider, as shown in fig. 12, and the method includes the following steps:
step S502, sending a model using instruction to a preset scheduling node, and analyzing a transformation chain corresponding to a target model through the scheduling node to obtain an analyzed reference chain; the transformation chain is obtained by transforming the reference chain based on preset transformation parameters through transformation nodes; the reference chain comprises a data operation sequence among a plurality of model scripts after the target model is split and a data operation logic corresponding to each model script.
The scheduling node may parse the stored transformation chain, i.e., decrypt it according to the transformation parameters that produced it, to obtain the parsed reference chain, that is, to recover the data operation sequence among the split model scripts and the data operation logic corresponding to each model script.
Step S504, determining an operation node from the block chain, and calculating an operation result corresponding to each model script through the operation node based on the analyzed reference chain and the analyzed model input data.
The operation node may be a node randomly selected from the blockchain. Through the operation node, each model script is matched with its corresponding variable to obtain a plurality of operation units; the input data of each variable in the model input data is then fed into the operation unit of that variable to obtain the operation result corresponding to each model script.
And step S506, combining the operation results corresponding to each model script to obtain a final operation result.
In a specific implementation, the operation results of the model scripts can be assembled according to the data operation logic of the target model to obtain the final operation result, which is returned to the model user or stored at the model provider.
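The evaluation and assembly described above can be sketched as a fold over the parsed reference chain: each base label is decoded back to its operation (A/T/C/G = add, subtract, multiply, divide, per the earlier example) and applied to that script's input data in chain order. The left-to-right accumulation and the zero starting value are illustrative assumptions.

```python
import operator

# Base-to-operator decoding from the example: A/T/C/G = + - * /
OP_FOR_BASE = {"A": operator.add, "T": operator.sub,
               "C": operator.mul, "G": operator.truediv}

def run_model(reference_chain, inputs, start=0.0):
    """Apply each script's operation (decoded from its base label) to
    that script's input data, in chain order, folding the per-script
    results into one final value."""
    acc = start
    for base, value in zip(reference_chain.split("-"), inputs):
        acc = OP_FOR_BASE[base](acc, value)
    return acc

result = run_model("A-C-T", [3.0, 4.0, 2.0])   # ((0 + 3) * 4) - 2 = 10.0
```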
In the above model using method, a model using instruction is first sent to a preset scheduling node, and the transformation chain corresponding to the target model is analyzed through the scheduling node to obtain an analyzed reference chain; an operation node is determined from the block chain, and the operation result corresponding to each model script is calculated through the operation node based on the analyzed reference chain and the model input data; the operation results corresponding to the model scripts are then combined to obtain a final operation result. The method encrypts the data operation logic of the target model with the transformation chain and analyzes the transformation chain only when the model is used, thereby improving the security of the distributed operation process of model evaluation.
The embodiment of the invention also provides another model using method, which is realized on the basis of the method of the embodiment; the method mainly describes a specific process (realized by the following steps S602-S604) of analyzing a transformation chain corresponding to a target model through the scheduling node to obtain an analyzed reference chain, and a specific process (realized by the following steps S608-S610) of determining a transformation node from a block chain, and performing transformation processing on the reference chain through the transformation node based on preset transformation parameters to obtain a transformation chain; as shown in fig. 13, the method includes the steps of:
step S602, sending a model using instruction to a preset scheduling node, and randomly determining a plurality of analysis nodes from the nodes of the block chain through the scheduling node; the number of the analysis nodes is the same as that of the transformation nodes; wherein, each transformation node stores transformation parameters executed by the transformation node.
The transformation parameters stored in each transformation node may also be referred to as tokens. When the scheduling node receives a model using instruction sent by the model provider, it starts a random algorithm and determines a plurality of analysis nodes from the nodes of the block chain. The number of the analysis nodes is the same as the number of the transformation nodes used when the transformation chain was obtained from the reference chain; for example, if the number of the transformation nodes is 3, the number of the analysis nodes is also 3.
Step S604, the transformation chain is sent to the analysis node through the scheduling node, and the transformation node is instructed to send the transformation parameter to the analysis node, so that the analysis node analyzes the transformation chain according to the transformation parameter to obtain an analyzed reference chain.
In a specific implementation, there may be multiple analysis nodes, and the transformation chain may be analyzed sequentially through them: each analysis node analyzes, according to the transformation parameter it receives, the transformation performed by the transformation node corresponding to that parameter, so as to finally obtain the analyzed reference chain.
In order to facilitate understanding of the analysis of the transformation chain, the following describes the analysis process in detail by taking 3 analysis nodes, namely a first analysis node, a second analysis node and a third analysis node, as an example, in conjunction with the reference chain transformation process of figs. 5 to 11. Assuming that the transformation chain is "T-T-G-A-C" as shown in fig. 11, the token 3 is first parsed by the following procedure: the scheduling node sends the transformation chain "T-T-G-A-C" formed by the third transformation to the first analysis node and instructs the third transformation node to send the token 3 to the first analysis node; the first analysis node restores the transformation chain, according to the transformation parameters given by the token 3, to the state before the third transformation started, namely the structure shown in fig. 9, so as to obtain the second base chain "C-G-G-T-A".
The process of parsing the token 2 is: the scheduling node sends the second base chain "C-G-G-T-A" obtained by the first analysis node to the second analysis node and instructs the second transformation node to send the token 2 to the second analysis node; the second analysis node restores the second base chain, according to the transformation parameters given by the token 2, to the state before the second transformation started, namely the structure shown in fig. 7, so as to obtain the first base chain "G-C-A-T-T".
The process of parsing the token 1 is: the scheduling node sends the first base chain "G-C-A-T-T" obtained by the second analysis node to the third analysis node and instructs the first transformation node to send the token 1 to the third analysis node; the third analysis node restores the first base chain, according to the transformation parameters given by the token 1, to the state before the first transformation started, namely the structure shown in fig. 5, so as to obtain the analyzed reference chain "A-A-C-T-G".
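The token-by-token restoration above can be sketched in code. The actual transformation rules are defined by the transformation parameters and the figures (not reproduced here); this sketch therefore assumes a hypothetical rule in which each token rotates the base labels by a speed in a direction and then substitutes them by a pairing rule, and shows that applying the tokens' inverses in reverse order recovers the reference chain.

```python
# Hypothetical sketch of transforming and parsing a reference chain.
# The pairing rule (Watson-Crick-style), directions, and speeds below are
# illustrative assumptions, not the patent's actual transformation rules.
WATSON_CRICK = {"A": "T", "T": "A", "C": "G", "G": "C"}

def transform(chain, token):
    """Forward transformation: rotate the base labels, then apply pairing."""
    labels = list(chain)
    k = token["speed"] % len(labels)
    if token["direction"] == "left":
        labels = labels[k:] + labels[:k]
    else:
        labels = labels[-k:] + labels[:-k]
    return "".join(token["pairing"][b] for b in labels)

def parse(chain, token):
    """Inverse transformation: undo the pairing, then rotate back."""
    inverse_pairing = {v: k for k, v in token["pairing"].items()}
    labels = [inverse_pairing[b] for b in chain]
    k = token["speed"] % len(labels)
    if token["direction"] == "left":   # undo a left rotation with a right one
        labels = labels[-k:] + labels[:-k]
    else:
        labels = labels[k:] + labels[:k]
    return "".join(labels)

# Three transformation nodes, each holding its own token.
tokens = [
    {"direction": "left",  "speed": 1, "pairing": WATSON_CRICK},
    {"direction": "right", "speed": 2, "pairing": WATSON_CRICK},
    {"direction": "left",  "speed": 3, "pairing": WATSON_CRICK},
]

reference_chain = "AACTG"
transformed = reference_chain
for t in tokens:                       # tokens 1, 2, 3 applied in order
    transformed = transform(transformed, t)

# The analysis nodes undo the transformations in reverse order:
# token 3 first, then token 2, then token 1.
recovered = transformed
for t in reversed(tokens):
    recovered = parse(recovered, t)

assert recovered == reference_chain
```

Under these assumed tokens the transformation chain differs from the reference chain at every step, yet the reverse pass restores "AACTG" exactly, which is the property the verification in step S606 checks.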
Step S606, verifying the analyzed reference chain according to the reference chain of the target model stored in advance.
For example, after the scheduling node obtains the analyzed reference chain, the reference chain is returned to the model provider, the model provider compares the stored reference chain with the received analyzed reference chain, and if the stored reference chain is consistent with the received analyzed reference chain, the verification is passed, that is, the scheduling node completely restores the reference chain corresponding to the operation sequence of the target model; if not, the verification fails and the target model cannot be used.
In step S608, if the verification is passed, an operation instruction is sent to the scheduling node, so that the scheduling node randomly selects an operation node from the nodes of the blockchain.
And step S610, analyzing the encrypted model scripts and the encrypted data blocks through the operation node, substituting the analyzed data blocks into the variables corresponding to the analyzed model scripts according to the operation sequence identifiers to obtain intermediate results, and combining the intermediate results with the analyzed model scripts according to the analyzed reference chain to obtain an operation result corresponding to each model script.
A plurality of encrypted model scripts obtained after the target model is split are stored in a target node of the block chain, and a plurality of encrypted data blocks obtained after the model input data is split are stored in a storage node of the block chain; the model script includes an operation sequence identifier of the model script and the data operation sequence of a variable in the target model.
If the verification is passed, the model provider sends a restoration instruction to the scheduling node, and the scheduling node restores the data operation order and the data operation logic of the model scripts according to the analyzed reference chain and the operation rules (i.e., addition, subtraction, multiplication and division) that the model provider has defined for each base label in the reference chain. During specific implementation, the model provider also sends the encrypted model scripts and the variables to the scheduling node, and the scheduling node can pair the encrypted model scripts with the variables according to the operation sequence identifiers carried by the encrypted model scripts to form a plurality of operation units, where one operation unit corresponds to one variable.
The model provider may also send operation instructions to the scheduling node in sequence according to the operation order of the analyzed model scripts. The scheduling node randomly selects operation nodes from the nodes of the block chain in sequence (the number of the operation nodes is consistent with the number of the model scripts); the operation nodes analyze the encrypted model scripts and the encrypted data blocks, complete the operation of the corresponding operation units to obtain the operation result corresponding to each model script, and store the operation results in the corresponding operation nodes.
And step S612, combining the operation results corresponding to each model script to obtain a final operation result.
According to the model using method, the data operation logic of the target model is encrypted by means of the transformation chain, so that the security of the distributed operation process of model evaluation is improved. In addition, the input data are segmented, the model is segmented, and encryption is performed through base pairing, so that none of the nodes that interact with the data and the model (such as the storage nodes, the operation nodes and the intermediate variable management nodes) can obtain a full view of the data or the model, nor reversely deduce the complete functionality of the data and the model.
Corresponding to the embodiment of the above model storage method, an embodiment of the present invention provides a model storage apparatus, as shown in fig. 14, including:
the model splitting module 140 is configured to split the target model into a plurality of model scripts according to a preset splitting rule; and saving a plurality of model scripts to a preset block chain.
A reference chain generating module 141, configured to generate reference chains of a plurality of model scripts; the reference chain comprises a data operation sequence among a plurality of model scripts and data operation logic corresponding to each model script.
A transformation module 142, configured to determine a transformation node from the block chain, and transform the reference chain based on a preset transformation parameter through the transformation node to obtain a transformation chain; the transformation chain is saved.
The model storage apparatus first splits a target model into a plurality of model scripts according to a preset splitting rule and stores the plurality of model scripts into a preset block chain; it then generates a reference chain of the plurality of model scripts, where the reference chain includes the data operation sequence among the plurality of model scripts and the data operation logic corresponding to each model script; it then determines a transformation node from the block chain, transforms the reference chain based on a preset transformation parameter through the transformation node to obtain a transformation chain, and saves the transformation chain. The apparatus encrypts the data operation sequence among the split model scripts and the data operation logic corresponding to each model script by transforming the reference chain. Compared with an encryption method that introduces noise in a homomorphic encryption process, this method is simple to operate and can improve the confidentiality of the model without introducing noise; meanwhile, the calculation cost of decrypting the model is reduced, which facilitates the use of the model.
Further, the model splitting module 140 is configured to: determining variables contained in the target model according to the data operation logic of the target model; splitting the target model according to variables contained in the target model, so that one model script corresponds to one variable in a plurality of split model scripts; each model script comprises a first operation sequence identifier and a data operation sequence of a variable corresponding to the model script.
Specifically, the apparatus further includes a data splitting module, configured to: splitting the target model according to variables contained in the target model so that after one model script corresponds to one variable in a plurality of split model scripts, splitting input data of the target model according to the variables contained in the target model so that one data block corresponds to one variable in a plurality of split data blocks; the data block comprises a second operation sequence identifier and input data of a variable corresponding to the data block; and encrypting the data blocks aiming at each data block, determining a storage node from the block chain, and storing the encrypted data blocks into the storage node.
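The variable-wise splitting performed by the model splitting module and the data splitting module can be sketched as follows. The model form (3x + 5y - z), the identifier layout, and the variable names are assumptions for demonstration only.

```python
# Illustrative sketch: split a target model into one model script per
# variable (each with a first operation sequence identifier), and split the
# model input data into one data block per variable (each with a second
# operation sequence identifier).
target_model = {"x": ("*", 3), "y": ("*", 5), "z": ("*", -1)}  # 3x + 5y - z
model_input = {"x": 10, "y": 2, "z": 4}

model_scripts = [
    {"order_id": i, "variable": var, "operation": op}
    for i, (var, op) in enumerate(target_model.items())
]

data_blocks = [
    {"order_id": i, "variable": var, "input": model_input[var]}
    for i, var in enumerate(target_model)
]

# One script corresponds to one variable, and one data block corresponds to
# one variable, so scripts and blocks can later be paired by identifier.
assert len(model_scripts) == len(data_blocks) == 3
assert model_scripts[0]["variable"] == data_blocks[0]["variable"]
```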
Further, the model splitting module 140 includes: the encryption unit is used for encrypting each model script to obtain an encryption packet corresponding to each model script; the storage unit is used for determining a target node from the block chain and storing each encrypted packet to the target node; wherein the target node is a node randomly selected from the nodes of the blockchain.
Specifically, the encryption unit is configured to encrypt each model script by using a homomorphic encryption algorithm to obtain an encrypted packet corresponding to each model script; the storage unit is configured to invoke a preset scheduling node, and randomly determine target nodes with the same number as the number of the encrypted packets from the nodes of the block chain, so that one target node stores one encrypted packet.
Further, the reference chain includes a plurality of base labels, and each base label represents an algorithm; the reference chain generating module 141 is configured to: for each model script, allocate a base label to the model script according to the data operation logic of the model script; and sequentially connect the base labels allocated to the model scripts according to the data operation sequence among the plurality of model scripts to obtain the reference chain.
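Base-label allocation can be sketched as below. The patent only states that each base label represents an algorithm (with addition, subtraction, multiplication and division mentioned as operation rules), so the concrete operation-to-label mapping and the sample operation sequence here are hypothetical.

```python
# Hypothetical base-label mapping: each base label represents an algorithm.
BASE_LABELS = {"+": "A", "-": "T", "*": "C", "/": "G"}

# Assumed data operation logic of the split model scripts, in operation order.
script_operations = ["+", "+", "*", "-", "/"]

# Connect the allocated base labels sequentially to form the reference chain.
reference_chain = "".join(BASE_LABELS[op] for op in script_operations)

assert reference_chain == "AACTG"
```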
In a specific implementation, the transformation parameters include a transformation direction, a transformation speed and a pairing transformation rule; the reference chain includes a plurality of base labels, and each base label represents an algorithm. The transformation module 142 is configured to: send a transformation instruction to a preset scheduling node based on preset transformation times and the transformation parameters; randomly select a plurality of transformation nodes from the nodes of the block chain through the scheduling node, where the number of the transformation nodes is the same as the transformation times; transform the base labels in the reference chain according to the transformation speed and the transformation direction through the plurality of transformation nodes to obtain a transformed reference chain; and replace the base labels in the transformed reference chain according to the pairing transformation rule to obtain the transformation chain.
The model storage device provided by the embodiment of the present invention has the same implementation principle and technical effect as the foregoing method embodiments, and for the sake of brief description, reference may be made to the corresponding contents in the foregoing method embodiments for the parts of the embodiment that are not mentioned in the apparatus embodiments.
Corresponding to the embodiment of the model storage method, an embodiment of the present invention further provides another model storage apparatus, where the apparatus is disposed at a scheduling node, as shown in fig. 15, and the apparatus includes:
an instruction receiving module 150, configured to receive a transformation instruction; the conversion instruction carries conversion parameters, conversion times and a reference chain, wherein the conversion parameters comprise a conversion direction, a conversion speed and a pairing conversion rule; the number of the transformation parameters is the same as the transformation times; the reference strand comprises a plurality of base labels, each base label representing an algorithm.
A node selecting module 151, configured to randomly select a plurality of transform nodes from nodes of a preset block chain; wherein the number of the transformation nodes is the same as the transformation times.
The first transformation module 152 is configured to send the reference chain and the first transformation parameter to the first transformation node, so that the first transformation node transforms the reference chain according to the first transformation parameter to obtain a first base chain.
The second transformation module 153 is configured to send the first base chain and the second transformation parameter to a second transformation node, so that the second transformation node transforms the first base chain according to the second transformation parameter to obtain a second base chain, and to continue sending the second base chain and the third transformation parameter to a third transformation node, until the transformation nodes, whose number is the same as the transformation times, complete the transformation and return a final transformation chain; the transformation chain is saved.
The model storage apparatus encrypts the operation sequence of the model by means of base pairing and transformation of the reference chain; each additional base label or transformation increases the decryption difficulty geometrically, while the decryption method itself remains simple. Meanwhile, the transformation times and the transformation parameters can be configured as required, so that the encryption degree and the decryption difficulty are flexible, the calculation cost is configurable, and the occupation of system resources can be customized.
Corresponding to the embodiment of the model using method, the embodiment of the present invention further provides a model using apparatus, as shown in fig. 16, the apparatus including:
the instruction sending module 160 is configured to send a model use instruction to a preset scheduling node, and analyze a transformation chain corresponding to the target model through the scheduling node to obtain an analyzed reference chain; the transformation chain is obtained by transforming a reference chain based on preset transformation parameters through transformation nodes; the reference chain comprises a data operation sequence among a plurality of model scripts after the target model is split and a data operation logic corresponding to each model script.
And a data operation module 161, configured to determine an operation node from the blockchain, and calculate, through the operation node, an operation result corresponding to each model script based on the analyzed reference chain and the model input data.
And the result combination module 162 is used for combining the operation result corresponding to each model script to obtain a final operation result.
The model using apparatus first sends a model using instruction to a preset scheduling node, and the scheduling node analyzes the transformation chain corresponding to the target model to obtain an analyzed reference chain; an operation node is determined from the block chain, and the operation result corresponding to each model script is calculated through the operation node based on the analyzed reference chain and the model input data; the operation results corresponding to the model scripts are then combined to obtain a final operation result. The apparatus encrypts the data operation logic of the target model with the transformation chain, thereby improving the security of the distributed operation process of model evaluation.
Specifically, each transformation node stores transformation parameters executed by the transformation node; the instruction sending module 160 is configured to: randomly determining a plurality of analysis nodes from the nodes of the block chain through a scheduling node; the number of the analysis nodes is the same as that of the transformation nodes; and sending the transformation chain to an analysis node through a scheduling node, and instructing the transformation node to send the transformation parameters to the analysis node so that the analysis node analyzes the transformation chain according to the transformation parameters to obtain an analyzed reference chain.
Further, the apparatus further includes a reference chain verification module configured to: sending a model using instruction to a preset scheduling node, analyzing a transformation chain corresponding to the target model through the scheduling node to obtain an analyzed reference chain, and verifying the analyzed reference chain according to the prestored reference chain of the target model; if the verification is passed, an operational node is determined from the blockchain.
Specifically, a plurality of encrypted model scripts obtained after the target model is split are stored in a target node of the block chain, and a plurality of encrypted data blocks obtained after the model input data is split are stored in a storage node of the block chain; the model script includes an operation sequence identifier of the model script and the data operation sequence of a variable in the target model. The data operation module 161 is configured to: send an operation instruction to the scheduling node so that the scheduling node randomly selects an operation node from the nodes of the block chain; and analyze the encrypted model scripts and the encrypted data blocks through the operation node, substitute the analyzed data blocks into the variables corresponding to the analyzed model scripts according to the operation sequence identifiers to obtain intermediate results, and combine the intermediate results with the analyzed model scripts according to the analyzed reference chain to obtain the operation result corresponding to each model script.
The model using apparatus provided in the embodiment of the present invention has the same implementation principle and technical effect as those of the foregoing method embodiments, and for the sake of brief description, reference may be made to the corresponding contents in the foregoing method embodiments for the parts of the apparatus embodiments that are not mentioned.
An embodiment of the present invention further provides an electronic device, which is shown in fig. 17 and includes a processor 101 and a memory 100, where the memory 100 stores machine executable instructions that can be executed by the processor 101, and the processor 101 executes the machine executable instructions to implement the model storage method or the model using method.
Further, the electronic device shown in fig. 17 further includes a bus 102 and a communication interface 103, and the processor 101, the communication interface 103, and the memory 100 are connected by the bus 102.
The memory 100 may include a high-speed Random Access Memory (RAM) and may further include a non-volatile memory (non-volatile memory), such as at least one disk memory. The communication connection between the network element of the system and at least one other network element is realized through at least one communication interface 103 (which may be wired or wireless), and the internet, a wide area network, a local area network, a metropolitan area network, and the like can be used. The bus 102 may be an ISA bus, a PCI bus, an EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 17, but that does not indicate only one bus or one type of bus.
The processor 101 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 101. The processor 101 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), or other programmable logic devices, discrete gate or transistor logic devices, or discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory 100, and the processor 101 reads the information in the memory 100 and completes the steps of the method of the foregoing embodiments in combination with its hardware.
The embodiment of the present invention further provides a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions, and when the machine-executable instructions are called and executed by a processor, the machine-executable instructions cause the processor to implement the model storage method or the model using method.
The model storage method, the model using method, the device and the computer program product of the electronic device provided by the embodiments of the present invention include a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute the method described in the foregoing method embodiments, and specific implementations may refer to the method embodiments and are not described herein again.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, an electronic device, or a network device) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present invention, which are used to illustrate the technical solutions of the present invention rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art can still modify or easily conceive of changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some technical features thereof, within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (17)
1. A method of model storage, the method comprising:
splitting the target model into a plurality of model scripts according to a preset splitting rule; storing the plurality of model scripts into a preset block chain;
generating a reference chain of the plurality of model scripts; the reference chain comprises a data operation sequence among the plurality of model scripts and a data operation logic corresponding to each model script;
determining a transformation node from the block chain, and transforming the reference chain based on a preset transformation parameter through the transformation node to obtain a transformation chain; saving the transformation chain.
2. The method according to claim 1, wherein the step of splitting the target model into a plurality of model scripts according to a preset splitting rule comprises:
determining variables contained in the target model according to the data operation logic of the target model;
splitting the target model according to variables contained in the target model, so that one model script in a plurality of split model scripts corresponds to one variable; each model script comprises a first operation sequence identifier and a data operation sequence of a variable corresponding to the model script.
3. The method according to claim 2, wherein after the step of splitting the target model according to variables included in the target model, so that one model script of the multiple split model scripts corresponds to one variable, the method further comprises:
splitting input data of the target model according to variables contained in the target model, so that one data block corresponds to one variable in a plurality of split data blocks; the data block comprises a second operation sequence identifier and input data of a variable corresponding to the data block;
and encrypting the data blocks aiming at each data block, determining a storage node from the block chain, and storing the encrypted data blocks into the storage node.
4. The method of claim 1, wherein the step of saving the plurality of model scripts to a predetermined blockchain comprises:
encrypting each model script to obtain an encrypted packet corresponding to each model script;
determining a target node from the block chain, and storing each encrypted packet to the target node; and the target node is a node randomly selected from the nodes of the block chain.
5. The method according to claim 4, wherein the step of encrypting each of the model scripts to obtain an encrypted package corresponding to each of the model scripts comprises:
encrypting each model script by adopting a homomorphic encryption algorithm to obtain an encrypted packet corresponding to each model script;
the step of determining a target node from the blockchain and storing each encrypted packet to the target node includes:
and calling a preset scheduling node, and randomly determining target nodes with the same number as the encryption packets from the nodes of the block chain so that one target node stores one encryption packet.
6. The method of claim 1, wherein the reference chain comprises a plurality of base labels, each of the base labels representing an algorithm;
the step of generating a reference chain of the plurality of model scripts comprises:
for each model script, distributing base labels for the model scripts according to the data operation logic of the model scripts;
and sequentially connecting the base labels allocated to each model script according to the data operation sequence among the plurality of model scripts to obtain a reference chain.
7. The method of claim 1, wherein the transformation parameters include transformation direction, transformation speed, and pairing transformation rules; the reference chain comprises a plurality of base labels, and each base label represents an algorithm;
the step of determining a transformation node from the block chain, and transforming the reference chain based on a preset transformation parameter through the transformation node to obtain a transformation chain includes:
sending a transformation instruction to a preset scheduling node based on preset transformation times and the transformation parameters;
randomly selecting a plurality of transformation nodes from the nodes of the block chain through the scheduling node; wherein the number of the transformation nodes is the same as the number of times of transformation;
converting the base labels in the reference chain according to the conversion speed and the conversion direction through the plurality of conversion nodes to obtain a converted reference chain; and replacing the base labels in the transformed reference chain according to the pairing transformation rule to obtain a transformed chain.
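Claim 7 leaves the exact semantics of direction, speed, and pairing rule open; one plausible reading — rotate the base labels by `speed` positions in `direction`, then substitute each label via the pairing rule — can be sketched as follows. The pairing table, modeled on base-pair complements, is an assumption.

```python
def transform(reference_chain, direction, speed, pairing):
    """One transformation step: rotate the base labels by `speed` positions
    in `direction`, then substitute each label via the pairing rule."""
    labels = list(reference_chain)
    shift = speed % len(labels)
    if direction == "right":
        labels = labels[-shift:] + labels[:-shift]
    else:  # "left"
        labels = labels[shift:] + labels[:shift]
    return "".join(pairing.get(b, b) for b in labels)

# Hypothetical pairing transformation rule (A<->T, C<->G complements).
pairing = {"A": "T", "T": "A", "C": "G", "G": "C"}
assert transform("ATCG", "right", 1, pairing) == "CTAG"
```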
8. A method of model storage, the method comprising:
receiving a transformation instruction, the transformation instruction carrying transformation parameters, a number of transformations, and a reference chain, wherein the transformation parameters comprise a transformation direction, a transformation speed, and a pairing transformation rule; the number of sets of transformation parameters is the same as the number of transformations; and the reference chain comprises a plurality of base labels, each base label representing an algorithm;
randomly selecting a plurality of transformation nodes from the nodes of a preset blockchain, wherein the number of transformation nodes equals the number of transformations;
sending the reference chain and the first transformation parameters to a first transformation node, so that the first transformation node transforms the reference chain according to the first transformation parameters to obtain a first base chain;
sending the first base chain and the second transformation parameters to a second transformation node, so that the second transformation node transforms the first base chain according to the second transformation parameters to obtain a second base chain, and continuing to send the second base chain and the third transformation parameters to a third transformation node, and so on, until as many transformation nodes as the number of transformations have completed their transformations and the final transformation chain is returned; and saving the transformation chain.
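The node-by-node pipeline of claim 8 amounts to folding a single-step transformation over the per-node parameter sets: each node transforms the previous node's output, and the last result is the final transformation chain. The sketch below assumes that reading, with a toy single-step transform.

```python
from functools import reduce

def multi_node_transform(reference_chain, per_node_params, transform_fn):
    """Fold a single-step transformation over the parameter sets: the first
    node transforms the reference chain, each later node transforms the
    previous node's output, and the last output is the transformation chain."""
    return reduce(lambda chain, params: transform_fn(chain, params),
                  per_node_params, reference_chain)

# Toy single-step transform: reverse the chain when the parameters say so.
step = lambda chain, params: chain[::-1] if params["reverse"] else chain
final = multi_node_transform("ATCG", [{"reverse": True}, {"reverse": True}], step)
assert final == "ATCG"  # two reversals restore the original
```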
9. A method of using a model, the method comprising:
sending a model-use instruction to a preset scheduling node, and parsing, through the scheduling node, the transformation chain corresponding to a target model to obtain a parsed reference chain, wherein the transformation chain is obtained by transforming a reference chain through transformation nodes based on preset transformation parameters, and the reference chain comprises the data operation order among the plurality of model scripts into which the target model is split and the data operation logic corresponding to each model script;
determining operation nodes from a preset blockchain, and calculating, through the operation nodes, an operation result corresponding to each model script based on the parsed reference chain and the parsed model input data;
and combining the operation results corresponding to the model scripts to obtain a final operation result.
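Under assumed representations (the parsed reference chain as an ordered list of script identifiers and callables, and result combination as feeding each script's output to the next), claim 9's use flow can be sketched as:

```python
def use_model(parsed_reference_chain, model_input):
    """parsed_reference_chain: list of (script_id, fn) in data-operation
    order. Each script's operation result is recorded, and the combination
    step here simply chains each result into the next script."""
    results = {}
    value = model_input
    for script_id, fn in parsed_reference_chain:  # per-script operation
        value = fn(value)
        results[script_id] = value
    return results, value  # per-script results and the final combined result

results, final = use_model([("s1", lambda x: x * 2), ("s2", lambda x: x + 3)], 5)
assert final == 13 and results == {"s1": 10, "s2": 13}
```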
10. The method according to claim 9, wherein each transformation node stores the transformation parameters it executed;
and the step of parsing, through the scheduling node, the transformation chain corresponding to the target model to obtain a parsed reference chain comprises:
randomly determining, through the scheduling node, a plurality of parsing nodes from the nodes of the blockchain, wherein the number of parsing nodes is the same as the number of transformation nodes;
and sending the transformation chain to the parsing nodes through the scheduling node, and instructing the transformation nodes to send their transformation parameters to the parsing nodes, so that the parsing nodes parse the transformation chain according to the transformation parameters to obtain the parsed reference chain.
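If the transformation step is read as rotate-then-substitute (an assumption, not stated in the claims), then claim 10's parsing step is its inverse: undo the pairing substitution, then rotate back. Parameter names below are illustrative.

```python
def parse_step(transformed, direction, speed, pairing):
    """Invert one rotate-then-substitute transformation step: undo the
    pairing substitution first, then rotate back in the opposite direction."""
    inverse_pairing = {v: k for k, v in pairing.items()}
    labels = [inverse_pairing.get(b, b) for b in transformed]
    shift = speed % len(labels)
    if direction == "right":   # undo a right rotation with a left one
        labels = labels[shift:] + labels[:shift]
    else:                      # undo a left rotation with a right one
        labels = labels[-shift:] + labels[:-shift]
    return "".join(labels)

pairing = {"A": "T", "T": "A", "C": "G", "G": "C"}
# "CTAG" is "ATCG" rotated right by 1 and pair-substituted; parsing recovers it.
assert parse_step("CTAG", "right", 1, pairing) == "ATCG"
```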
11. The method according to claim 9, wherein after the step of sending a model-use instruction to a preset scheduling node and parsing, through the scheduling node, the transformation chain corresponding to a target model to obtain a parsed reference chain, the method further comprises:
verifying the parsed reference chain against a pre-stored reference chain of the target model;
and if the verification passes, executing the step of determining operation nodes from the blockchain.
12. The method of claim 9, wherein the plurality of encrypted model scripts obtained by splitting the target model are stored in target nodes of the blockchain, and the plurality of encrypted data blocks obtained by splitting the model input data are stored in storage nodes of the blockchain; and each model script comprises an operation order identifier of that model script and the data operation order of a variable in the target model;
the step of calculating, through the operation nodes, an operation result corresponding to each model script based on the parsed reference chain and the model input data comprises:
sending an operation instruction to the scheduling node, so that the scheduling node randomly selects operation nodes from the nodes of the blockchain;
and decrypting, through the operation nodes, the encrypted model scripts and the encrypted data blocks, substituting the decrypted data blocks into the variables of the corresponding decrypted model scripts according to the operation order identifiers to obtain intermediate results, and combining the intermediate results with the decrypted model scripts according to the parsed reference chain to obtain the operation result corresponding to each model script.
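Claim 12's substitution step can be sketched with hypothetical structures: each decrypted script carries an operation-order identifier, a variable name, and an expression, and the matching decrypted data block is bound to that variable (alongside earlier intermediate results) before the expression is evaluated.

```python
def run_scripts(scripts, data_blocks):
    """scripts: {order_id: (var_name, expression)}; data_blocks:
    {order_id: value}. For each script, in operation order, the data block
    is substituted into the script's variable, and earlier intermediate
    results (r1, r2, ...) are also visible to later expressions."""
    intermediate = {}
    for order_id in sorted(scripts):
        var, expr = scripts[order_id]
        env = {var: data_blocks[order_id], **intermediate}
        intermediate[f"r{order_id}"] = eval(expr, {}, env)  # per-script result
    return intermediate

out = run_scripts({1: ("x", "x * x"), 2: ("y", "y + r1")}, {1: 3, 2: 4})
assert out == {"r1": 9, "r2": 13}
```

Using `eval` keeps the sketch short; a real system would execute the decrypted script bodies directly rather than evaluating expression strings.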
13. A model storage device, the device comprising:
a model splitting module, configured to split a target model into a plurality of model scripts according to a preset splitting rule, and to store the plurality of model scripts into a preset blockchain;
a reference chain generating module, configured to generate a reference chain for the plurality of model scripts, wherein the reference chain comprises the data operation order among the plurality of model scripts and the data operation logic corresponding to each model script;
and a transformation module, configured to determine transformation nodes from the blockchain, to transform the reference chain through the transformation nodes based on preset transformation parameters to obtain a transformation chain, and to save the transformation chain.
14. A model storage device, the device comprising:
an instruction receiving module, configured to receive a transformation instruction, the transformation instruction carrying transformation parameters, a number of transformations, and a reference chain, wherein the transformation parameters comprise a transformation direction, a transformation speed, and a pairing transformation rule; the number of sets of transformation parameters is the same as the number of transformations; and the reference chain comprises a plurality of base labels, each base label representing an algorithm;
a node selection module, configured to randomly select a plurality of transformation nodes from the nodes of a preset blockchain, wherein the number of transformation nodes equals the number of transformations;
a first transformation module, configured to send the reference chain and the first transformation parameters to a first transformation node, so that the first transformation node transforms the reference chain according to the first transformation parameters to obtain a first base chain;
and a second transformation module, configured to send the first base chain and the second transformation parameters to a second transformation node, so that the second transformation node transforms the first base chain according to the second transformation parameters to obtain a second base chain, to continue sending the second base chain and the third transformation parameters to a third transformation node, and so on, until as many transformation nodes as the number of transformations have completed their transformations and the final transformation chain is returned; and to save the transformation chain.
15. A model-using apparatus, the apparatus comprising:
an instruction sending module, configured to send a model-use instruction to a preset scheduling node, and to parse, through the scheduling node, the transformation chain corresponding to a target model to obtain a parsed reference chain, wherein the transformation chain is obtained by transforming a reference chain through transformation nodes based on preset transformation parameters, and the reference chain comprises the data operation order among the plurality of model scripts into which the target model is split and the data operation logic corresponding to each model script;
a data operation module, configured to determine operation nodes from a preset blockchain, and to calculate, through the operation nodes, an operation result corresponding to each model script based on the parsed reference chain and the parsed model input data;
and a result combination module, configured to combine the operation results corresponding to the model scripts to obtain a final operation result.
16. An electronic device comprising a processor and a memory, the memory storing machine-executable instructions executable by the processor, the processor executing the machine-executable instructions to implement the model storage method of any one of claims 1 to 8 or the model using method of any one of claims 9 to 12.
17. A machine-readable storage medium having stored thereon machine-executable instructions which, when invoked and executed by a processor, cause the processor to implement the model storage method of any one of claims 1 to 8 or the model use method of any one of claims 9 to 12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110213323.9A CN112948875B (en) | 2021-02-25 | 2021-02-25 | Model storage method, model use method, device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112948875A true CN112948875A (en) | 2021-06-11 |
CN112948875B CN112948875B (en) | 2024-08-20 |
Family
ID=76246229
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110213323.9A Active CN112948875B (en) | 2021-02-25 | 2021-02-25 | Model storage method, model use method, device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112948875B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110543776A (en) * | 2019-08-30 | 2019-12-06 | 联想(北京)有限公司 | model processing method, model processing device, electronic equipment and medium |
CN110995749A (en) * | 2019-12-17 | 2020-04-10 | 北京海益同展信息科技有限公司 | Block chain encryption method and device, electronic equipment and storage medium |
CN110991622A (en) * | 2019-08-22 | 2020-04-10 | 腾讯科技(深圳)有限公司 | Machine learning model processing method based on block chain network and node |
US20200151708A1 (en) * | 2018-11-09 | 2020-05-14 | International Business Machines Corporation | Protection of data trading |
CN111371544A (en) * | 2020-05-27 | 2020-07-03 | 支付宝(杭州)信息技术有限公司 | Prediction method and device based on homomorphic encryption, electronic equipment and storage medium |
CN111414426A (en) * | 2020-03-26 | 2020-07-14 | 北京云图科瑞科技有限公司 | Data processing method and system based on block chain |
Non-Patent Citations (2)
Title |
---|
KRISHNA, S. RAMA et al.: "Encryption First Split Next Model for Co-tenant Covert Channel Protection", Computational Intelligence in Data Mining, CIDM 2016, 29 May 2018 (2018-05-29) *
REN YI: "Design of a Network Multi-Server SIP Information Encryption System Based on Blockchain and Artificial Intelligence", Computer Science, no. 1, 15 June 2020 (2020-06-15) *
Also Published As
Publication number | Publication date |
---|---|
CN112948875B (en) | 2024-08-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Aljawarneh et al. | A multithreaded programming approach for multimedia big data: encryption system | |
Mouchet et al. | Lattigo: A multiparty homomorphic encryption library in go | |
CN111340453B (en) | Federal learning development method, device, equipment and storage medium | |
CN110019075B (en) | Log encryption method and device and log decryption method and device | |
Kanukurthi et al. | Non-malleable randomness encoders and their applications | |
Gong et al. | Homomorphic evaluation of the integer arithmetic operations for mobile edge computing | |
JP2022095852A (en) | Digital signature method, signature information verification method, related device, and electronic device | |
CN112948083A (en) | Data processing method and device and electronic equipment | |
CN112286752A (en) | Algorithm verification method and system for federated learning heterogeneous processing system | |
CN114595483B (en) | Secure multi-party computing method and device, electronic equipment and storage medium | |
CN111246407B (en) | Data encryption and decryption method and device for short message transmission | |
CN116599669A (en) | Data processing method, device, computer equipment and storage medium | |
Schroepfer et al. | Forecasting run-times of secure two-party computation | |
CN116684870B (en) | Access authentication method, device and system of electric power 5G terminal | |
CN112948875B (en) | Model storage method, model use method, device and electronic equipment | |
US20160006563A1 (en) | Encrypted data computation system, device, and program | |
Bellini et al. | New Records of Pre-image Search of Reduced SHA-1 Using SAT Solvers | |
CN116975366A (en) | Data alignment method, device, electronic equipment and readable storage medium | |
CN113468574A (en) | Block chain data uplink method and device | |
CN112751675B (en) | Information monitoring method, system, equipment and storage medium based on block chain | |
De Carvalho | A practical validation of homomorphic message authentication schemes | |
CN110943832A (en) | Data encryption transmission method | |
Jang et al. | A Study on Scalar Multiplication Parallel Processing for X25519 Decryption of 5G Core Network SIDF Function for mMTC IoT Environment | |
CN116933334B (en) | Calculation element authentication method and device based on data operation project | |
CN114218534B (en) | Method, device, equipment and storage medium for checking offline package |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||