CN113792339A - Bidirectional privacy-preserving neural network model sharing method

Bidirectional privacy-preserving neural network model sharing method

Info

Publication number
CN113792339A
Authority
CN
China
Prior art keywords
coefficient
cooperation
retention
model
cooperative
Prior art date
Legal status
Pending
Application number
CN202111052963.2A
Other languages
Chinese (zh)
Inventor
张金琳
俞学劢
高航
Current Assignee
Zhejiang Shuqin Technology Co Ltd
Original Assignee
Zhejiang Shuqin Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Shuqin Technology Co Ltd
Priority to CN202111052963.2A
Publication of CN113792339A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 Protecting data
    • G06F21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218 Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioethics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to the technical field of machine learning, and in particular to a bidirectional privacy-preserving neural network model sharing method, comprising the following steps: establishing a cooperative node; splitting each input-layer connection into two connections, respectively recorded as a retained connection and a cooperative connection; deleting the input-layer neurons to obtain a shared model; the model party sends the shared model to the cooperative node; the data source party generates a cooperation coefficient k1 and a retention coefficient k2; after receiving tokens, the model party assigns a cooperation weight coefficient to the cooperative connection, calculates the retention weight coefficient, and sends the retention weight coefficient to the data source party; the data source party sends the cooperation value to the cooperative node, and sends the product of the retained value and the retention weight coefficient to the cooperative node; after the products of all retained values and retention weight coefficients have been obtained, the cooperative node evaluates the shared model, obtains its prediction result, and sends the prediction result to the data source party. The substantial effect of the invention is that the privacy of the neural network model is maintained while the privacy of the data is also protected.

Description

Bidirectional privacy-preserving neural network model sharing method
Technical Field
The invention relates to the technical field of machine learning, and in particular to a bidirectional privacy-preserving neural network model sharing method.
Background
Machine learning has become an important computer technology that is profoundly changing technological and economic development, and neural networks are one of its important implementations. A neural network is a complex network system formed by a large number of simple processing units, called neurons, that are widely interconnected; it reflects many basic features of human brain function and is a highly complex nonlinear dynamical learning system. Neural networks have the capabilities of large-scale parallelism, distributed storage and processing, self-organization, self-adaptation and self-learning, and are particularly suited to imprecise and fuzzy information-processing problems that require many factors and conditions to be considered simultaneously. They have broad and attractive prospects in fields such as system identification, pattern recognition and intelligent control. Whatever the form of machine learning, a large amount of high-quality data is what guarantees high accuracy, and this is especially true for neural network models. Although enterprises and organizations have accumulated large amounts of data in the course of informatization, the distribution of such data is uneven. Owing to competitive relationships, data privacy protection and other reasons, enterprises today often face a shortage of data when training neural network models. Even when the problem of sample data sources is solved, many enterprises spend a great deal of time training neural network models with the same function for the same business requirement, wasting social resources. There is therefore a need for a neural network model sharing method that can protect both data privacy and model privacy.
For example, Chinese patent CN110222604A, published September 10, 2019, discloses a method for identifying object attributes and determining target objects quickly and efficiently while maintaining high accuracy. The method comprises the following steps: step S1, preprocessing an image to be detected to obtain a preprocessed image; step S2, inputting the preprocessed image into a multi-attribute-recognition shared convolutional network model to obtain each object to be judged in the image and the object attributes of each such object under each attribute category; and step S3, determining the target object according to the object attributes and preset attributes. The multi-attribute-recognition shared convolutional network model consists of an object detection part that detects the objects to be judged in the preprocessed image and an object attribute extraction part that extracts their attributes; the attribute extraction part is obtained by training an attribute acquisition network model containing fully connected layers corresponding to the respective attribute types in a round-robin alternating training mode. Although this achieves sharing of a convolutional network model, the method keeps neither the convolutional network model nor the image under test private, and so cannot be applied to services involving private data.
Disclosure of Invention
The technical problem to be solved by the invention is the current lack of a privacy-protecting neural network model sharing scheme. A bidirectional privacy-preserving neural network model sharing method is provided.
In order to solve the above technical problem, the technical scheme adopted by the invention is as follows. A bidirectional privacy-preserving neural network model sharing method comprises the following steps: establishing a cooperative node; the model party splits each connection of the input-layer neurons into two connections, respectively recorded as a retained connection and a cooperative connection, whose weights are respectively recorded as the retention weight coefficient and the cooperation weight coefficient; deleting the input-layer neurons and establishing a retained input neuron and a cooperative input neuron for the retained connection and the cooperative connection respectively, obtaining the shared model; the model party sends the shared model to the cooperative node; the data source party generates a cooperation coefficient k1 and a retention coefficient k2, sends them to the model party, and transfers a number of tokens to the model party's account; after receiving the tokens, the model party assigns a cooperation weight coefficient to the cooperative connection, calculates the retention weight coefficient from the cooperation weight coefficient, the cooperation coefficient k1, the retention coefficient k2 and the weight coefficient of the original connection, and sends the retention weight coefficient to the data source party; the data source party multiplies the input number x by the cooperation coefficient k1 to obtain the cooperation value and sends it to the cooperative node, which uses the cooperation value as the value of the cooperative input neuron; the data source party multiplies the input number x by the retention coefficient k2 to obtain the retained value and sends the product of the retained value and the retention weight coefficient to the cooperative node; and after the products of the retained values and retention weight coefficients corresponding to the original connections of all input-layer neurons have been obtained, the cooperative node evaluates the shared model, obtains its prediction result, and sends the prediction result to the data source party.
Preferably, the data source party independently generates a cooperation coefficient ki1 and a retention coefficient ki2 for each connection involving the input layer, forming a coefficient pair (ki1, ki2), where i denotes the serial number of the connection; the set of coefficient pairs (ki1, ki2) is sent to the model party; the model party generates a corresponding cooperation weight coefficient wi_c for each coefficient pair (ki1, ki2), calculates the retention weight coefficient wi_r, and sends the set of retention weight coefficients wi_r to the data source party.
Preferably, the model party maintains a history table recording, for each connection i involving the input layer, the received cooperation coefficient ki1 and retention coefficient ki2 together with the correspondingly generated cooperation weight coefficient wi_c and retention weight coefficient wi_r; when the same cooperation coefficient ki1 and retention coefficient ki2 already recorded for connection i are received again, the same wi_c as recorded in the history table is generated for the cooperative connection, and the retention weight coefficient wi_r recorded in the history table is sent to the data source party.
Preferably, the model party adds a random perturbation to the weight coefficient of the original connection, the ratio of the perturbation to the original weight coefficient being smaller than a preset threshold; the retention weight coefficient is then calculated from the cooperation weight coefficient, the cooperation coefficient k1, the retention coefficient k2 and the perturbed original weight coefficient, and sent to the data source party.
Preferably, a plurality of cooperative nodes is established, their number matching the number of layers of the neural network model; each layer of the shared model is sent to a corresponding cooperative node; the cooperative input neurons, the cooperation weight coefficients and the products of the retained values and retention weight coefficients are sent to the cooperative node corresponding to the input layer; and after a cooperative node obtains the outputs of its layer of neurons of the shared model, it sends them to the cooperative node corresponding to the next layer.
A bidirectional privacy-preserving neural network model sharing method comprises the following steps: establishing cooperative nodes; the model party splits each connection of the input-layer neurons into several connections, respectively recorded as one retained connection and several cooperative connections, whose weights are respectively recorded as the retention weight coefficient and the cooperation weight coefficients; deleting the input-layer neurons and establishing a retained input neuron and cooperative input neurons for the retained connection and the cooperative connections respectively, obtaining the shared model; the model party sends the shared model to the cooperative nodes; the data source party generates several cooperation coefficients kj1 and a retention coefficient k2, sends them to the model party, and transfers a number of tokens to the model party's account; after receiving the tokens, the model party assigns a cooperation weight coefficient w_jc to each cooperative connection, calculates the retention weight coefficient w_r from the cooperation weight coefficients, the cooperation coefficients kj1, the retention coefficient k2 and the weight coefficient of the original connection, and sends the retention weight coefficient w_r to the data source party; the data source party multiplies the input number x by each cooperation coefficient kj1 to obtain the cooperation values and sends them to the cooperative nodes, which respectively use the cooperation values as the values of the corresponding cooperative input neurons; the data source party multiplies the input number x by the retention coefficient k2 as the retained value and sends the product of the retained value and the retention weight coefficient w_r to a selected cooperative node; all cooperative nodes send the products of their cooperation values and the corresponding cooperation weight coefficients w_jc to the selected cooperative node; and the selected cooperative node evaluates the shared model, obtains its prediction result, and sends the prediction result to the data source party.
Preferably, the model party maintains a history table recording the received cooperation coefficients kj1 and retention coefficient k2 together with the correspondingly generated cooperation weight coefficients w_jc and retention weight coefficient w_r; when the same cooperation coefficients kj1 and retention coefficient k2 recorded in the history table are received again, the same cooperation weight coefficients w_jc as recorded in the history table are generated for the cooperative connections, and the retention weight coefficient w_r recorded in the history table is sent to the data source party.
The substantial effects of the invention are as follows: each connection of an input-layer neuron is split into two connections, and the cooperation coefficient, retention coefficient, cooperation weight coefficient and retention weight coefficient of the two connections together form the equivalent weight coefficient, hiding the input-layer weights of the original neural network model so that the data source party cannot obtain the complete neural network model; at the same time, the data of the data source party is split into a cooperation value and a retained value, so the data is hidden as well. Throughout the sharing of the neural network model, the privacy of the model is maintained and the privacy of the data is protected.
Drawings
Fig. 1 is a schematic flow chart of the neural network model sharing method according to embodiment one.
FIG. 2 is a schematic diagram of a neural network model to be shared according to an embodiment.
FIG. 3 is a schematic diagram of neuron connections according to an embodiment.
FIG. 4 is a schematic diagram of a shared model according to an embodiment.
FIG. 5 is a schematic flow chart of the neural network model sharing method according to embodiment two.
Fig. 6 is a schematic diagram of connection splitting according to embodiment two.
In the figures: 11, neural network model; 12, shared model; 13, retained connection; 14, cooperative connection.
Detailed Description
The following provides a more detailed description of the present invention, with reference to the accompanying drawings.
Embodiment one:
A bidirectional privacy-preserving neural network model sharing method, referring to fig. 1, comprises the following steps:
step A01) establishing a cooperative node;
step A02) the model party splits each connection of the input-layer neurons into two connections, respectively recorded as a retained connection and a cooperative connection, whose weights are respectively recorded as the retention weight coefficient and the cooperation weight coefficient;
step A03) deleting the input-layer neurons and establishing a retained input neuron and a cooperative input neuron for the retained connection and the cooperative connection respectively, obtaining the shared model;
step A04) the model party sends the shared model to the cooperative node;
step A05) the data source party generates a cooperation coefficient k1 and a retention coefficient k2, sends them to the model party, and transfers a number of tokens to the model party's account;
step A06) after receiving the tokens, the model party assigns a cooperation weight coefficient to the cooperative connection;
step A07) the retention weight coefficient is calculated from the cooperation weight coefficient, the cooperation coefficient k1, the retention coefficient k2 and the weight coefficient of the original connection, and sent to the data source party;
step A08) the data source party multiplies the input number x by the cooperation coefficient k1 to obtain the cooperation value and sends it to the cooperative node, which uses the cooperation value as the value of the cooperative input neuron;
step A09) the data source party multiplies the input number x by the retention coefficient k2 to obtain the retained value, and sends the product of the retained value and the retention weight coefficient to the cooperative node;
step A10) after the products of the retained values and retention weight coefficients corresponding to the original connections of all input-layer neurons have been obtained, the cooperative node evaluates the shared model, obtains its prediction result, and sends the prediction result to the data source party.
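For illustration only, the following minimal Python sketch shows how steps A02, A03 and A07 could look for a dense layer-1 weight matrix. It is a sketch under stated assumptions: the matrix shapes, the randomly chosen cooperation weights and the function name retention_weights are hypothetical and do not appear in the patent.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hidden = 3, 4
    W1 = rng.normal(size=(n_hidden, n_in))   # original, private layer-1 weights (model party)

    # Steps A02/A06: the model party freely chooses cooperation weight coefficients.
    W1_c = rng.normal(size=(n_hidden, n_in))

    # Step A07: the retention weights solve W1 = k2*W1_r + k1*W1_c elementwise,
    # so they depend on the data source party's (k1, k2) and are issued per request.
    def retention_weights(W1, W1_c, k1, k2):
        return (W1 - k1 * W1_c) / k2

    W1_r = retention_weights(W1, W1_c, k1=0.6, k2=0.8)
    assert np.allclose(0.8 * W1_r + 0.6 * W1_c, W1)

    # Steps A03/A04: the shared model handed to the cooperative node contains W1_c and
    # the deeper layers, but never the original W1 or the retention weights W1_r.

The shared model therefore exposes, per connection, only the freely chosen cooperation weight; the original weight can be reconstructed only by combining information held by both parties.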
The data source party independently generates a cooperation coefficient ki1 and a retention coefficient ki2 for each connection involving the input layer, forming a coefficient pair (ki1, ki2), where i denotes the serial number of the connection; the set of coefficient pairs (ki1, ki2) is sent to the model party; the model party generates a corresponding cooperation weight coefficient wi_c for each coefficient pair (ki1, ki2), calculates the retention weight coefficient wi_r, and sends the set of retention weight coefficients wi_r to the data source party.
Referring to fig. 2, the neural network model 11 shared in this embodiment has two hidden layers. The input layer is conventionally called layer 0, the hidden layers are correspondingly called layer 1 and layer 2, and the last layer is the output layer. The output layer has two neurons, i.e. it can output two fields. The input layer has 3 neurons, involving 3 input numbers, denoted x1, x2 and x3 respectively. The input of a hidden-layer neuron is the weighted sum of the values of the connected input-layer neurons, substituted into the activation function. The output of the 1st neuron of layer 1 is y1 = Sigmoid(Σ w1i·xi + b1), where the subscript 1 denotes the 1st neuron of layer 1 and i takes values 1 to 3. The Sigmoid function is the commonly used activation function of the neural network model 11.
Taking the 1st neuron of layer 1 as an example, it involves 3 connections, as shown in fig. 3: one to each of the 3 input-layer neurons, with corresponding weight coefficients w11, w12 and w13. Each of these 3 connections is split into 2 connections. As shown in fig. 4, the 3 connections of the 1st neuron of layer 1 are split into retained connections 13 and cooperative connections 14, so its connections change from 3 to 6, and the corresponding output becomes y1 = Sigmoid(Σ w1_ri·xi_r + Σ w1_ci·xi_c + b1). Taking the 1st input-layer neuron as an example, its original input is x1, and its contribution to the 1st neuron of layer 1 equals x1·w11. After splitting into the retained connection 13 and the cooperative connection 14, that contribution equals x1_r1·w1_r1 + x1_c1·w1_c1 = k2·x1·w1_r1 + k1·x1·w1_c1, so the equivalent weight coefficient is k2·w1_r1 + k1·w1_c1, and it must equal the weight coefficient w11 of the original connection, that is: w11 = k2·w1_r1 + k1·w1_c1. The cooperation coefficient k1 and the retention coefficient k2 are generated by the data source party, and the cooperation weight coefficient w1_c1 is generated by the model party. The retention weight coefficient w1_r1 is then solved from the equation w11 = k2·w1_r1 + k1·w1_c1.
Since the model party does not know the cooperation value, i.e. the value of x1·k1, it cannot learn the specific value of x1, so the input data remains secret. Likewise, the data source party does not know the cooperation weight coefficient w1_c1 and therefore cannot solve for the equivalent weight coefficient, i.e. the weight coefficient w11 of the original connection, so the connection weight coefficients of the neural network model 11 remain secret. The data source party can use its data only once per application, each use requiring a retention weight coefficient w1_r1 from the model party. If the model party does not provide w1_r1, the data source party immediately becomes unable to use the shared model 12 to obtain a correct prediction; the data source party therefore cannot detach from the model party and use the shared model 12 independently. At the same time, the model party is guaranteed a benefit each time the neural network model 11 is shared. Since w1i = k2·w1_ri + k1·w1_ci, we have Sigmoid(Σ w1_ri·xi_r + Σ w1_ci·xi_c + b1) = Sigmoid(Σ w1_ri·k2·xi + Σ w1_ci·k1·xi + b1) = Sigmoid(Σ w1i·xi + b1); that is, after splitting each connection into a cooperative connection 14 and a retained connection 13, the prediction result of the neural network model 11 is unaffected. The computation of the retention weight coefficient involves only addition and multiplication; additively and multiplicatively homomorphic encryption techniques are prior art in this field and are not described here.
If the value of the input neuron is x1 = 16 and the weight coefficient of the original connection is w11 = 0.9, then after splitting into the cooperative connection 14 and the retained connection 13 the data source party generates the cooperation coefficient k1 = 0.6 and the retention coefficient k2 = 0.8. The model party generates the cooperation weight coefficient w1_c1 = 0.5 for the cooperative connection 14; then, from the equation w11 = k2·w1_r1 + k1·w1_c1, i.e. 0.9 = 0.8·w1_r1 + 0.6·0.5, the retention weight coefficient w1_r1 = 0.75 is obtained and sent to the data source party. The weighted sum of the cooperation value and the retained value is then 16·0.8·0.75 + 16·0.6·0.5 = 14.4 = 16·0.9, exactly equivalent to the effect of the original connection.
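The worked example can be checked end to end in a few lines; this is a minimal sketch of steps A05 to A10 for a single connection, reusing the numbers above (the variable names are illustrative):

    from math import isclose

    x1, w11 = 16.0, 0.9               # private input (data source party), private weight (model party)
    k1, k2 = 0.6, 0.8                 # cooperation and retention coefficients
    w1_c1 = 0.5                       # cooperation weight chosen by the model party
    w1_r1 = (w11 - k1 * w1_c1) / k2   # = 0.75, retention weight sent to the data source party

    coop_value = k1 * x1                    # 9.6, sent to the cooperative node (step A08)
    retained_product = (k2 * x1) * w1_r1    # 9.6, sent to the cooperative node (step A09)

    # Step A10: the cooperative node recombines the two shares.
    assert isclose(coop_value * w1_c1 + retained_product, x1 * w11)   # 4.8 + 9.6 = 14.4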
The model party maintains a history table recording, for each connection i involving the input layer, the received cooperation coefficient ki1 and retention coefficient ki2 together with the correspondingly generated cooperation weight coefficient wi_c and retention weight coefficient wi_r. When the same cooperation coefficient ki1 and retention coefficient ki2 already recorded for connection i are received again, the same wi_c as recorded in the history table is generated for the cooperative connection, and the retention weight coefficient wi_r recorded in the history table is sent to the data source party.
The model party also adds a random perturbation to the weight coefficient of the original connection, the ratio of the perturbation to the original weight coefficient being smaller than a preset threshold; the retention weight coefficient is calculated from the cooperation weight coefficient, the cooperation coefficient k1, the retention coefficient k2 and the perturbed original weight coefficient, and sent to the data source party. The data source party therefore cannot build a system of multivariate equations by invoking the shared model 12 repeatedly and then solve for the equivalent weight coefficients. By looking up the history table, the model party returns the same retention weight coefficient whenever the same coefficients are submitted, so repeated queries yield identical equations, no solvable equation system can be constructed, and the privacy and security of the shared model 12 are guaranteed.
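A minimal sketch of this defence for a single connection follows; the dictionary cache, the perturbation bound EPS and the function name are illustrative assumptions, not taken from the patent:

    import random

    history = {}          # (k1, k2) -> (w_c, w_r); one table per connection
    W11 = 0.9             # the connection's original, private weight
    EPS = 0.01            # preset threshold on the perturbation ratio

    def issue_retention_weight(k1: float, k2: float):
        # Identical coefficients always receive the identical answer, so repeated
        # queries yield the same equation and never a solvable equation system.
        if (k1, k2) in history:
            return history[(k1, k2)]
        w = W11 * (1 + random.uniform(-EPS, EPS))   # randomly perturbed original weight
        w_c = random.uniform(-1.0, 1.0)             # fresh cooperation weight coefficient
        w_r = (w - k1 * w_c) / k2
        history[(k1, k2)] = (w_c, w_r)
        return w_c, w_r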
Preferably, this embodiment establishes a plurality of cooperative nodes, their number matching the number of layers of the neural network model. Each layer of the shared model is sent to a corresponding cooperative node; the cooperative input neurons, the cooperation weight coefficients and the products of the retained values and retention weight coefficients are sent to the cooperative node corresponding to the input layer; and after a cooperative node obtains the outputs of its layer of neurons of the shared model, it sends them to the cooperative node corresponding to the next layer. This further protects the privacy of the neural network model.
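Structurally, the layer-per-node arrangement can be sketched as below, assuming one cooperative node holds exactly one layer; in the real protocol the node for layer 1 receives cooperation values and retained products rather than raw inputs, and the class and method names are hypothetical:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    class LayerNode:
        # A cooperative node holding a single layer's weights; no node sees the whole model.
        def __init__(self, W, b, next_node=None):
            self.W, self.b, self.next_node = W, b, next_node

        def run(self, layer_input):
            out = sigmoid(self.W @ layer_input + self.b)   # this layer's neuron outputs
            return self.next_node.run(out) if self.next_node else out

    rng = np.random.default_rng(1)
    node2 = LayerNode(rng.normal(size=(2, 4)), rng.normal(size=2))          # output layer
    node1 = LayerNode(rng.normal(size=(4, 3)), rng.normal(size=4), node2)   # layer 1
    prediction = node1.run(np.array([0.2, -1.0, 0.5]))                      # forwarded layer by layer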
In this embodiment, each connection of an input-layer neuron is split into two connections, and the cooperation coefficient, retention coefficient, cooperation weight coefficient and retention weight coefficient of the two connections together form the equivalent weight coefficient. The input-layer weights of the original neural network model are thereby hidden, so the data source party cannot obtain the complete neural network model; and since the data of the data source party is split into a cooperation value and a retained value, the data is hidden at the same time. Throughout the sharing of the neural network model, the privacy of the model is maintained and the privacy of the data is protected.
Embodiment two:
Compared with embodiment one, this embodiment further refines the splitting of connections. Referring to fig. 5, the method comprises the following steps:
step B01) establishing cooperative nodes;
step B02) the model party splits each connection of the input-layer neurons into several connections, respectively recorded as one retained connection and several cooperative connections, whose weights are respectively recorded as the retention weight coefficient and the cooperation weight coefficients;
step B03) deleting the input-layer neurons and establishing a retained input neuron and cooperative input neurons for the retained connection and the cooperative connections respectively, obtaining the shared model;
step B04) the model party sends the shared model to the cooperative nodes;
step B05) the data source party generates several cooperation coefficients kj1 and a retention coefficient k2, sends them to the model party, and transfers a number of tokens to the model party's account;
step B06) after receiving the tokens, the model party assigns a cooperation weight coefficient w_jc to each cooperative connection;
step B07) the retention weight coefficient w_r is calculated from the cooperation weight coefficients, the cooperation coefficients kj1, the retention coefficient k2 and the weight coefficient of the original connection, and sent to the data source party;
step B08) the data source party multiplies the input number x by each cooperation coefficient kj1 and sends the results, as cooperation values, to the cooperative nodes, which respectively use them as the values of the corresponding cooperative input neurons;
step B09) the data source party multiplies the input number x by the retention coefficient k2 as the retained value, and sends the product of the retained value and the retention weight coefficient w_r to the selected cooperative node;
step B10) all cooperative nodes send the products of their cooperation values and the corresponding cooperation weight coefficients w_jc to the selected cooperative node, which evaluates the shared model, obtains its prediction result, and sends the prediction result to the data source party.
The difference from embodiment one is that embodiment one restricts the split to 2 connections, whereas this embodiment splits each connection into 2 or more connections. As shown in fig. 6, a connection is split into 4 connections: 1 retained connection 13 and 3 cooperative connections 14. The steps executed when splitting into several connections are the same as with 2 connections; the only difference is that more cooperation weight coefficients and cooperation values must be distributed, the equivalence condition becoming w = k2·w_r + Σj kj1·w_jc.
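A minimal sketch of the multi-way split, checking the equivalence w = k2·w_r + Σj kj1·w_jc for the fig. 6 case of 3 cooperative connections (the concrete numbers are illustrative):

    import random
    from math import isclose

    x, w = 16.0, 0.9                                     # private input, private original weight
    J = 3                                                # cooperative connections, as in fig. 6
    kj1 = [random.uniform(0.1, 1.0) for _ in range(J)]   # cooperation coefficients (data source party)
    k2 = 0.8                                             # retention coefficient (data source party)

    w_jc = [random.uniform(-1.0, 1.0) for _ in range(J)]        # chosen by the model party
    w_r = (w - sum(k * wc for k, wc in zip(kj1, w_jc))) / k2    # from w = k2*w_r + sum_j kj1*w_jc

    shares = [k * x * wc for k, wc in zip(kj1, w_jc)]   # products sent by each cooperative node (step B10)
    shares.append((k2 * x) * w_r)                       # retained product from the data source party
    assert isclose(sum(shares), x * w)                  # the selected node recovers exactly x*w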
Embodiment three:
In this application of the bidirectional privacy-preserving neural network model sharing method, bank A trains a neural network model 11 with an anti-money-laundering function on its own business data: given a depositor's account transaction records, it outputs the predicted probability that the account is engaged in money laundering. Bank A has been established for a long time and operates at a large scale, so it has enough data to train the neural network model 11 to a prediction accuracy that meets practical requirements. Bank B is smaller and its data volume is insufficient, so it can hardly train a satisfactory neural network model 11 independently, yet it must also carry out anti-money-laundering tasks. Likewise, still more banks of a similar size to bank B face the difficulty of independently building and training a neural network model 11 with an anti-money-laundering function. Bank A and bank B therefore share the neural network model 11 using the scheme described in this embodiment, solving the problem of bank B's low accuracy and efficiency in performing anti-money-laundering tasks.
The anti-money-laundering model must not be revealed; otherwise lawbreakers could tailor the operation of savings accounts to the model and try to evade its detection. Bank A splits the connections involving the input layer of the trained neural network model 11 into cooperative connections 14 and retained connections 13, establishes cooperative input neurons and retained input neurons for them respectively, and deletes the original input layer. The cooperative input neurons and the retained input neurons are sent to the cooperative node and to bank B respectively. Bank B generates a proportionality coefficient k for each input number, homomorphically encrypts it, and sends it to the smart contract. After obtaining the homomorphically encrypted proportionality coefficients k, bank A first generates the cooperation weight coefficients and sends them to the cooperative node. It then homomorphically encrypts the cooperation weight coefficients and, computing under homomorphic encryption, obtains the homomorphic ciphertexts of the retention weight coefficients, which it sends to the smart contract. The smart contract decrypts them, re-encrypts them with bank B's public key, and sends them to bank B. Having obtained the retention weight coefficients, bank B computes the cooperation values and retained values, sends the cooperation values to the cooperative node, and sends the products of the retained values and retention weight coefficients to the cooperative node. The cooperative node then has enough information to evaluate the shared model 12 and obtain its prediction result, namely the money-laundering probability for the depositor corresponding to bank B's input data, providing guidance for the anti-money-laundering task.
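The homomorphic step of this flow can be sketched with an additively homomorphic scheme such as Paillier. The sketch below uses the third-party python-paillier (phe) package, which supports adding ciphertexts and multiplying a ciphertext by a plaintext scalar; it is a simplification under stated assumptions: a single coefficient pair is used, and bank B performs the final decryption and division by its own k2 itself, rather than routing the result through the smart contract as described above.

    # pip install phe
    from phe import paillier

    pub_b, priv_b = paillier.generate_paillier_keypair()   # bank B's keypair
    k1, k2 = 0.6, 0.8                                      # bank B's private coefficients
    enc_k1 = pub_b.encrypt(k1)                             # sent to bank A in encrypted form

    w11, w1_c1 = 0.9, 0.5          # bank A's private weight and chosen cooperation weight
    # Bank A computes E(w11 - k1*w1_c1) homomorphically, never seeing k1 in the clear.
    enc_num = pub_b.encrypt(w11) + enc_k1 * (-w1_c1)

    # Bank B decrypts and finishes the division by its own retention coefficient k2.
    w1_r1 = priv_b.decrypt(enc_num) / k2
    assert abs(k2 * w1_r1 + k1 * w1_c1 - w11) < 1e-9

Note that bank A reveals neither w11 nor w1_c1 in this exchange: bank B learns only the single combination w1_r1, one equation in two unknowns, matching the security argument of embodiment one.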
The above-described embodiments are only preferred embodiments of the present invention, and are not intended to limit the present invention in any way, and other variations and modifications may be made without departing from the spirit of the invention as set forth in the claims.

Claims (7)

1. A bidirectional privacy-preserving neural network model sharing method, characterized in that
the method comprises the following steps:
establishing a cooperative node;
the model party splits each connection of the input-layer neurons into two connections, respectively recorded as a retained connection and a cooperative connection, whose weights are respectively recorded as the retention weight coefficient and the cooperation weight coefficient;
deleting the input-layer neurons and establishing a retained input neuron and a cooperative input neuron for the retained connection and the cooperative connection respectively, obtaining the shared model;
the model party sends the shared model to the cooperative node;
the data source party generates a cooperation coefficient k1 and a retention coefficient k2, sends them to the model party, and transfers a number of tokens to the model party's account;
after receiving the tokens, the model party assigns a cooperation weight coefficient to the cooperative connection;
the retention weight coefficient is calculated from the cooperation weight coefficient, the cooperation coefficient k1, the retention coefficient k2 and the weight coefficient of the original connection, and sent to the data source party;
the data source party multiplies the input number x by the cooperation coefficient k1 to obtain the cooperation value and sends it to the cooperative node, which uses the cooperation value as the value of the cooperative input neuron;
the data source party multiplies the input number x by the retention coefficient k2 to obtain the retained value, and sends the product of the retained value and the retention weight coefficient to the cooperative node;
and after the products of the retained values and retention weight coefficients corresponding to the original connections of all input-layer neurons have been obtained, the cooperative node evaluates the shared model, obtains its prediction result, and sends the prediction result to the data source party.
2. The bidirectional privacy-preserving neural network model sharing method according to claim 1, characterized in that
the data source party independently generates a cooperation coefficient ki1 and a retention coefficient ki2 for each connection involving the input layer, forming a coefficient pair (ki1, ki2), where i denotes the serial number of the connection;
the set of coefficient pairs (ki1, ki2) is sent to the model party;
the model party generates a corresponding cooperation weight coefficient wi_c for each coefficient pair (ki1, ki2), calculates the retention weight coefficient wi_r, and sends the set of retention weight coefficients wi_r to the data source party.
3. The bidirectional privacy-preserving neural network model sharing method according to claim 2, characterized in that
the model party maintains a history table recording, for each connection i involving the input layer, the received cooperation coefficient ki1 and retention coefficient ki2 together with the correspondingly generated cooperation weight coefficient wi_c and retention weight coefficient wi_r;
when the same cooperation coefficient ki1 and retention coefficient ki2 already recorded for connection i are received again, the same wi_c as recorded in the history table is generated for the cooperative connection;
and the retention weight coefficient wi_r recorded in the history table is sent to the data source party.
4. The bidirectional privacy-preserving neural network model sharing method according to any one of claims 1 to 3, characterized in that
the model party adds a random perturbation to the weight coefficient of the original connection, the ratio of the perturbation to the original weight coefficient being smaller than a preset threshold, and the retention weight coefficient is calculated from the cooperation weight coefficient, the cooperation coefficient k1, the retention coefficient k2 and the perturbed original weight coefficient and sent to the data source party.
5. The bidirectional privacy-preserving neural network model sharing method according to any one of claims 1 to 3, characterized in that
a plurality of cooperative nodes is established, their number matching the number of layers of the neural network model;
each layer of the shared model is sent to a corresponding cooperative node;
the cooperative input neurons, the cooperation weight coefficients and the products of the retained values and retention weight coefficients are sent to the cooperative node corresponding to the input layer;
and after a cooperative node obtains the outputs of its layer of neurons of the shared model, it sends them to the cooperative node corresponding to the next layer.
6. A bidirectional privacy-preserving neural network model sharing method, characterized in that
the method comprises the following steps:
establishing cooperative nodes;
the model party splits each connection of the input-layer neurons into several connections, respectively recorded as one retained connection and several cooperative connections, whose weights are respectively recorded as the retention weight coefficient and the cooperation weight coefficients;
deleting the input-layer neurons and establishing a retained input neuron and cooperative input neurons for the retained connection and the cooperative connections respectively, obtaining the shared model;
the model party sends the shared model to the cooperative nodes;
the data source party generates several cooperation coefficients kj1 and a retention coefficient k2, sends them to the model party, and transfers a number of tokens to the model party's account;
after receiving the tokens, the model party assigns a cooperation weight coefficient w_jc to each cooperative connection;
the retention weight coefficient w_r is calculated from the cooperation weight coefficients, the cooperation coefficients kj1, the retention coefficient k2 and the weight coefficient of the original connection, and sent to the data source party;
the data source party multiplies the input number x by each cooperation coefficient kj1 to obtain the cooperation values and sends them to the cooperative nodes, which respectively use them as the values of the corresponding cooperative input neurons;
the data source party multiplies the input number x by the retention coefficient k2 as the retained value, and sends the product of the retained value and the retention weight coefficient w_r to the selected cooperative node;
all cooperative nodes send the products of their cooperation values and the corresponding cooperation weight coefficients w_jc to the selected cooperative node;
and the selected cooperative node evaluates the shared model, obtains its prediction result, and sends the prediction result to the data source party.
7. The bidirectional privacy-preserving neural network model sharing method according to claim 6, characterized in that
the model party maintains a history table recording the received cooperation coefficients kj1 and retention coefficient k2 together with the correspondingly generated cooperation weight coefficients w_jc and retention weight coefficient w_r;
when the same cooperation coefficients kj1 and retention coefficient k2 recorded in the history table are received again, the same cooperation weight coefficients w_jc as recorded in the history table are generated for the cooperative connections;
and the retention weight coefficient w_r recorded in the history table is sent to the data source party.
CN202111052963.2A 2021-09-09 2021-09-09 Bidirectional privacy-preserving neural network model sharing method Pending CN113792339A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111052963.2A CN113792339A (en) 2021-09-09 2021-09-09 Bidirectional privacy-preserving neural network model sharing method


Publications (1)

Publication Number Publication Date
CN113792339A 2021-12-14

Family

ID=79182799

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111052963.2A Pending CN113792339A (en) 2021-09-09 Bidirectional privacy-preserving neural network model sharing method

Country Status (1)

Country Link
CN (1) CN113792339A (en)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109194507A (en) * 2018-08-24 2019-01-11 曲阜师范大学 The protection privacy neural net prediction method of non-interactive type
JP2020052813A (en) * 2018-09-27 2020-04-02 Kddi株式会社 Neural network, information addition device, learning method, information addition method and program
CN109299306A (en) * 2018-12-14 2019-02-01 央视国际网络无锡有限公司 Image search method and device
CN110222604A (en) * 2019-05-23 2019-09-10 复钧智能科技(苏州)有限公司 Target identification method and device based on shared convolutional neural networks
CN112182649A (en) * 2020-09-22 2021-01-05 上海海洋大学 Data privacy protection system based on safe two-party calculation linear regression algorithm
CN112116478A (en) * 2020-09-28 2020-12-22 中国建设银行股份有限公司 Method and device for processing suspicious bank anti-money-laundering report
CN112183730A (en) * 2020-10-14 2021-01-05 浙江大学 Neural network model training method based on shared learning
CN112883387A (en) * 2021-01-29 2021-06-01 南京航空航天大学 Privacy protection method for machine-learning-oriented whole process
CN112949837A (en) * 2021-04-13 2021-06-11 中国人民武装警察部队警官学院 Target recognition federal deep learning method based on trusted network
CN113268760A (en) * 2021-07-19 2021-08-17 浙江数秦科技有限公司 Distributed data fusion platform based on block chain

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MOON JUNSEOB, "Online Money Laundering in the U.S.: Recent Cases, the Related Law, and Psychological Deterrence of the Money-Launderer", Korean Criminal Psychology Review, vol. 11, no. 1, 23 June 2016 *
赵春雨, 韩学军, 丛恒斌, 闻邦椿, "A new generation algorithm for BP neural networks" (一种新型BP神经网络的生成算法), Gold Journal (黄金学报), no. 02, 30 June 1999 *
赵睿斌, 杨绍亮, 王毛路, 程浩, "Research on a government-affairs chain based on the commercial cryptography system for secure data sharing and exchange" (基于商密体系的政务链解决数据安全共享交换的研究), Information Security and Communications Privacy (信息安全与通信保密), no. 05, 10 May 2018 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination