CN113792339B - Neural network model sharing method for two-way privacy confidentiality - Google Patents

Neural network model sharing method for two-way privacy confidentiality

Info

Publication number
CN113792339B
CN113792339B
Authority
CN
China
Prior art keywords
coefficient
model
cooperative
reserved
connection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111052963.2A
Other languages
Chinese (zh)
Other versions
CN113792339A (en)
Inventor
张金琳
俞学劢
高航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Shuqin Technology Co Ltd
Original Assignee
Zhejiang Shuqin Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Shuqin Technology Co Ltd filed Critical Zhejiang Shuqin Technology Co Ltd
Priority to CN202111052963.2A priority Critical patent/CN113792339B/en
Publication of CN113792339A publication Critical patent/CN113792339A/en
Application granted granted Critical
Publication of CN113792339B publication Critical patent/CN113792339B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245Protecting personal data, e.g. for financial or medical purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioethics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to the technical field of machine learning, and in particular to a neural network model sharing method for two-way privacy confidentiality, comprising the following steps: establishing a cooperative node; splitting each input-layer connection into two connections, denoted the reserved connection and the cooperative connection; deleting the input-layer neurons to obtain the shared model; the model side sends the shared model to the cooperative node; the data source side generates a cooperation coefficient k1 and a retention coefficient k2; after receiving the tokens, the model side assigns a cooperation weight coefficient to the cooperative connection, calculates the retention weight coefficient, and sends it to the data source side; the data source side sends the cooperation value to the cooperative node, along with the product of the retention value and the retention weight coefficient; after obtaining the products of all retention values and retention weight coefficients, the cooperative node evaluates the shared model, obtains its prediction result, and sends the prediction result to the data source side. The invention has the following substantial effects: the privacy of the neural network model is maintained, and the privacy of the data is also protected.

Description

Neural network model sharing method for two-way privacy confidentiality
Technical Field
The invention relates to the technical field of machine learning, in particular to a neural network model sharing method for two-way privacy confidentiality.
Background
Machine learning has become a key computer technology underpinning today's profound changes in technology and economic development, and neural networks are one of its most important implementations. A neural network is a complex network system formed by a large number of simple, widely interconnected processing units called neurons; it reflects many fundamental features of human brain function and is a highly complex nonlinear dynamic learning system. Neural networks offer massive parallelism, distributed storage and processing, self-organization, adaptivity, and self-learning, and are particularly suited to processing imprecise and ambiguous information where many factors and conditions must be considered simultaneously. They have broad and attractive prospects in fields such as system identification, pattern recognition, and intelligent control. Whatever form machine learning takes, a large amount of high-quality data is the guarantee of high accuracy, and this is especially true for neural network models. Although enterprises and institutions have accumulated large amounts of data as informatization has progressed, that data is unevenly distributed. For reasons of competition and data privacy protection, enterprises today often lack sufficient data when training a neural network model. Even when the source of sample data is secured, many enterprises with the same business needs spend large amounts of time training neural network models with identical functions, wasting social resources. There is therefore a need for a neural network model sharing method that protects both data privacy and model privacy.
Chinese patent CN110222604A, published on September 10, 2019, discloses a method for quickly and efficiently identifying object attributes and determining target objects while maintaining high accuracy, comprising the steps of: step S1, preprocessing an image to be detected to obtain a preprocessed image; step S2, inputting the preprocessed image into a multi-attribute-recognition shared convolutional network model to obtain each object to be judged in the image to be detected and the object attributes of each such object under the attribute categories; and step S3, determining the target object from the object attributes and preset attributes, wherein the shared convolutional network model consists of an object detection part that detects the objects to be judged in the preprocessed image and an object attribute extraction part that extracts their attributes, the latter trained by alternately training attribute-acquisition network models containing fully connected layers corresponding to each attribute category. Although this achieves sharing of a convolutional network model, it can protect neither the privacy of the model nor the privacy of the images to be detected, and is therefore unsuitable for services involving private data.
Disclosure of Invention
The technical problem the invention aims to solve is the current lack of a neural network model sharing scheme capable of protecting privacy. A neural network model sharing method for two-way privacy confidentiality is therefore provided.
In order to solve this technical problem, the invention adopts the following technical scheme. A neural network model sharing method for two-way privacy confidentiality comprises the following steps: establishing a cooperative node; the model side splits each connection of the input-layer neurons into two connections, denoted the reserved connection and the cooperative connection respectively, whose weights are denoted the retention weight coefficient and the cooperation weight coefficient respectively; the input-layer neurons are deleted, and a reserved input neuron and a cooperative input neuron are established for the reserved connection and the cooperative connection respectively, obtaining the shared model; the model side sends the shared model to the cooperative node; the data source side generates a cooperation coefficient k1 and a retention coefficient k2, sends them to the model side, and transfers a number of tokens to the model side's account; after receiving the tokens, the model side assigns a cooperation weight coefficient to the cooperative connection; the retention weight coefficient is calculated according to the cooperation weight coefficient, the cooperation coefficient k1, the retention coefficient k2 and the weight coefficient of the original connection, and sent to the data source side; the data source side multiplies the input number x by the cooperation coefficient k1 and sends the result to the cooperative node as the cooperation value, which the cooperative node uses as the value of the cooperative input neuron; the data source side multiplies the input number x by the retention coefficient k2 to obtain the retention value, and sends the product of the retention value and the retention weight coefficient to the cooperative node; after obtaining the products of retention value and retention weight coefficient corresponding to the original connections of all input-layer neurons, the cooperative node evaluates the shared model, obtains its prediction result, and sends the prediction result to the data source side.
Preferably, the data source side independently generates a cooperation coefficient ki1 and a retention coefficient ki2 for each connection involving the input layer, forming a coefficient pair (ki1, ki2), where i denotes the serial number of the connection; the set of coefficient pairs (ki1, ki2) is transmitted to the model side; the model side generates a corresponding cooperation weight coefficient wi_c for each coefficient pair (ki1, ki2), calculates the retention weight coefficient wi_r, and sends the set of retention weight coefficients wi_r to the data source side.
Preferably, the model side establishes a history table recording, for each connection i involving the input layer, the received cooperation coefficient ki1 and retention coefficient ki2 together with the correspondingly generated cooperation weight coefficient wi_c and retention weight coefficient wi_r; when the same cooperation coefficient ki1 and retention coefficient ki2 recorded in the history table are received again for connection i, the same wi_c recorded in the history table is generated for the cooperative connection, and the retention weight coefficient wi_r recorded in the history table is sent to the data source side.
Preferably, the model side adds a random interference quantity to the weight coefficient of the original connection, the ratio of the interference quantity to that weight coefficient being smaller than a preset threshold; the retention weight coefficient is then calculated according to the cooperation weight coefficient, the cooperation coefficient k1, the retention coefficient k2 and the perturbed weight coefficient, and sent to the data source side.
Preferably, a plurality of cooperative nodes is established, the number of cooperative nodes matching the number of layers of the neural network model; each layer of the shared model is sent to the corresponding cooperative node; the product of the retention value and the retention weight coefficient is sent to the cooperative node corresponding to the input layer; after a cooperative node obtains the outputs of its layer of neurons of the shared model, it sends them to the cooperative node corresponding to the next layer.
A neural network model sharing method for two-way privacy confidentiality comprises the following steps: establishing cooperative nodes; the model side splits each connection of the input-layer neurons into several connections, denoted one reserved connection and several cooperative connections, whose weights are denoted the retention weight coefficient and the cooperation weight coefficients respectively; the input-layer neurons are deleted, and reserved and cooperative input neurons are established for the reserved and cooperative connections respectively, obtaining the shared model; the model side sends the shared model to the cooperative nodes; the data source side generates several cooperation coefficients kj1 and a retention coefficient k2, sends them to the model side, and transfers a number of tokens to the model side's account; after receiving the tokens, the model side assigns a cooperation weight coefficient w_jc to each cooperative connection; the retention weight coefficient w_r is calculated according to the cooperation weight coefficients, the cooperation coefficients kj1, the retention coefficient k2 and the weight coefficient of the original connection, and sent to the data source side; the data source side multiplies the input number x by each cooperation coefficient kj1 and sends the results to the cooperative nodes as cooperation values, which each cooperative node uses as the value of the corresponding cooperative input neuron; the data source side multiplies the input number x by the retention coefficient k2 to obtain the retention value, and sends the product of the retention value and the retention weight coefficient w_r to a selected cooperative node; all cooperative nodes send the products of their cooperation values and the corresponding cooperation weight coefficients w_jc to the selected cooperative node; the selected cooperative node evaluates the shared model, obtains the prediction result, and sends it to the data source side.
Preferably, the model side establishes a history table recording the received cooperation coefficients kj1 and retention coefficient k2 together with the correspondingly generated cooperation weight coefficients w_jc and retention weight coefficient w_r; when the same cooperation coefficients kj1 and retention coefficient k2 recorded in the history table are received again, the same cooperation weight coefficients w_jc recorded in the history table are generated for the cooperative connections, and the retention weight coefficient w_r recorded in the history table is sent to the data source side.
The invention has the following substantial effects: by splitting each connection of the input-layer neurons into two connections and forming the equivalent weight coefficient jointly from the cooperation coefficient, the retention coefficient, the cooperation weight coefficient and the retention weight coefficient of the two connections, the input-layer weights of the original neural network model are hidden, so the data source side cannot obtain the complete neural network model; at the same time, the data of the data source side are split into cooperation values and retention values, so the data are hidden as well. During sharing of the neural network model, the privacy of the model is maintained and the privacy of the data is protected.
Drawings
Fig. 1 is a flowchart illustrating a neural network model sharing method according to an embodiment.
Fig. 2 is a schematic diagram of a neural network model to be shared according to an embodiment.
Fig. 3 is a schematic diagram of a neuron connection according to an embodiment.
Fig. 4 is a schematic diagram of a shared model according to an embodiment.
Fig. 5 is a flowchart illustrating a neural network model sharing method according to the second embodiment.
Fig. 6 is a schematic diagram illustrating connection splitting according to the second embodiment.
Reference numerals: 11. neural network model; 12. shared model; 13. reserved connection; 14. cooperative connection.
Detailed Description
The following description of the embodiments of the present invention will be made with reference to the accompanying drawings.
Embodiment one:
Referring to fig. 1, a neural network model sharing method for two-way privacy confidentiality comprises the following steps:
Step A01) establishing a cooperative node;
Step A02) the model side splits each connection of the input-layer neurons into two connections, denoted the reserved connection and the cooperative connection respectively, whose weights are denoted the retention weight coefficient and the cooperation weight coefficient respectively;
Step A03) the input-layer neurons are deleted, and a reserved input neuron and a cooperative input neuron are established for the reserved connection and the cooperative connection respectively, obtaining the shared model;
Step A04) the model side sends the shared model to the cooperative node;
Step A05) the data source side generates a cooperation coefficient k1 and a retention coefficient k2, sends them to the model side, and transfers a number of tokens to the model side's account;
Step A06) after receiving the tokens, the model side assigns a cooperation weight coefficient to the cooperative connection;
Step A07) the retention weight coefficient is calculated according to the cooperation weight coefficient, the cooperation coefficient k1, the retention coefficient k2 and the weight coefficient of the original connection, and sent to the data source side;
Step A08) the data source side multiplies the input number x by the cooperation coefficient k1 and sends the result to the cooperative node as the cooperation value, which the cooperative node uses as the value of the cooperative input neuron;
Step A09) the data source side multiplies the input number x by the retention coefficient k2 to obtain the retention value, and sends the product of the retention value and the retention weight coefficient to the cooperative node;
Step A10) after obtaining the products of retention value and retention weight coefficient corresponding to the original connections of all input-layer neurons, the cooperative node evaluates the shared model, obtains its prediction result, and sends the prediction result to the data source side.
The data source side independently generates a cooperation coefficient ki1 and a retention coefficient ki2 for each connection involving the input layer, forming a coefficient pair (ki1, ki2), where i denotes the serial number of the connection; the set of coefficient pairs (ki1, ki2) is transmitted to the model side; the model side generates a corresponding cooperation weight coefficient wi_c for each coefficient pair (ki1, ki2), calculates the retention weight coefficient wi_r, and sends the set of retention weight coefficients wi_r to the data source side.
Referring to fig. 2, the neural network model 11 shared in this embodiment has two hidden layers. The input layer is conventionally numbered layer 0, the hidden layers are layers 1 and 2, and the last layer is the output layer. The output layer has two neurons, i.e. it can output two fields. The input layer has 3 neurons, corresponding to 3 input numbers denoted x1, x2 and x3. The input to a hidden-layer neuron is the weighted sum of the values of the connected input-layer neurons, substituted into the activation function. The output of the 1st neuron of layer 1 is y1 = Sigmoid(Σ w1i × xi + b1), where the subscript 1 denotes the 1st neuron of layer 1 and i runs from 1 to 3. The Sigmoid function is the usual activation function of the neural network model 11.
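As a concrete illustration of the layer computation just described, the following minimal Python sketch evaluates one hidden neuron; the weights, inputs and bias are made-up values for illustration, not taken from the patent.

```python
import math

def sigmoid(z: float) -> float:
    """The logistic activation function used by the model."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron_output(weights: list[float], inputs: list[float], bias: float) -> float:
    """y = Sigmoid(sum_i(w_i * x_i) + b): the output of one hidden-layer neuron."""
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

# Illustrative values: 3 input-layer neurons feeding the 1st neuron of layer 1.
y1 = neuron_output(weights=[0.9, -0.4, 0.2], inputs=[16.0, 3.0, 5.0], bias=0.1)
```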
Taking the 1st neuron of layer 1 as an example, it involves 3 connections, as shown in fig. 3: it is connected to the 3 input-layer neurons with corresponding weight coefficients w11, w12 and w13, and each of these 3 input-layer connections is split into 2 connections. As shown in fig. 4, the 3 connections of the 1st neuron of layer 1 are split into reserved connections 13 and cooperative connections 14, so its connections increase from 3 to 6, with corresponding output y1 = Sigmoid(Σ w1_ri × xi_r + Σ w1_ci × xi_c + b1). Taking the 1st input-layer neuron as an example, the original input is x1 and its contribution to the 1st neuron of layer 1 equals x1 × w11. After splitting into reserved connection 13 and cooperative connection 14, with cooperation weight coefficient w1_c1, cooperation coefficient k1, retention coefficient k2 and retention weight coefficient w1_r1, the contribution to the 1st neuron of layer 1 equals x1_r × w1_r1 + x1_c × w1_c1 = k2 × x1 × w1_r1 + k1 × x1 × w1_c1, so the equivalent weight coefficient is k2 × w1_r1 + k1 × w1_c1, and it must equal the weight coefficient w11 of the original connection, namely: w11 = k2 × w1_r1 + k1 × w1_c1. The cooperation coefficient k1 and the retention coefficient k2 are generated by the data source side, and the cooperation weight coefficient w1_c1 is generated by the model side. The retention weight coefficient w1_r1 is solved from the equation w11 = k2 × w1_r1 + k1 × w1_c1.
Since the model side never sees the cooperation value, it cannot learn the specific value of x1, so the input data are kept secret. Similarly, the data source side does not know the cooperation weight coefficient w1_c1, so it cannot obtain the equivalent weight coefficient, i.e. the weight coefficient w11 of the original connection, and part of the connection weights of the neural network model 11 are kept secret. The data source side must apply to the model side for the retention weight coefficient w1_r1 for each use; if the model side does not provide w1_r1, the data source side cannot use the shared model 12 to obtain correct prediction results, and thus cannot use the shared model 12 independently of the model side. At the same time, the model side earns revenue from every sharing of the neural network model 11. If w1i = k2 × w1_ri + k1 × w1_ci, then Sigmoid(Σ w1_ri × xi_r + Σ w1_ci × xi_c + b1) = Sigmoid(Σ w1_ri × k2 × xi + Σ w1_ci × k1 × xi + b1) = Sigmoid(Σ w1i × xi + b1), i.e. splitting the connections into cooperative connections 14 and reserved connections 13 does not affect the prediction result of the neural network model 11. The calculation of the retention weight coefficient involves only addition and multiplication, and homomorphic encryption supporting addition and multiplication is prior art in this field and is not described here in detail.
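The patent treats homomorphic encryption as prior art and does not prescribe a concrete scheme or message encoding. One possible realization, sketched below under the assumption that the data source side sends Enc(1/k2) and Enc(k1/k2) under its own key (this encoding is an assumption, not from the source), uses the open-source python-paillier (`phe`) library, whose ciphertexts support addition and multiplication by plaintext scalars; for concreteness it reuses the numbers of the worked example that follows.

```python
# pip install phe  (python-paillier, an additively homomorphic cryptosystem)
from phe import paillier

# --- data source side: k1 and k2 stay secret ---
pub, priv = paillier.generate_paillier_keypair(n_length=2048)
k1, k2 = 0.6, 0.8                      # cooperation and retention coefficients
enc_inv_k2 = pub.encrypt(1.0 / k2)     # Enc(1/k2), sent to the model side
enc_k1_div_k2 = pub.encrypt(k1 / k2)   # Enc(k1/k2), sent to the model side

# --- model side: w11 and w1_c1 stay secret ---
w11, w1_c1 = 0.9, 0.5                  # original weight, cooperation weight coefficient
# w1_r1 = (w11 - k1*w1_c1)/k2 = w11*(1/k2) - w1_c1*(k1/k2), computed on ciphertexts
enc_w1_r1 = enc_inv_k2 * w11 - enc_k1_div_k2 * w1_c1   # sent back to the data source side

# --- data source side ---
w1_r1 = priv.decrypt(enc_w1_r1)        # 0.75 with these illustrative numbers
```

Under this encoding the data source side learns only the single linear combination w11/k2 - w1_c1 × k1/k2, from which neither w11 nor w1_c1 can be isolated, which matches the secrecy argument above.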
For example, if the value of the input neuron is x1 = 16 and the weight coefficient of the original connection is w11 = 0.9, then after splitting into cooperative connection 14 and reserved connection 13, the data source side generates the cooperation coefficient k1 = 0.6 and the retention coefficient k2 = 0.8. The model side generates the cooperation weight coefficient w1_c1 = 0.5 for the cooperative connection 14; then from the equation w11 = k2 × w1_r1 + k1 × w1_c1, i.e. 0.9 = 0.8 × w1_r1 + 0.6 × 0.5, the retention weight coefficient w1_r1 = 0.75 is obtained and sent to the data source side. The weighted sum of the cooperation value and the retention value is 16 × 0.8 × 0.75 + 16 × 0.6 × 0.5 = 14.4 = 16 × 0.9, exactly equivalent to the effect of the original connection.
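The worked example can be checked mechanically; a minimal sketch (the function and variable names are mine, not the patent's):

```python
def retention_weight(w_orig: float, k1: float, k2: float, w_c: float) -> float:
    """Solve w_orig = k2*w_r + k1*w_c for the retention weight coefficient w_r."""
    return (w_orig - k1 * w_c) / k2

x, w11 = 16.0, 0.9       # input number and original connection weight
k1, k2 = 0.6, 0.8        # generated by the data source side
w_c = 0.5                # cooperation weight coefficient, generated by the model side
w_r = retention_weight(w11, k1, k2, w_c)   # -> 0.75

coop_value = k1 * x                # cooperation value, sent to the cooperative node
retained = (k2 * x) * w_r          # product of retention value and retention weight coefficient
assert abs(coop_value * w_c + retained - w11 * x) < 1e-9   # 4.8 + 9.6 == 14.4
```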
The model side establishes a history table recording, for each connection i involving the input layer, the received cooperation coefficient ki1 and retention coefficient ki2 together with the correspondingly generated cooperation weight coefficient wi_c and retention weight coefficient wi_r; when the same cooperation coefficient ki1 and retention coefficient ki2 recorded in the history table are received again for connection i, the same wi_c recorded in the history table is generated for the cooperative connection, and the retention weight coefficient wi_r recorded in the history table is sent to the data source side.
The model side adds a random interference quantity to the weight coefficient of the original connection, the ratio of the interference quantity to that weight coefficient being smaller than a preset threshold; the retention weight coefficient is calculated according to the cooperation weight coefficient, the cooperation coefficient k1, the retention coefficient k2 and the perturbed weight coefficient, and sent to the data source side. The data source side therefore cannot build a multivariate system of equations by calling the shared model 12 repeatedly and solve for the equivalent weight coefficients. And because, via the history table, identical coefficients always return the same retention weight coefficient, repeated queries yield the same equation rather than new ones, so no system of equations can be constructed and the privacy of the shared model 12 is safeguarded.
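A sketch of how the history table and the interference quantity could be combined on the model side; the class and method names are illustrative assumptions, since the patent prescribes the behavior rather than an API.

```python
import random

class ModelSide:
    """History table plus bounded random perturbation of the original weights (a sketch)."""

    def __init__(self, original_weights: dict[int, float], noise_ratio: float = 0.01):
        self.w = original_weights              # weight of the original connection, per connection i
        self.noise_ratio = noise_ratio         # preset threshold on |interference| / |weight|
        self.history: dict[tuple, tuple] = {}  # (i, ki1, ki2) -> (wi_c, wi_r)

    def issue_coefficients(self, i: int, ki1: float, ki2: float) -> tuple:
        key = (i, ki1, ki2)
        if key in self.history:                # repeated pair: return identical values, so the
            return self.history[key]           # caller gains no new equation about the weights
        noisy_w = self.w[i] * (1 + random.uniform(-self.noise_ratio, self.noise_ratio))
        wi_c = random.uniform(-1.0, 1.0)       # cooperation weight coefficient, chosen freely
        wi_r = (noisy_w - ki1 * wi_c) / ki2    # retention weight coefficient
        self.history[key] = (wi_c, wi_r)
        return wi_c, wi_r
```

Because a repeated (ki1, ki2) pair reproduces the cached (wi_c, wi_r), repeated calls yield the same equation rather than new ones, and the perturbation keeps even a single equation from pinning down the true weight exactly.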
Preferably, this embodiment establishes a plurality of cooperative nodes, their number matching the number of layers of the neural network model. Each layer of the shared model is sent to the corresponding cooperative node; the products of retention values and retention weight coefficients are sent to the cooperative node corresponding to the input layer; after a cooperative node obtains the outputs of its layer of neurons of the shared model, it sends them to the cooperative node corresponding to the next layer. Since no single node then holds the complete model, the privacy of the neural network model is further protected.
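The per-layer distribution can be pictured as a chain of nodes, each evaluating its own layer and forwarding the result; the sketch below assumes simple synchronous forwarding and elides transport, node discovery, and the input-layer splitting covered earlier.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

class CooperativeNode:
    """Holds one layer of the shared model and forwards its outputs downstream (a sketch)."""

    def __init__(self, weights, biases, next_node=None):
        self.weights = weights        # one row of weights per neuron in this layer
        self.biases = biases
        self.next_node = next_node    # the cooperative node holding the next layer, if any

    def evaluate(self, layer_input):
        out = [sigmoid(sum(w * x for w, x in zip(row, layer_input)) + b)
               for row, b in zip(self.weights, self.biases)]
        # forward to the next layer's node, or return the prediction from the last node
        return self.next_node.evaluate(out) if self.next_node else out

# Two-node chain: a hidden layer of 2 neurons feeding an output layer of 1 neuron.
output_node = CooperativeNode(weights=[[0.7, -0.2]], biases=[0.0])
hidden_node = CooperativeNode(weights=[[0.3, 0.5, -0.1], [0.8, -0.6, 0.4]],
                              biases=[0.1, -0.2], next_node=output_node)
prediction = hidden_node.evaluate([14.4, 2.0, 3.0])  # inputs already combined as above
```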
In this embodiment, by splitting each connection of the input-layer neurons into two connections and forming the equivalent weight coefficient jointly from the cooperation coefficient, the retention coefficient, the cooperation weight coefficient and the retention weight coefficient of the two connections, the input-layer weights of the original neural network model are hidden, so the data source side cannot obtain the complete neural network model; the data of the data source side are split into cooperation values and retention values, so the data are hidden as well. During sharing of the neural network model, the privacy of the model is maintained and the privacy of the data is protected.
Embodiment two:
Compared with embodiment one, this embodiment further refines the splitting of the connections. Referring to fig. 5, the method comprises the following steps:
Step B01), establishing a cooperative node;
Step B02) the model side splits each connection of the input-layer neurons into several connections, denoted one reserved connection and several cooperative connections, whose weights are denoted the retention weight coefficient and the cooperation weight coefficients respectively;
Step B03) the input-layer neurons are deleted, and reserved and cooperative input neurons are established for the reserved and cooperative connections respectively, obtaining the shared model;
Step B04) the model side sends the shared model to the cooperative nodes;
Step B05) the data source side generates several cooperation coefficients kj1 and a retention coefficient k2, sends them to the model side, and transfers a number of tokens to the model side's account;
Step B06) after receiving the tokens, the model side assigns a cooperation weight coefficient w_jc to each cooperative connection;
Step B07) the retention weight coefficient w_r is calculated according to the cooperation weight coefficients, the cooperation coefficients kj1, the retention coefficient k2 and the weight coefficient of the original connection, and sent to the data source side;
Step B08) the data source side multiplies the input number x by each cooperation coefficient kj1 and sends the results to the cooperative nodes as cooperation values, which each cooperative node uses as the value of the corresponding cooperative input neuron;
Step B09) the data source side multiplies the input number x by the retention coefficient k2 to obtain the retention value, and sends the product of the retention value and the retention weight coefficient w_r to a selected cooperative node;
Step B10) all cooperative nodes send the products of their cooperation values and the corresponding cooperation weight coefficients w_jc to the selected cooperative node; the selected cooperative node evaluates the shared model, obtains the prediction result, and sends it to the data source side.
The difference from embodiment one is that embodiment one splits each connection into exactly 2 connections, while this embodiment splits each connection into 2 or more. As shown in fig. 6, a connection is split into 4 connections: 1 reserved connection 13 and 3 cooperative connections 14. The steps performed are the same as with 2 connections, except that several cooperation coefficients and cooperation values must be assigned.
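Generalizing the two-way split only moves the extra cooperation terms to the other side of the equation before dividing by k2; a minimal sketch with illustrative numbers:

```python
import random

def multi_split(w_orig: float, k2: float, k1s: list[float]) -> tuple[float, list[float]]:
    """Split one connection into 1 reserved + len(k1s) cooperative connections so that
    w_orig = k2*w_r + sum_j(k1s[j] * w_cs[j])."""
    w_cs = [random.uniform(-1.0, 1.0) for _ in k1s]   # cooperation weight coefficients
    w_r = (w_orig - sum(k * w for k, w in zip(k1s, w_cs))) / k2
    return w_r, w_cs

x, w11, k2 = 16.0, 0.9, 0.8
k1s = [0.6, 0.3, 1.1]                    # three cooperation coefficients kj1 (illustrative)
w_r, w_cs = multi_split(w11, k2, k1s)

total = (k2 * x) * w_r + sum((k * x) * w for k, w in zip(k1s, w_cs))
assert abs(total - w11 * x) < 1e-9       # still equivalent to the original connection
```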
Embodiment III:
In a neural network model sharing method for two-way privacy confidentiality, bank A uses its own business data to train a neural network model 11 with an anti-money-laundering function, i.e. one that outputs, from the account flow data of a depositor, the predicted probability that the depositor's account is involved in money-laundering activity. Bank A has operated longer and at larger scale, so it has enough data to train the neural network model 11 to a prediction accuracy that meets practical requirements. Bank B is smaller, its data volume is insufficient, and it can hardly train a satisfactory neural network model 11 on its own; yet bank B must also carry out anti-money-laundering tasks. Likewise, many more banks of a scale similar to bank B that independently build and train anti-money-laundering models face the same shortage of data. Banks A and B therefore share the neural network model 11 using the scheme described in the embodiments above, which resolves bank B's low accuracy and efficiency in anti-money-laundering work.
The anti-money-laundering model must not leak; otherwise a lawbreaker could operate deposit accounts specifically against it and try to evade its detection. Bank A splits the connections involving the input layer of the trained neural network model 11 into cooperative connections 14 and reserved connections 13, establishes cooperative and reserved input neurons for them respectively, and deletes the original input layer. The cooperative input neurons and the reserved input neurons are sent to the cooperative node and to bank B respectively. Bank B generates a proportional coefficient k for each input number and sends the homomorphically encrypted coefficients k to the smart contract. After obtaining the encrypted coefficients, bank A first generates the cooperation weight coefficients and sends them to the cooperative node; it then homomorphically encrypts the cooperation weight coefficients and, computing under homomorphic encryption, obtains the homomorphic encryption of the retention weight coefficients, which it sends to the smart contract. The smart contract decrypts them, re-encrypts them with bank B's public key, and sends them to bank B. Having obtained the retention weight coefficients, bank B calculates the cooperation values and retention values, sends the cooperation values to the cooperative node, and sends the products of retention values and retention weight coefficients to the cooperative node. The cooperative node now has enough information to evaluate the shared model 12 and obtain its prediction result, i.e. the probability that the depositor corresponding to bank B's input data is engaged in money laundering, providing guidance for anti-money-laundering work.
The above-described embodiments are only preferred embodiments of the present invention and are not limiting in any way; other variations and modifications may be made without departing from the technical scheme set forth in the claims.

Claims (6)

1. A neural network model sharing method for two-way privacy confidentiality is characterized in that,
The method comprises the following steps:
Establishing a cooperative node;
the model side splits each connection of the input-layer neurons into two connections, denoted the reserved connection and the cooperative connection respectively, whose weights are denoted the retention weight coefficient and the cooperation weight coefficient respectively;
the input-layer neurons are deleted, and a reserved input neuron and a cooperative input neuron are established for the reserved connection and the cooperative connection respectively, obtaining the shared model;
the model side sends the shared model to the cooperative node;
the data source side generates a cooperation coefficient k1 and a retention coefficient k2, sends the cooperation coefficient k1 and the retention coefficient k2 to the model side, and transfers a plurality of tokens to an account of the model side;
after receiving the tokens, the model side assigns a cooperation weight coefficient to the cooperative connection;
the retention weight coefficient is calculated according to the cooperation weight coefficient, the cooperation coefficient k1, the retention coefficient k2 and the weight coefficient of the original connection, and sent to the data source side;
the data source side multiplies the input number x by the cooperation coefficient k1 and sends the result to the cooperative node as the cooperation value, which the cooperative node uses as the value of the cooperative input neuron;
the data source side multiplies the input number x by the retention coefficient k2 to obtain the retention value, and sends the product of the retention value and the retention weight coefficient to the cooperative node;
after obtaining the products of retention value and retention weight coefficient corresponding to the original connections of all input-layer neurons, the cooperative node evaluates the shared model, obtains the prediction result of the shared model, and sends the prediction result to the data source side;
the weight coefficient of the original connection = the retention weight coefficient × k2 + the cooperation weight coefficient × k1;
a plurality of cooperative nodes is established, the number of cooperative nodes matching the number of layers of the neural network model;
each layer of the shared model is sent to the corresponding cooperative node;
the product of the retention value and the retention weight coefficient is sent to the cooperative node corresponding to the input layer;
after a cooperative node obtains the outputs of its layer of neurons of the shared model, it sends the outputs to the cooperative node corresponding to the next layer.
2. The method for sharing a neural network model with two-way privacy security as claimed in claim 1,
the data source side independently generates a cooperation coefficient ki1 and a retention coefficient ki2 for each connection involving the input layer, forming a coefficient pair (ki1, ki2), where i denotes the serial number of the connection;
the set of coefficient pairs (ki1, ki2) is transmitted to the model side;
the model side generates a corresponding cooperation weight coefficient wi_c for each coefficient pair (ki1, ki2), calculates the retention weight coefficient wi_r, and sends the set of retention weight coefficients wi_r to the data source side.
3. The method for sharing a neural network model with two-way privacy security as claimed in claim 2, wherein,
the model side establishes a history table recording, for each connection i involving the input layer, the received cooperation coefficient ki1 and retention coefficient ki2 together with the correspondingly generated cooperation weight coefficient wi_c and retention weight coefficient wi_r;
when the same cooperation coefficient ki1 and retention coefficient ki2 recorded in the history table are received again for connection i, the same wi_c recorded in the history table is generated for the cooperative connection;
and the retention weight coefficient wi_r recorded in the history table is sent to the data source side.
4. A method for sharing a two-way privacy-preserving neural network model as claimed in any one of claims 1 to 3,
the model side adds a random interference quantity to the weight coefficient of the original connection, the ratio of the interference quantity to that weight coefficient being smaller than a preset threshold; the retention weight coefficient is calculated according to the cooperation weight coefficient, the cooperation coefficient k1, the retention coefficient k2 and the perturbed weight coefficient, and sent to the data source side.
5. A neural network model sharing method for two-way privacy confidentiality is characterized in that,
The method comprises the following steps:
Establishing a cooperative node;
the model side splits each connection of the input-layer neurons into several connections, denoted one reserved connection and several cooperative connections, whose weights are denoted the retention weight coefficient and the cooperation weight coefficients respectively;
the input-layer neurons are deleted, and reserved and cooperative input neurons are established for the reserved and cooperative connections respectively, obtaining the shared model;
the model side sends the shared model to the cooperative nodes;
the data source side generates several cooperation coefficients kj1 and a retention coefficient k2, sends the cooperation coefficients kj1 and the retention coefficient k2 to the model side, and transfers a plurality of tokens to an account of the model side;
after receiving the tokens, the model side assigns a cooperation weight coefficient w_jc to each cooperative connection;
the retention weight coefficient w_r is calculated according to the cooperation weight coefficients, the cooperation coefficients kj1, the retention coefficient k2 and the weight coefficient of the original connection, and sent to the data source side;
the data source side multiplies the input number x by each cooperation coefficient kj1 and sends the results to the cooperative nodes as cooperation values, which each cooperative node uses as the value of the corresponding cooperative input neuron;
the data source side multiplies the input number x by the retention coefficient k2 to obtain the retention value, and sends the product of the retention value and the retention weight coefficient w_r to a selected cooperative node;
all cooperative nodes send the products of their cooperation values and the corresponding cooperation weight coefficients w_jc to the selected cooperative node;
the selected cooperative node evaluates the shared model, obtains the prediction result, and sends the prediction result to the data source side;
the weight coefficient of the original connection = the retention weight coefficient w_r × k2 + the sum over j of the cooperation weight coefficients w_jc × kj1;
a plurality of cooperative nodes is established, the number of cooperative nodes matching the number of layers of the neural network model;
each layer of the shared model is sent to the corresponding cooperative node;
the product of the retention value and the retention weight coefficient is sent to the cooperative node corresponding to the input layer;
after a cooperative node obtains the outputs of its layer of neurons of the shared model, it sends the outputs to the cooperative node corresponding to the next layer.
6. The method for sharing a neural network model with two-way privacy security as claimed in claim 5,
the model side establishes a history table recording the received cooperation coefficients kj1 and retention coefficient k2 together with the correspondingly generated cooperation weight coefficients w_jc and retention weight coefficient w_r;
when the same cooperation coefficients kj1 and retention coefficient k2 recorded in the history table are received again, the same cooperation weight coefficients w_jc recorded in the history table are generated for the cooperative connections;
and the retention weight coefficient w_r recorded in the history table is transmitted to the data source side.
CN202111052963.2A 2021-09-09 2021-09-09 Neural network model sharing method for two-way privacy confidentiality Active CN113792339B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111052963.2A CN113792339B (en) 2021-09-09 2021-09-09 Neural network model sharing method for two-way privacy confidentiality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111052963.2A CN113792339B (en) 2021-09-09 2021-09-09 Neural network model sharing method for two-way privacy confidentiality

Publications (2)

Publication Number Publication Date
CN113792339A CN113792339A (en) 2021-12-14
CN113792339B (en) 2024-06-14

Family

ID=79182799

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111052963.2A Active CN113792339B (en) 2021-09-09 2021-09-09 Neural network model sharing method for two-way privacy confidentiality

Country Status (1)

Country Link
CN (1) CN113792339B (en)

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109194507B (en) * 2018-08-24 2022-02-18 曲阜师范大学 Non-interactive privacy protection neural network prediction method
JP6953376B2 (en) * 2018-09-27 2021-10-27 Kddi株式会社 Neural networks, information addition devices, learning methods, information addition methods, and programs
CN109299306B (en) * 2018-12-14 2021-09-07 央视国际网络无锡有限公司 Image retrieval method and device
CN110222604B (en) * 2019-05-23 2023-07-28 复钧智能科技(苏州)有限公司 Target identification method and device based on shared convolutional neural network
CN112182649B (en) * 2020-09-22 2024-02-02 上海海洋大学 Data privacy protection system based on safe two-party calculation linear regression algorithm
CN112116478A (en) * 2020-09-28 2020-12-22 中国建设银行股份有限公司 Method and device for processing suspicious bank anti-money-laundering report
CN112183730B (en) * 2020-10-14 2022-05-13 浙江大学 Neural network model training method based on shared learning
CN112883387A (en) * 2021-01-29 2021-06-01 南京航空航天大学 Privacy protection method for machine-learning-oriented whole process
CN112949837B (en) * 2021-04-13 2022-11-11 中国人民武装警察部队警官学院 Target recognition federal deep learning method based on trusted network
CN113268760B (en) * 2021-07-19 2021-11-02 浙江数秦科技有限公司 Distributed data fusion platform based on block chain

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Online Money Laundering in the U. S.: Recent Cases, the Related Law, and Psychological Deterrence of the Money-Launderer";Moon Junseob;Korean Criminal Psychology Review;20160623;第11卷(第1期);全文 *
基于商密体系的政务链解决数据安全共享交换的研究;赵睿斌;杨绍亮;王毛路;程浩;;信息安全与通信保密;20180510(第05期);全文 *

Also Published As

Publication number Publication date
CN113792339A (en) 2021-12-14

Similar Documents

Publication Publication Date Title
US20230078061A1 (en) Model training method and apparatus for federated learning, device, and storage medium
CN107766540A (en) A kind of block chain network of subregion and its method for realizing partitioned storage
CN113689003B (en) Mixed federal learning framework and method for safely removing third party
Torres Berru et al. Artificial intelligence techniques to detect and prevent corruption in procurement: A systematic literature review
CN114677200B (en) Business information recommendation method and device based on multiparty high-dimension data longitudinal federation learning
CN113420335B (en) Block chain-based federal learning system
CN113420232A (en) Privacy protection-oriented graph neural network federal recommendation method
CN113449048A (en) Data label distribution determining method and device, computer equipment and storage medium
CN114372871A (en) Method and device for determining credit score value, electronic device and storage medium
CN114492605A (en) Federal learning feature selection method, device and system and electronic equipment
CN110610098A (en) Data set generation method and device
Khaldi et al. Forecasting of bitcoin daily returns with eemd-elman based model
CN114282692A (en) Model training method and system for longitudinal federal learning
CN114036581A (en) Privacy calculation method based on neural network model
CN113792339B (en) Neural network model sharing method for two-way privacy confidentiality
Manisha et al. CBRC: a novel approach for cancelable biometric template generation using random permutation and Chinese Remainder Theorem
Breskuvienė et al. Categorical feature encoding techniques for improved classifier performance when dealing with imbalanced data of fraudulent transactions
CN113259084A (en) Method and device for pre-warning of mortgage risk of movable property, computer equipment and storage medium
Kortoçi et al. Federated split gans
CN116186629A (en) Financial customer classification and prediction method and device based on personalized federal learning
Pan et al. 2SFGL: a simple and robust protocol for graph-based fraud detection
Sherly et al. A improved incremental and interactive frequent pattern mining techniques for market basket analysis and fraud detection in distributed and parallel systems
Liu et al. Federated Digital Gateway: Methodologies, Tools, and Applications
Sahoo et al. Perturbation-Based Fuzzified K-Mode Clustering Method for Privacy Preserving Recommender System
Aweke et al. Machine Learning based Network Security in Healthcare System

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant