CN116545734A - Matrix decomposition method based on security aggregation and key exchange - Google Patents
- Publication number
- CN116545734A (application number CN202310620692.9A)
- Authority
- CN
- China
- Prior art keywords
- client
- matrix
- gradient
- representing
- embedding
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/06—Network architectures or network communication protocols for network security for supporting key management in a packet data network
- H04L63/061—Network architectures or network communication protocols for network security for supporting key management in a packet data network for key exchange, e.g. in peer-to-peer networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/08—Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
- H04L9/0816—Key establishment, i.e. cryptographic processes or cryptographic protocols whereby a shared secret becomes available to two or more parties, for subsequent use
- H04L9/0838—Key agreement, i.e. key establishment technique in which a shared key is derived by parties as a function of information contributed by, or associated with, each of these
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/08—Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
- H04L9/0861—Generation of secret information including derivation or calculation of cryptographic keys or passwords
- H04L9/0869—Generation of secret information including derivation or calculation of cryptographic keys or passwords involving random numbers or seeds
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/08—Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
- H04L9/0891—Revocation or update of secret information, e.g. encryption key update or rekeying
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/40—Network security protocols
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L2209/00—Additional information or applications relating to cryptographic mechanisms or cryptographic arrangements for secret or secure communication H04L9/00
- H04L2209/08—Randomization, e.g. dummy operations or using noise
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/50—Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate
Abstract
The invention discloses a matrix decomposition method based on secure aggregation and key exchange, which securely aggregates the gradient of the item matrix I of the matrix-decomposition objective under a federated learning framework, providing a new idea for enhancing data security in federated learning. The locally computed gradients ∇U_X and the securely aggregated gradient of I serve as the training signals of the recommendation model (i.e., the federated learning model), ensuring that user data does not leave the local device and making the recommendation-model training process safer. Masking and noising the gradients effectively avoids the leakage of source-data information that exposing the true gradients would cause. Compared with the homomorphic encryption technology adopted in the background art, the secure-aggregation-based gradient summarization avoids the high computational complexity of gradient encryption and decryption, computes faster, and improves the training speed of the recommendation model.
Description
Technical Field
The invention relates to the technical field of information processing, in particular to a matrix decomposition method based on secure aggregation and key exchange.
Background
Current secure matrix decomposition algorithms are mainly distributed matrix decomposition algorithms that ensure the security of transmitted information through encryption technologies such as Paillier homomorphic encryption, avoiding leakage of users' local data. The implementation steps of the existing secure matrix decomposition algorithm mainly comprise:
1. The server initializes an item matrix I, each client locally initializes its own user matrix U, a public key is shared between the server and the clients, and the private key is shared only among the clients;
2. The server encrypts I with the public key to obtain the ciphertext C_I and broadcasts it to all clients;
3. Each client, upon receiving C_I, decrypts it with the local private key to recover the real item matrix I, computes the gradient of its own U and updates U, then computes the gradient G of I from the updated U and encrypts it to obtain the ciphertext C_G;
4. The server collects each C_G and performs the update C_I = C_I - C_G, then broadcasts the updated C_I to all clients;
5. Steps 3-4 are repeated until the algorithm converges.
As can be seen from steps 1-5, the existing scheme ensures that user data does not leave the local device, and the homomorphic encryption prevents the server from obtaining any gradient in plaintext throughout training, so the original data cannot be inferred from a single gradient. However, the repeated encryption and decryption required by homomorphic encryption makes training inefficient; and if homomorphic encryption is removed and the plaintext gradients of individual clients are summed directly, the original data can be inferred after several training steps, so the security of local data cannot be guaranteed. How to resolve this efficiency-versus-security dilemma of the existing secure matrix decomposition algorithm has therefore become an urgent problem for the industry.
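The homomorphic step in the background scheme relies on the additive property of Paillier encryption. The following minimal sketch (toy primes, pure Python, emphatically not secure; all key sizes and variable names are illustrative assumptions) shows how a server could update C_I to C_I - C_G entirely on ciphertexts:

```python
# A minimal Paillier sketch (toy primes, NOT secure) illustrating the
# background scheme: additive homomorphism lets the server compute
# C_I - C_G on ciphertexts without ever seeing I or G in the clear.
import random
from math import gcd

def keygen(p=1789, q=1861):              # toy primes; real keys are ~2048-bit
    n = p * q
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    mu = pow(lam, -1, n)                 # valid because we take g = n + 1
    return (n, n + 1), (lam, mu)

def enc(pk, m):
    n, g = pk
    while True:                          # pick r coprime to n
        r = random.randrange(1, n)
        if gcd(r, n) == 1:
            break
    return pow(g, m % n, n * n) * pow(r, n, n * n) % (n * n)

def dec(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    return (pow(c, lam, n * n) - 1) // n * mu % n

pk, sk = keygen()
I, G = 12345, 678                        # a matrix entry and its gradient
C_I, C_G = enc(pk, I), enc(pk, G)
# Server-side homomorphic update: multiplying by enc(-G) decrypts to I - G.
C_I_new = C_I * enc(pk, -G) % (pk[0] ** 2)
print(dec(pk, sk, C_I_new))  # 11667
```

Every training round pays one such modular exponentiation per matrix entry on both the encryption and decryption side, which is the inefficiency the present method avoids.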
Disclosure of Invention
The invention aims to make the recommendation-model training process more efficient while ensuring that local data is not leaked during model training, and to this end provides a matrix decomposition method based on secure aggregation and key exchange.
To achieve the purpose, the invention adopts the following technical scheme:
the matrix decomposition method based on the secure aggregation and the key exchange comprises the following steps:
s1, recording a dispatcher of a federal learning framework as a server, and each participating trainer as a client, wherein the server broadcasts an initialized embedded matrix I of an article to each client;
S2, each client X uses the embedding matrix I to compute the gradient ∇U_X of its local user embedding matrix U_X, and uses ∇U_X to update U_X;
S3, each client X uses the locally updated U_X to compute the gradient G^X it contributes to the embedding matrix I;
S4, the gradients G^X are updated (masked) by a key exchange method and summarized to obtain ∇I, after which ∇I is used to update the embedding matrix I;
s5, repeating the steps S2-S4 until the termination condition of federal learning is reached.
Preferably, in step S2, the gradient ∇U_i^X of the embedding vector U_i^X of local user i in the embedding matrix U_X is calculated by the following formula (1):

∇U_i^X = -2 · Σ_{j∈Ω_i^X} (M_{ij}^X - U_i^X · I_j^T) · I_j    (1)

In formula (1), the loss function of client X for federated learning is L = ||Ω^X ⊙ (M^X - U_X·I^T)||_F^2 + λ_U·||U_X||_F^2 + λ_I·||I||_F^2, of which formula (1) gives the partial derivative with respect to U_i^X over the scored entries;

M^X denotes the scoring matrix at client X;

I^T is the matrix transpose of I;

||·||_F denotes the Frobenius norm of a matrix;

I_j ∈ R^{1×k} denotes the embedding vector of item j common to all clients, i.e. the j-th row of the embedding matrix I = [I_1, I_2, ..., I_j, ..., I_d] ∈ R^{d×k};

I_j^T denotes the vector transpose of I_j;

M_{ij}^X denotes the score of user i owned by client X for item j (missing entries, where user i has no actual score for item j, are to be predicted after modeling is completed);

Ω_i^X denotes the set of items j actually scored by user i owned by client X;

Σ_{j∈Ω_i^X} denotes summation, with respect to the token j, over the items j actually scored by user i owned by client X.
Preferably, in step S2, the user embedding matrix local to each client X is updated by the following formula (2):

U_X ← U_X - η · (∇U_X + 2·λ_U·U_X)    (2)

In formula (2), λ_U denotes the regularization parameter of U_X, η denotes the learning rate, and ∇U_X is the gradient whose rows are the ∇U_i^X of formula (1).
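Formulas (1) and (2) can be sketched in NumPy as follows; the shapes, learning rate and regularization strength are illustrative assumptions, not values fixed by the patent:

```python
# Sketch of formulas (1)-(2): gradient of the local user embeddings on the
# observed entries of the local scoring matrix, then a descent update.
import numpy as np

rng = np.random.default_rng(0)
m, d, k = 4, 5, 3                    # local users, common items, embed dim
M = rng.uniform(1, 5, (m, d))        # local scoring matrix M^X
O = rng.random((m, d)) > 0.4         # mask: True where user i scored item j
U = rng.standard_normal((m, k))      # local user embeddings U_X
I = rng.standard_normal((d, k))      # shared item embeddings

eta, lam_U = 0.01, 0.1
# Formula (1): grad_U[i] = -2 * sum_j (M_ij - U_i I_j^T) I_j over scored j
grad_U = -2.0 * (O * (M - U @ I.T)) @ I
# Formula (2): U <- U - eta * (grad_U + 2 * lam_U * U)
U_new = U - eta * (grad_U + 2 * lam_U * U)
print(U_new.shape)  # (4, 3)
```

The boolean mask O plays the role of the index set Ω_i^X: unobserved entries contribute nothing to the residual.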
preferably, in step S3, the embedding vector I of the associated item j in the embedding matrix I j Corresponding gradientCalculated by the following formula (3):
in the formula (3),representation->Is the j-th row of (2);
an embedded vector I representing the item j common to all the clients j Is a vector transpose of (2);
representing the embedding matrix U X An embedded vector of a related local user i;
a score representing the user i locally owned by the client X with respect to the item j;
representing those users i owned by the client X who have a scoring behavior on item j;
representing the summation of those users i owned by the client X who have scored the item j with respect to the token i.
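A matching NumPy sketch of formula (3), again with illustrative shapes, computes each client's contribution to the item-matrix gradient in one vectorized step:

```python
# Sketch of formula (3): each client's contribution G^X to the gradient of
# the shared item matrix I, one row per item.
import numpy as np

rng = np.random.default_rng(1)
m, d, k = 4, 5, 3
M = rng.uniform(1, 5, (m, d))        # local scoring matrix M^X
O = rng.random((m, d)) > 0.4         # True where user i scored item j
U = rng.standard_normal((m, k))      # locally updated user embeddings U_X
I = rng.standard_normal((d, k))      # shared item embeddings

# Formula (3): G[j] = -2 * sum_i (M_ij - U_i I_j^T) U_i over scoring users i
G = -2.0 * (O * (M - U @ I.T)).T @ U   # shape (d, k), one row per item j
print(G.shape)  # (5, 3)
```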
Preferably, in step S4, the key exchange method adopted to update the gradient G^X specifically comprises the following steps:

S41, each client X locally generates a private key s_X and a public key p_X; the server exchanges the public keys generated by the clients, and each client X obtains the corresponding exchanged public-key set, denoted C_X;

S42, from C_X and the locally generated private key s_X, each client X generates a key agreement with every other client Y, denoted key_agreement(X, Y);

S43, client X uses each locally generated key_agreement(X, Y) as a seed to generate a mask, denoted mask(X, Y), and uses the masks to update the gradient G^X of step S3.
Preferably, in step S41, C_X is expressed by the following expression (4):

C_X = {p_1, ..., p_X, ..., p_N}    (4)

In expression (4), p_X = g^{s_X} % p denotes the public key locally generated by client X;

p denotes a prime number, agreed by all clients in advance;

g denotes a primitive root modulo p, agreed by all clients in advance;

% p denotes the modulo operation with respect to the prime p;

{p_1, ..., p_X, ..., p_N} denotes the set of locally generated public keys of all N clients received by the server.
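Expression (4) amounts to standard Diffie-Hellman public-key generation. A toy sketch (small prime and illustrative parameter values; as the description notes, a real deployment would use a ~2048-bit prime) might look like:

```python
# Sketch of expression (4): each client X draws a private key s_X and
# publishes p_X = g**s_X % p; the server hands every client the full
# public-key set C_X = {p_1, ..., p_N}. Toy, pre-agreed parameters.
import random

p, g = 2147483647, 5                 # agreed prime and generator (toy)
rng = random.Random(7)
N = 3

s = {X: rng.randrange(2, p - 1) for X in range(1, N + 1)}   # private keys
pub = {X: pow(g, s[X], p) for X in range(1, N + 1)}         # public keys p_X
C = [pub[Y] for Y in range(1, N + 1)]  # expression (4): set sent to each X
print(len(C))  # 3
```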
Preferably, in step S42, key_agreement(X, Y) is generated as follows:

client X takes the public key p_Y of client Y out of its exchanged public-key set C_X;

client X generates key_agreement(X, Y) from the public key p_Y and its locally generated private key s_X.
Preferably, the generation formula of key_agreement(X, Y) is expressed as the following formula (5):

key_agreement(X, Y) = p_Y^{s_X} % p    (5)

In formula (5), p_Y^{s_X} denotes p_Y raised to the power s_X;

p denotes the prime number agreed by all clients in advance;

% p denotes the modulo operation with respect to the prime p.
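Formula (5) can be checked in a few lines: since p_Y^{s_X} = g^{s_X·s_Y} = p_X^{s_Y} (mod p), both clients derive the same agreement value without revealing their private keys. Toy parameters, for illustration only:

```python
# Sketch of formula (5): key_agreement(X, Y) = p_Y ** s_X % p, and its
# symmetry key_agreement(X, Y) == key_agreement(Y, X).
import random

p, g = 2147483647, 5           # agreed prime and generator (toy values)
rng = random.Random(42)

s_A, s_B = rng.randrange(2, p - 1), rng.randrange(2, p - 1)  # private keys
p_A, p_B = pow(g, s_A, p), pow(g, s_B, p)                    # public keys

key_AB = pow(p_B, s_A, p)      # computed by client A
key_BA = pow(p_A, s_B, p)      # computed by client B
print(key_AB == key_BA)        # True: both sides agree
```

This symmetry is exactly what lets two clients later derive the same mask independently.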
Preferably, in step S43, the gradient G^X is updated by the following formula (6):

G̃^X = G^X + Σ_{Y∈{1,2,...,N}\{X}} a(X, Y) · mask(X, Y)    (6)

In formula (6), a(X, Y) equals 1 or -1: with the clients numbered {1, 2, ..., X, ..., N}, a(X, Y) equals 1 if the number of client X is greater than the number of client Y, and equals -1 otherwise;

Σ_{Y∈{1,2,...,N}\{X}} denotes summation, with respect to the token Y, over all clients Y other than X.
Preferably, in step S4, the summarized gradient ∇I is expressed by the following formula (7):

∇I = Σ_{X=1}^{N} G̃^X = Σ_{X=1}^{N} G^X    (7)

since key_agreement(X, Y) = key_agreement(Y, X), the two copies of each mask(X, Y) enter the sum with opposite signs a(X, Y) = -a(Y, X) and cancel;

in step S4, the method of updating the embedding matrix I is expressed by the following formula (8):

I ← I - η · (∇I + 2·λ_I·I)    (8)

In formula (8), λ_I denotes the regularization parameter of the embedding matrix I.
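The cancellation behind formula (7) can be demonstrated end-to-end: if every pair of clients seeds a generator with the same agreement value and applies the sign convention a(X, Y) of formula (6), the server's sum of masked gradients equals the sum of true gradients. The keys, shapes and learning rate below are illustrative assumptions:

```python
# Sketch of formulas (6)-(8): pairwise seeded masks cancel in the
# server-side sum, so the server recovers sum_X G^X without seeing any
# individual G^X.
import numpy as np

d, k, N = 5, 3, 3
rng = np.random.default_rng(2)
G = {X: rng.standard_normal((d, k)) for X in range(1, N + 1)}  # true G^X

# key_agreement(X, Y) is symmetric; here we fake it with a shared integer.
key = {frozenset((X, Y)): 1000 * X + Y
       for X in range(1, N + 1) for Y in range(X + 1, N + 1)}

def mask(X, Y):
    # Both endpoints seed with the same agreement value, so they produce
    # the identical matrix.
    return np.random.default_rng(key[frozenset((X, Y))]).standard_normal((d, k))

def a(X, Y):                    # sign convention from formula (6)
    return 1 if X > Y else -1

# Formula (6): each client uploads a masked gradient.
G_masked = {X: G[X] + sum(a(X, Y) * mask(X, Y)
                          for Y in range(1, N + 1) if Y != X)
            for X in range(1, N + 1)}

# Formula (7): the server's sum equals the unmasked sum.
agg = sum(G_masked.values())

# Formula (8): update the shared item matrix.
eta, lam_I = 0.01, 0.1
I_mat = rng.standard_normal((d, k))
I_new = I_mat - eta * (agg + 2 * lam_I * I_mat)
print(I_new.shape)  # (5, 3)
```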
Preferably, the gradient G^X generated in step S3 is noised before proceeding to step S4; the noising method is expressed by the following formula (9):

G^X ← G^X + n_X    (9)

In formula (9), n_X denotes Gaussian noise of the same size as G^X.
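A short NumPy sketch of formula (9), with an assumed noise scale σ (the description does not fix one):

```python
# Sketch of formula (9): before masking, each client perturbs its item
# gradient with Gaussian noise of the same shape (a differential-privacy
# style step; sigma is an illustrative assumption).
import numpy as np

rng = np.random.default_rng(3)
d, k, sigma = 5, 3, 0.1
G = rng.standard_normal((d, k))          # true gradient G^X
n_X = rng.normal(0.0, sigma, (d, k))     # Gaussian noise, same shape as G^X
G_noised = G + n_X                       # formula (9)
print(G_noised.shape)  # (5, 3)
```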
The invention has the following beneficial effects:
1. The locally computed gradients ∇U_X and the securely aggregated gradients G^X serve as the training signals of the recommendation model, ensuring that user data does not leave the local device and making the recommendation-model training process safer.
2. Masking and noising the gradients effectively avoids the leakage of source-data information that exposing the true gradients would cause.
3. Compared with the homomorphic encryption technology adopted in the background art, the secure-aggregation-based gradient summarization avoids the computational complexity of gradient encryption and decryption, computes faster, and improves the training speed of the recommendation model.
4. The recommendation model is trained with the matrix decomposition algorithm provided by this application under the federated learning framework; during model training the participants never need to exchange local data, which more effectively ensures that local data is not leaked.
Drawings
In order to more clearly illustrate the technical solution of the embodiments of the present invention, the drawings that are required to be used in the embodiments of the present invention will be briefly described below. It is evident that the drawings described below are only some embodiments of the present invention and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 is a diagram of steps for implementing a matrix decomposition method based on secure aggregation and key exchange according to an embodiment of the present invention;
fig. 2 is a flow chart of a matrix decomposition method based on secure aggregation and key exchange according to an embodiment of the present invention.
Detailed Description
The technical scheme of the invention is further described below by the specific embodiments with reference to the accompanying drawings.
Wherein the drawings are for illustrative purposes only and are shown in schematic, non-physical, and not intended to be limiting of the present patent; for the purpose of better illustrating embodiments of the invention, certain elements of the drawings may be omitted, enlarged or reduced and do not represent the size of the actual product; it will be appreciated by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The same or similar reference numbers in the drawings of the embodiments of the invention correspond to the same or similar components. In the description of the invention, terms such as "upper", "lower", "left", "right", "inner" and "outer" indicate orientations or positional relationships based on those shown in the drawings; they are used only for convenience and simplicity of description, not to indicate or imply that the referenced apparatus or elements must have a specific orientation or be constructed and operated in a specific orientation. Such positional terms are therefore exemplary only and should not be construed as limiting this patent; their specific meaning can be understood by those of ordinary skill in the art according to the specific circumstances.
In the description of the invention, unless explicitly stated and limited otherwise, terms such as "coupled" should be interpreted broadly: the connection may be fixed, detachable or integral; mechanical or electrical; direct or indirect through an intermediate medium; and it may be a communication between, or an interaction relationship of, two components. The specific meaning of these terms in the invention will be understood by those of ordinary skill in the art in specific cases.
Taking three clients A, B and C as an example, how the matrix decomposition method based on secure aggregation and key exchange provided in this embodiment is specifically implemented is described below:
The dispatcher in the federated learning framework is recorded as the server and each participating trainer as a client; M denotes a scoring matrix (for example, the matrix of movie ratings by a number of imdb users, containing some missing entries that need to be predicted and filled in); U_A, U_B, U_C denote the local-user embedding matrices of clients A, B, C respectively (the local users are represented numerically by these matrices); and I denotes the embedding matrix of the items (the common items are represented numerically by this matrix). As shown in fig. 2, the specific implementation steps of the matrix decomposition method based on secure aggregation and key exchange provided in this embodiment are as follows:
1. All parties agree on the embedding dimension k (the embedding dimension determines how large a space is used to numerically represent users and items); the server initializes the item embedding matrix I according to this dimension, and clients A, B, C initialize their own local-user embedding matrices U_A, U_B, U_C according to the same dimension;
2. The server broadcasts the embedded matrix I to the client A, B, C;
3. Client A uses the embedding matrix I to compute, for each of its local users i, the gradient ∇U_i^A = -2 · Σ_{j∈Ω_i^A} (M_{ij}^A - U_i^A · I_j^T) · I_j, and then updates its local user embedding matrix as U_A ← U_A - η · (∇U_A + 2·λ_U·U_A). Here m_A denotes the total number of users of client A, I_j denotes the embedding vector of item j common to all clients, I_j^T its vector transpose, M_{ij}^A the score of user i owned by client A for item j, Ω_i^A the set of items j actually scored by user i owned by client A (summed with respect to the token j), and λ_U the regularization parameter of U_A;
The calculation of the corresponding gradients ∇U_B, ∇U_C and the updating of U_B, U_C at clients B and C are the same as at client A and are not repeated here;
4. Client A uses the locally updated U_A to compute the gradient it contributes to the embedding matrix I, row by row: G_j^A = -2 · Σ_{i∈Ω_j^A} (M_{ij}^A - U_i^A · I_j^T) · U_i^A, where d denotes the total number of common items and Ω_j^A denotes the set of users i owned by client A who have scored item j (summed with respect to the token i);
The calculation of the corresponding gradients G^B, G^C at clients B and C is the same as at client A and is not repeated here;
To avoid exposing the true gradients, the gradient of each client is preferably noised; more preferably, clients A, B, C add Gaussian noise n_A, n_B, n_C to G^A, G^B, G^C respectively by differential privacy techniques. Taking client A as an example, n_A denotes a generated random matrix (of the same size as G^A), and G^A is updated to G^A + n_A;
5. Clients A, B, C each locally generate a public/private key pair: p_A, p_B, p_C denote the public keys locally generated by clients A, B, C, and s_A, s_B, s_C the corresponding locally generated private keys. Taking client A as an example, the private key s_A is a locally generated random number (smaller in value than p), and the public key (calculated from the private key s_A) is p_A = g^{s_A} % p, where g^{s_A} denotes g raised to the power s_A, g denotes the generator (a primitive root modulo p; a small value such as 2 may simply be taken), p is a large prime number (2048 bits is typical), and % p denotes the modulo operation with respect to p; g and p are agreed by all clients in advance;
6. The server collects all public keys p_A, p_B, p_C; the public keys sent to client A are p_B, p_C, the public keys sent to client B are p_A, p_C, and the public keys sent to client C are p_A, p_B;
7. Client A generates key_agreement(A, B) with client B and key_agreement(A, C) with client C from the public keys p_B, p_C and its locally generated private key s_A; client B generates key_agreement(A, B) with client A and key_agreement(B, C) with client C from the public keys p_A, p_C and its private key s_B; client C generates key_agreement(A, C) with client A and key_agreement(B, C) with client B from the public keys p_A, p_B and its own private key s_C. Taking client A as an example, key_agreement(A, B) = p_B^{s_A} % p and key_agreement(A, C) = p_C^{s_A} % p, where p_B^{s_A} denotes p_B raised to the power s_A, p_C^{s_A} denotes p_C raised to the power s_A, and % p denotes the modulo operation with respect to p.
8. Client A uses the local key_agreement(A, B) as a seed to generate mask(A, B) and the local key_agreement(A, C) as a seed to generate mask(A, C), and updates its gradient to G̃^A = G^A - mask(A, B) - mask(A, C). Client B uses the local key_agreement(A, B) as a seed to generate mask(A, B) and the local key_agreement(B, C) as a seed to generate mask(B, C), and updates its gradient to G̃^B = G^B + mask(A, B) - mask(B, C). Client C uses the local key_agreement(A, C) as a seed to generate mask(A, C) and the local key_agreement(B, C) as a seed to generate mask(B, C), and updates its gradient to G̃^C = G^C + mask(A, C) + mask(B, C).
Taking client A as an example, mask(A, B) is a random matrix of the same size and shape as G^A, generated with key_agreement(A, B) as the seed (generated directly by calling a seeded open-source library function). Because key_agreement(A, B) = key_agreement(B, A), clients A and B generate identical masks, which enter their masked gradients with opposite signs.
9. The server sums the masked gradients to obtain ∇I = G̃^A + G̃^B + G̃^C = G^A + G^B + G^C (the masks cancel in pairs), then updates I as I ← I - η · (∇I + 2·λ_I·I), where λ_I denotes the regularization parameter of the embedding matrix I;
10. Steps 2-9 are repeated until the maximum number of training rounds of the federated recommendation model is reached or the algorithm converges.
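The whole round for clients A, B, C can be simulated in one short script; everything below (shapes, the prime, the learning rate, the noise scale) is an illustrative assumption rather than a value prescribed by the embodiment:

```python
# End-to-end sketch of one training round: local updates (formulas 1-3),
# Gaussian noise (9), pairwise masking via toy Diffie-Hellman agreements
# (4-6), and server-side aggregation and update (7-8).
import random
import numpy as np

m, d, k = 4, 6, 3
eta, lam_U, lam_I, sigma = 0.01, 0.1, 0.1, 0.01
prime, g = 2147483647, 5                 # agreed DH parameters (toy)
clients = [1, 2, 3]                      # A, B, C

rng = np.random.default_rng(0)
I_mat = rng.standard_normal((d, k))      # server-initialized item matrix
data = {X: (rng.uniform(1, 5, (m, d)),   # local scores M^X
            rng.random((m, d)) > 0.5,    # observed-entry mask
            rng.standard_normal((m, k))) # local user matrix U_X
        for X in clients}

def local_round(X):
    M, O, U = data[X]
    R = O * (M - U @ I_mat.T)            # residual on scored entries
    U -= eta * (-2 * R @ I_mat + 2 * lam_U * U)         # formulas (1)-(2)
    R = O * (M - U @ I_mat.T)
    G = -2 * R.T @ U                                     # formula (3)
    return G + rng.normal(0, sigma, (d, k))              # formula (9)

G = {X: local_round(X) for X in clients}

pyrng = random.Random(1)
s = {X: pyrng.randrange(2, prime - 1) for X in clients}  # private keys
p = {X: pow(g, s[X], prime) for X in clients}            # public keys (4)
def agreement(X, Y):
    return pow(p[Y], s[X], prime)                        # formula (5)
def mask(X, Y):                          # same seed on both endpoints
    return np.random.default_rng(agreement(X, Y)).standard_normal((d, k))

masked = {X: G[X] + sum((1 if X > Y else -1) * mask(X, Y)
                        for Y in clients if Y != X)      # formula (6)
          for X in clients}

agg = sum(masked.values())               # formula (7): masks cancel
I_new = I_mat - eta * (agg + 2 * lam_I * I_mat)          # formula (8)
print(I_new.shape)  # (6, 3)
```

In a real deployment each client would run its part of this loop on its own machine; here the single process merely makes the mask cancellation easy to verify.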
In short, the matrix decomposition method based on secure aggregation and key exchange provided in this embodiment, as shown in fig. 1, includes the steps of:
s1, recording a dispatcher of a federal learning framework as a server, taking each participating training party as a client, and broadcasting an initialized embedded matrix I of an article to each client by the server;
S2, each client X uses the embedding matrix I to compute the gradient ∇U_X of its local user embedding matrix U_X, and uses ∇U_X to update U_X;
S3, each client X uses the locally updated U_X to compute the gradient G^X it contributes to the embedding matrix I;
S4, the gradients G^X are updated (masked) by a key exchange method and summarized to obtain ∇I, after which ∇I is used to update the embedding matrix I;
s5, repeating the steps S2-S4 until the termination condition of federal learning is reached.
In conclusion, securely aggregating the gradient of the item matrix I of the matrix-decomposition objective under the federated learning framework provides a new idea for enhancing data security in federated learning; the locally computed gradients ∇U_X and the securely aggregated gradients G^X serve as the training signals of the recommendation model (i.e., the federated learning model), ensuring that user data does not leave the local device and making the recommendation-model training process safer; masking and noising the gradients effectively avoids the leakage of source-data information that exposing the true gradients would cause; and compared with the homomorphic encryption technology adopted in the background art, the secure-aggregation-based gradient summarization avoids the computational complexity of gradient encryption and decryption, computes faster, and improves the training speed of the recommendation model.
It should be understood that the above description is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be apparent to those skilled in the art that various modifications, equivalents, variations, and the like can be made to the present invention. However, such modifications are intended to fall within the scope of the present invention without departing from the spirit of the present invention. In addition, some terms used in the specification and claims of the present application are not limiting, but are merely for convenience of description.
Claims (3)
1. A matrix factorization method based on secure aggregation and key exchange, comprising the steps of:
s1, recording a dispatcher of a federal learning framework as a server, and each participating trainer as a client, wherein the server broadcasts an initialized embedded matrix I of an article to each client;
S2, each client X uses the embedding matrix I to compute the gradient ∇U_X of its local user embedding matrix U_X, and uses ∇U_X to update U_X;
S3, each client X uses the locally updated U_X to compute the gradient G^X it contributes to the embedding matrix I;
S4, client X, in conjunction with the server, updates the gradient G^X by a key exchange method, and the gradients are summarized to obtain ∇I, after which ∇I is used to update the embedding matrix I;
s5, repeating the steps S2-S4 until the termination condition of federal learning is reached;
in step S3, the gradient G_j^X corresponding to the embedding vector I_j of item j in the embedding matrix I is calculated by the following formula (3):

G_j^X = -2 · Σ_{i∈Ω_j^X} (M_{ij}^X - U_i^X · I_j^T) · U_i^X    (3)

in formula (3), G_j^X denotes the j-th row of G^X;

I_j^T denotes the vector transpose of the embedding vector I_j of item j common to all clients;

U_i^X denotes the embedding vector of local user i in the embedding matrix U_X;

M_{ij}^X denotes the score of user i locally owned by client X for item j;

Ω_j^X denotes the set of users i owned by client X who have scored item j;

Σ_{i∈Ω_j^X} denotes summation, with respect to the token i, over those users i owned by client X who have scored item j.
2. The matrix factorization method based on secure aggregation and key exchange of claim 1, wherein in step S4, the summarized gradient ∇I is expressed by the following formula (7):

∇I = Σ_{X=1}^{N} G̃^X = Σ_{X=1}^{N} G^X    (7)

in step S4, the method of updating the embedding matrix I is expressed by the following formula (8):

I ← I - η · (∇I + 2·λ_I·I)    (8)

in formula (8), λ_I denotes the regularization parameter of the embedding matrix I.
3. The matrix factorization method based on secure aggregation and key exchange according to claim 1, wherein the gradient G^X generated in step S3 is noised before proceeding to step S4; the noising method is expressed by the following formula (9):

G^X ← G^X + n_X    (9)

in formula (9), n_X denotes Gaussian noise.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310620692.9A CN116545734A (en) | 2022-07-28 | 2022-07-28 | Matrix decomposition method based on security aggregation and key exchange |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310620692.9A CN116545734A (en) | 2022-07-28 | 2022-07-28 | Matrix decomposition method based on security aggregation and key exchange |
CN202210899003.8A CN115225405B (en) | 2022-07-28 | 2022-07-28 | Matrix decomposition method based on security aggregation and key exchange under federal learning framework |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210899003.8A Division CN115225405B (en) | 2022-07-28 | 2022-07-28 | Matrix decomposition method based on security aggregation and key exchange under federal learning framework |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116545734A true CN116545734A (en) | 2023-08-04 |
Family
ID=83614120
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310622218.XA Pending CN116545735A (en) | 2022-07-28 | 2022-07-28 | Matrix decomposition method under federal learning framework |
CN202310620692.9A Pending CN116545734A (en) | 2022-07-28 | 2022-07-28 | Matrix decomposition method based on security aggregation and key exchange |
CN202210899003.8A Active CN115225405B (en) | 2022-07-28 | 2022-07-28 | Matrix decomposition method based on security aggregation and key exchange under federal learning framework |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310622218.XA Pending CN116545735A (en) | 2022-07-28 | 2022-07-28 | Matrix decomposition method under federal learning framework |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210899003.8A Active CN115225405B (en) | 2022-07-28 | 2022-07-28 | Matrix decomposition method based on security aggregation and key exchange under federal learning framework |
Country Status (1)
Country | Link |
---|---|
CN (3) | CN116545735A (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115249074B (en) * | 2022-07-28 | 2023-04-14 | 上海光之树科技有限公司 | Distributed federal learning method based on Spark cluster and Ring-AllReduce architecture |
CN115865307B (en) * | 2023-02-27 | 2023-05-09 | 蓝象智联(杭州)科技有限公司 | Data point multiplication operation method for federal learning |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10630655B2 (en) * | 2017-05-18 | 2020-04-21 | Robert Bosch Gmbh | Post-quantum secure private stream aggregation |
CN112732297B (en) * | 2020-12-31 | 2022-09-27 | 平安科技(深圳)有限公司 | Method and device for updating federal learning model, electronic equipment and storage medium |
CN113420232B (en) * | 2021-06-02 | 2022-05-10 | 杭州电子科技大学 | Privacy protection-oriented federated recommendation method for neural network of graph |
CN114564742B (en) * | 2022-02-18 | 2024-05-14 | 北京交通大学 | Hash learning-based lightweight federal recommendation method |
CN114510652B (en) * | 2022-04-20 | 2023-04-07 | 宁波大学 | Social collaborative filtering recommendation method based on federal learning |
- 2022-07-28 CN CN202310622218.XA patent/CN116545735A/en active Pending
- 2022-07-28 CN CN202310620692.9A patent/CN116545734A/en active Pending
- 2022-07-28 CN CN202210899003.8A patent/CN115225405B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN115225405B (en) | 2023-04-21 |
CN116545735A (en) | 2023-08-04 |
CN115225405A (en) | 2022-10-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115225405B (en) | Matrix decomposition method based on security aggregation and key exchange under federal learning framework | |
Xing et al. | Mutual privacy preserving $ k $-means clustering in social participatory sensing | |
US7526084B2 (en) | Secure classifying of data with Gaussian distributions | |
Zhao et al. | PVD-FL: A privacy-preserving and verifiable decentralized federated learning framework | |
CN109194507B (en) | Non-interactive privacy protection neural network prediction method | |
Rahulamathavan et al. | Privacy-preserving multi-class support vector machine for outsourcing the data classification in cloud | |
CN111104968B (en) | Safety SVM training method based on block chain | |
Minelli | Fully homomorphic encryption for machine learning | |
CN115842627A (en) | Decision tree evaluation method, device, equipment and medium based on secure multi-party computation | |
Zhang et al. | SecureTrain: An approximation-free and computationally efficient framework for privacy-preserved neural network training | |
CN115186831A (en) | Deep learning method with efficient privacy protection | |
CN116167088A (en) | Method, system and terminal for privacy protection in two-party federal learning | |
CN113098682B (en) | Multi-party security computing method and device based on block chain platform and electronic equipment | |
CN113962286A (en) | Decentralized logistic regression classification prediction method based on piecewise function | |
CN115865307A (en) | Data point multiplication operation method for federal learning | |
CN116451804A (en) | Federal learning method based on homomorphic encryption and related equipment thereof | |
CN114358323A (en) | Third-party-based efficient Pearson coefficient calculation method in federated learning environment | |
Ogiela et al. | Security and privacy in distributed information management | |
Sun et al. | A lottery SMC protocol for the selection function in software defined wireless sensor networks | |
CN116248252B (en) | Data dot multiplication processing method for federal learning | |
CN114494803B (en) | Image data annotation method and system based on security calculation | |
Weng et al. | Privacy-Preserving Neural Network Based on Multi-key NTRU Cryptosystem | |
Rana | Cryptological Mathematics | |
CN108270562B (en) | Anti-quantum key agreement method | |
CN116846538A (en) | Grating-based additive homomorphic threshold decryption method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||