CN115643105B - Federated learning method and device based on homomorphic encryption and deep gradient compression - Google Patents

Federated learning method and device based on homomorphic encryption and deep gradient compression

Info

Publication number
CN115643105B
Authority
CN
China
Prior art keywords
parameter
parameters
server
deep gradient
model
Prior art date
Legal status
Active
Application number
CN202211438863.8A
Other languages
Chinese (zh)
Other versions
CN115643105A (en)
Inventor
方黎明
王伊蕾
吕庆喆
李涛
逯兆博
Current Assignee
Hangzhou Liang'an Technology Co ltd
Original Assignee
Hangzhou Liang'an Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Liang'an Technology Co ltd
Priority to CN202211438863.8A
Publication of CN115643105A
Application granted
Publication of CN115643105B

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 — Reducing energy consumption in communication networks
    • Y02D30/50 — Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a federated learning method and device based on homomorphic encryption and deep gradient compression, which specifically comprise the following steps: initializing key parameters to generate a public key and a private key; randomly selecting a plurality of users as participants in the current round of training; the server sends the initialization parameters or the parameter ciphertext to the participants; if a participant receives the parameter ciphertext, it decrypts the ciphertext with the private key to obtain the plaintext parameter and updates the model to be trained with it; the updated training model is used to predict on the local data set; whether the prediction result meets a termination condition is judged; if the termination condition is not met, the prediction result is processed with a deep gradient compression algorithm to obtain new model parameters; the model parameters are encrypted with the public key to obtain encrypted parameters, which are sent to the server. After receiving the encrypted parameters sent by each participant, the server performs an aggregation operation to obtain new encrypted parameters, which serve as the parameter ciphertext.

Description

Federated learning method and device based on homomorphic encryption and deep gradient compression
Technical Field
The invention relates to the technical field of federated learning, and in particular to a federated learning method and device based on homomorphic encryption and deep gradient compression.
Background
Gradients transmitted during federated learning may leak participants' data through gradient attacks, so homomorphic encryption is used to protect the privacy of the data parties. However, adding homomorphic encryption increases the communication overhead of federated learning. To alleviate this problem, a deep gradient compression algorithm is used to compress the gradients, so that the scheme can be deployed in large-scale Internet of Things device training scenarios.
In the prior art, federated learning schemes using homomorphic encryption usually incur large communication overhead, mainly from ciphertext expansion after encryption, which makes such schemes difficult to deploy in large-scale device training scenarios.
Disclosure of Invention
The invention aims to provide a federated learning method and device based on homomorphic encryption and deep gradient compression to overcome the above defects of the prior art.
In order to achieve the purpose, the invention provides the following technical scheme:
The invention discloses a federated learning method based on homomorphic encryption and deep gradient compression, which specifically comprises the following steps:
S1, initializing key parameters to generate a public key and a private key; each user retains the public-private key pair, and the server retains the public key;
S2, selecting a plurality of users as participants in the current round of training;
S3, the server sends the initialization parameters or the parameter ciphertext to the participants;
S4, the participants receive the initialization parameters or the parameter ciphertext sent by the server;
S41, if a participant receives the initialization parameters, it initializes the model to be trained and returns to step S4;
S42, if a participant receives the parameter ciphertext, it decrypts the ciphertext with the private key to obtain the plaintext parameter, updates the model to be trained with the plaintext parameter, and proceeds to step S5;
S5, predicting on the local data set with the updated training model; judging whether the prediction result meets a termination condition; if the termination condition is not met, processing the prediction result with a deep gradient compression algorithm to obtain new model parameters; encrypting the model parameters with the public key to obtain encrypted parameters; sending the encrypted parameters to the server and proceeding to step S6; if the termination condition is met, ending the program;
S6, after receiving the encrypted parameters sent by each participant, the server performs an aggregation operation to obtain new encrypted parameters; the new encrypted parameters are taken as the parameter ciphertext and the method returns to step S3 (a participant-side sketch of this loop follows).
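For concreteness, the following is a minimal participant-side sketch of steps S4-S5; the message format and the helper names (decrypt, encrypt, train_and_compress, reached_termination, and the model's load/predict methods) are illustrative assumptions, not part of the claimed method.

from dataclasses import dataclass

# Hypothetical message standing in for the server's transmission in S3/S4.
@dataclass
class ServerMsg:
    kind: str        # "init" (initialization parameters) or "cipher" (parameter ciphertext)
    params: object   # plaintext parameters or parameter ciphertext

def participant_round(msg, model, data, public_key, private_key,
                      decrypt, encrypt, train_and_compress, reached_termination):
    if msg.kind == "init":                            # S41: initialization parameters
        model.load(msg.params)
        return None                                   # wait for the next message (back to S4)
    model.load(decrypt(private_key, msg.params))      # S42: decrypt to plaintext parameters
    preds = model.predict(data)                       # S5: predict on the local data set
    if reached_termination(preds, data):
        return "done"                                 # termination condition met
    new_params = train_and_compress(model, data)      # deep gradient compression
    return encrypt(public_key, new_params)            # encrypted parameters for the server (S6)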
Preferably, in step S1, the key parameters are initialized by any user to generate a public key and a private key, which are then broadcast to the other users and the server; or the key parameters are initialized by a key server to generate the public key and private key, which are broadcast to all users and the server.
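As an illustration, the key initialization can be instantiated with an additively homomorphic scheme such as Paillier; the patent does not fix a concrete scheme, so Paillier and the python-paillier (phe) library are assumptions here.

from phe import paillier

# Assumed instantiation of S1 with Paillier (additively homomorphic).
# Any user, or a key server, generates the pair and broadcasts it;
# the server is given only public_key.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

c = public_key.encrypt(3.14)        # participants encrypt with the public key
m = private_key.decrypt(c)          # only key holders can decrypt; m == 3.14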
Preferably, step S2 specifically includes the following substeps:
S21, the server builds and maintains an IP pool from the IPs corresponding to all users;
S22, the server broadcasts the training information, and users respond to it by sending active signals;
S23, the server records the IP of each active signal and marks it as active in the IP pool;
S24, randomly selecting a plurality of IPs from all IPs in the active state as the participants in the current round of training (a server-side sketch follows).
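A minimal server-side sketch of this selection procedure; the IpPool class and its field names are illustrative assumptions.

import random

# Sketch of the IP pool in S21-S24: the server tracks each user's IP and
# its state, then randomly samples participants among the active IPs.
class IpPool:
    def __init__(self, user_ips):
        self.state = {ip: "dormant" for ip in user_ips}    # S21: build the pool

    def mark_active(self, ip):
        self.state[ip] = "active"                          # S23: record active signals

    def select_participants(self, k):
        active = [ip for ip, s in self.state.items() if s == "active"]
        return random.sample(active, min(k, len(active)))  # S24: random selection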
Preferably, in step S5, the updated training model is used to predict on the local data set; whether the prediction result meets a termination condition is judged; if the termination condition is not met, the prediction result is processed with a deep gradient compression algorithm to obtain new model parameters. This specifically comprises the following substeps:
S51, a participant divides its local data set into a plurality of subsets of the same size, where the data in each subset consist of data attributes and data labels;
S52, predicting the data in the subset with the updated training model to obtain prediction labels;
S53, calculating the difference between the prediction labels and the data labels through a loss function to obtain a difference value;
S54, judging whether the difference value meets the termination condition; if so, ending the program; otherwise, calculating a gradient value;
S55, processing the gradient values of the subsets with the deep gradient compression algorithm to obtain new model parameters.
Preferably, the termination condition in step S5 includes one of the following (a combined check is sketched after the list):
A1, the difference values of two successive rounds tend to be stable;
A2, the later difference value is larger than the earlier difference value;
A3, the difference value reaches a specified threshold;
A4, the number of iterations reaches the maximum.
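These four conditions can be checked together, for example as follows; the representation of the difference-value history and the eps, threshold, and max_iters values are illustrative assumptions.

# Illustrative check of termination conditions A1-A4 on a history of difference values.
def reached_termination(diffs, iteration, eps=1e-6, threshold=0.01, max_iters=100):
    if len(diffs) >= 2:
        prev, curr = diffs[-2], diffs[-1]
        if abs(curr - prev) < eps:          # A1: successive difference values stabilize
            return True
        if curr > prev:                     # A2: the later difference value is larger
            return True
    if diffs and diffs[-1] <= threshold:    # A3: difference value reaches the threshold
        return True
    return iteration >= max_iters           # A4: iteration count reaches the maximum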
The invention also discloses a federated learning device based on homomorphic encryption and deep gradient compression, which comprises a memory and one or more processors, wherein the memory stores executable code, and the one or more processors, when executing the executable code, implement the above federated learning method based on homomorphic encryption and deep gradient compression.
The invention also discloses a computer-readable storage medium storing a program which, when executed by a processor, implements the above federated learning method based on homomorphic encryption and deep gradient compression.
The invention has the beneficial effects that:
1. Homomorphic encryption of the gradients prevents privacy leakage through the gradients and protects the data security of local clients;
2. Deep gradient compression reduces the communication bandwidth required during federated learning training and lowers the training cost.
the features and advantages of the present invention will be described in detail by embodiments with reference to the accompanying drawings.
Drawings
FIG. 1 is a flowchart of the federated learning method based on homomorphic encryption and deep gradient compression according to the present invention;
FIG. 2 is a schematic structural diagram of the federated learning apparatus based on homomorphic encryption and deep gradient compression.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood, however, that the detailed description herein of specific embodiments is intended to illustrate the invention and not to limit the scope of the invention. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present invention.
Referring to FIG. 1, an embodiment of the present invention provides a federated learning method based on homomorphic encryption and deep gradient compression, which specifically includes the following steps:
S1, initializing key parameters to generate a public key and a private key; each user retains the public-private key pair, and the server retains the public key;
S2, selecting a plurality of users as participants in the current round of training;
S3, the server sends the initialization parameters or the parameter ciphertext to the participants;
S4, the participants receive the initialization parameters or the parameter ciphertext sent by the server;
S41, if a participant receives the initialization parameters, it initializes the model to be trained and returns to step S4;
S42, if a participant receives the parameter ciphertext, it decrypts the ciphertext with the private key to obtain the plaintext parameter, updates the model to be trained with the plaintext parameter, and proceeds to step S5;
S5, predicting on the local data set with the updated training model; judging whether the prediction result meets a termination condition; if the termination condition is not met, processing the prediction result with a deep gradient compression algorithm to obtain new model parameters; encrypting the model parameters with the public key to obtain encrypted parameters; sending the encrypted parameters to the server and proceeding to step S6; if the termination condition is met, ending the program.
S6, after receiving the encrypted parameters sent by each participant, the server performs an aggregation operation to obtain new encrypted parameters; the new encrypted parameters are taken as the parameter ciphertext and the method returns to step S3.
In a possible embodiment, in step S1, the key parameters are initialized by any user to generate a public key and a private key, which are then broadcast to the other users and the server; or the key parameters are initialized by a key server to generate the public key and private key, which are broadcast to all users and the server.
In a possible embodiment, step S2 specifically includes the following sub-steps:
S21, the server builds and maintains an IP pool from the IPs corresponding to all users;
S22, the server broadcasts the training information, and users respond to it by sending active signals;
S23, the server records the IP of each active signal and marks it as active in the IP pool;
S24, randomly selecting a plurality of IPs from all IPs in the active state as the participants in the current round of training.
In a possible embodiment, step S5 specifically includes the following sub-steps:
S51, a participant divides its local data set into a plurality of subsets of the same size, where the data in each subset consist of data attributes and data labels;
S52, predicting the data in the subset with the updated training model to obtain prediction labels;
S53, calculating the difference between the prediction labels and the data labels through a loss function to obtain a difference value;
S54, judging whether the difference value meets the termination condition; if so, ending the program; otherwise, calculating a gradient value;
S55, processing the gradient values of the subsets with the deep gradient compression algorithm to obtain new model parameters.
In a possible embodiment, the termination condition in step S5 comprises one of the following:
A1, the difference values of two successive rounds tend to be stable;
A2, the later difference value is larger than the earlier difference value;
A3, the difference value reaches a specified threshold;
A4, the number of iterations reaches the maximum.
An embodiment is as follows:
The method mainly comprises four stages, namely an initialization stage, a model training stage, a parameter aggregation stage, and a parameter updating stage, and requires at least one user and one server. The detailed operation of the training steps is as follows:
After any user generates a public-private key pair, it is broadcast to the other users; alternatively, the pair is generated by a key server and broadcast to all users. The server can receive only the public key and cannot obtain the private key.
configure all users in advance, unify training models, unify learning rates
Figure 336987DEST_PATH_IMAGE001
The number of iteration rounds L, etc.;
The server randomly generates parameters corresponding to the training model and broadcasts them to all users to initialize the users.
First, the server broadcasts the training information, and all users respond to it by sending active signals.
The server records the IPs corresponding to all response signals and establishes and maintains the IP pool; any IP that repeatedly fails to respond is set to a dormant state.
A certain number of IPs are then randomly selected from all those in the active state as the participants in this round of training.
Each selected participant P_i obtains the parameter sent by the server and judges whether it is a ciphertext; if so, it decrypts the ciphertext with the private key sk to obtain the new plaintext parameter; if the parameter is a plaintext, it is not processed. The obtained parameter is used to update the local model, after which training begins.
During training, each participant P_i partitions its local data set into subsets of size B and randomly selects one subset {(x_j, y_j)}, where x denotes the data attributes and y the data labels. The model predicts a label ŷ_j for each data point in the subset. A loss function ℓ computes the difference between the predicted value and the true value in the data, i.e. the batch loss (1/B) Σ_j ℓ(ŷ_j, y_j); the gradient of this loss is then computed to obtain the gradient value G, where each predicted value ŷ_j depends on the current model parameters w. Computing the gradients of all samples in the subset and averaging them gives
G = (1/B) Σ_j ∇_w ℓ(ŷ_j, y_j).
The deep gradient compression algorithm computes gradient values over multiple data sets of size B, accumulates the gradients, and executes a gradient sparsification step to improve gradient transmission efficiency; a momentum factor correction method is then used to mitigate excessively large updates, and stale gradients are clipped to reduce their influence on the final result, obtaining the new model parameters w'.
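A condensed sketch of this compression step, in the spirit of the deep gradient compression recipe (local gradient accumulation, top-k sparsification, momentum correction, and clipping of stale momentum); the sparsity and momentum values are illustrative assumptions.

import numpy as np

# Sketch of one deep gradient compression step; hyperparameters are illustrative.
def dgc_step(grad, velocity, residual, momentum=0.9, sparsity=0.99):
    velocity = momentum * velocity + grad           # momentum factor correction
    residual = residual + velocity                  # accumulate untransmitted gradient
    k = max(1, int(residual.size * (1 - sparsity)))
    threshold = np.partition(np.abs(residual).ravel(), -k)[-k]
    mask = np.abs(residual) >= threshold            # gradient sparsification (top-k)
    sparse_grad = residual * mask                   # the few entries sent this round
    residual = residual * ~mask                     # keep the rest for later rounds
    velocity = velocity * ~mask                     # clip stale ("old") momentum
    return sparse_grad, velocity, residual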
The model parameters w' are compressed and then encrypted, and the ciphertext is sent to the server, where the ciphertext c is Enc_pk(w').
The server receives the ciphertext parameter c_i passed by each participant P_i and enters a waiting state until the ciphertext parameters of all participants in this round of training have been received.
The server then performs a secure computation, aggregating all ciphertext parameters to obtain the new ciphertext parameter, which is taken as the parameter to be updated by each participant in the next round of training (a sketch of this aggregation follows the list below). Training repeats until the model converges or another termination condition is met, i.e. one of the following:
A1, the difference values of two successive rounds tend to be stable;
A2, the later difference value is larger than the earlier difference value;
A3, the difference value reaches a specified threshold;
A4, the number of iterations reaches the maximum.
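A minimal sketch of the server-side aggregation, again assuming Paillier via python-paillier (the patent only requires that ciphertexts can be aggregated homomorphically); the two-participant toy values are illustrative.

from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# Encrypted parameter vectors as received from two participants (toy values).
updates = [[public_key.encrypt(v) for v in (0.1, 0.2)],
           [public_key.encrypt(v) for v in (0.3, 0.4)]]

# Paillier ciphertexts support addition and multiplication by plaintext scalars,
# so the server can average the updates without seeing any plaintext.
n = len(updates)
aggregated = [sum(col) * (1.0 / n) for col in zip(*updates)]
# 'aggregated' is the new parameter ciphertext broadcast in the next round (S3);
# a key holder decrypting it would obtain approximately [0.2, 0.3] here.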
The embodiment of the federated learning device based on homomorphic encryption and deep gradient compression of the present invention can be applied to any device with data processing capability, such as a computer. The apparatus embodiments may be implemented by software, by hardware, or by a combination of hardware and software. Taking a software implementation as an example, as a logical device, the apparatus is formed by the processor of the device on which it runs reading the corresponding computer program instructions from nonvolatile memory into memory. In terms of hardware, FIG. 2 shows a hardware structure diagram of a device with data processing capability on which the federated learning apparatus based on homomorphic encryption and deep gradient compression is located; in addition to the processor, memory, network interface, and nonvolatile memory shown in FIG. 2, the device may also include other hardware according to its actual function, which is not described again here. The implementation process of the functions and effects of each unit in the above apparatus is described in the implementation process of the corresponding steps in the above method and is not repeated here.
Since the device embodiments substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant points. The device embodiments described above are merely illustrative; the units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the invention, which those of ordinary skill in the art can understand and implement without inventive effort.
An embodiment of the present invention further provides a computer-readable storage medium on which a program is stored; when executed by a processor, the program implements the federated learning method based on homomorphic encryption and deep gradient compression in the foregoing embodiments.
The computer-readable storage medium may be an internal storage unit, such as a hard disk or memory, of any of the devices with data processing capability described in the foregoing embodiments. It may also be an external storage device of such a device, such as a plug-in hard disk, Smart Media Card (SMC), SD card, or flash memory card (Flash Card) provided on the device. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the device. The computer-readable storage medium is used to store the computer program and the other programs and data required by the device, and may also be used to temporarily store data that has been or will be output.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents or improvements made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (7)

1. A federated learning method based on homomorphic encryption and deep gradient compression, characterized by comprising the following steps:
S1, initializing key parameters to generate a public key and a private key; each user retains the public-private key pair, and the server retains the public key;
S2, selecting a plurality of users as participants in the current round of training;
S3, the server sends the initialization parameters or the parameter ciphertext to the participants;
S4, the participants receive the initialization parameters or the parameter ciphertext sent by the server;
S41, if a participant receives the initialization parameters, it initializes the model to be trained and returns to step S4;
S42, if a participant receives the parameter ciphertext, it decrypts the ciphertext with the private key to obtain the plaintext parameter, updates the model to be trained with the plaintext parameter, and proceeds to step S5;
S5, predicting on the local data set with the updated training model; judging whether the prediction result meets a termination condition; if the termination condition is not met, processing the prediction result with a deep gradient compression algorithm to obtain new model parameters; encrypting the model parameters with the public key to obtain encrypted parameters; sending the encrypted parameters to the server and proceeding to step S6; if the termination condition is met, ending the program;
the step of processing the prediction result with the deep gradient compression algorithm to obtain new model parameters specifically comprises the following operations:
each participant randomly divides its local data set into subsets of size B and randomly selects one subset {(x_j, y_j)}, where x denotes the data attributes, y denotes the data labels, and B is the number of data in the subset; the training model predicts the labels ŷ_j; a loss function ℓ computes the difference between the predicted value and the true value in the data, i.e. the batch loss (1/B) Σ_j ℓ(ŷ_j, y_j); the gradient of this loss is then computed to obtain the gradient value G, where each predicted value ŷ_j depends on the current model parameters w; the gradients of all samples in the subset are computed and averaged, i.e. G = (1/B) Σ_j ∇_w ℓ(ŷ_j, y_j);
the deep gradient compression algorithm computes gradient values over multiple data sets of size B, accumulates the gradients, executes a gradient sparsification step, then mitigates excessively large updates with a momentum factor correction method, and clips stale gradients, obtaining the new model parameters w';
S6, after receiving the encrypted parameters sent by each participant, the server performs an aggregation operation to obtain new encrypted parameters; the new encrypted parameters are taken as the parameter ciphertext and the method returns to step S3.
2. The federated learning method based on homomorphic encryption and deep gradient compression according to claim 1, characterized in that: in step S1, the key parameters are initialized by any user to generate a public key and a private key, which are then broadcast to the other users and the server; or the key parameters are initialized by a key server to generate the public key and private key, which are broadcast to all users and the server.
3. The federated learning method based on homomorphic encryption and deep gradient compression according to claim 1, characterized in that step S2 specifically comprises the following substeps:
S21, the server builds and maintains an IP pool from the IPs corresponding to all users;
S22, the server broadcasts the training information, and users respond to it by sending active signals;
S23, the server records the IP of each active signal and marks it as active in the IP pool;
S24, randomly selecting a plurality of IPs from all IPs in the active state as the participants in the current round of training.
4. The federated learning method based on homomorphic encryption and deep gradient compression according to claim 1, characterized in that in step S5, the updated training model is used to predict on the local data set; whether the prediction result meets a termination condition is judged; if the termination condition is not met, the prediction result is processed with a deep gradient compression algorithm to obtain new model parameters; this specifically comprises the following substeps:
S51, a participant divides its local data set into a plurality of subsets of the same size, where the data in each subset consist of data attributes and data labels;
S52, predicting the data in the subset with the updated training model to obtain prediction labels;
S53, calculating the difference between the prediction labels and the data labels through a loss function to obtain a difference value;
S54, judging whether the difference value meets the termination condition; if so, ending the program; otherwise, calculating a gradient value;
S55, processing the gradient values of the subsets with the deep gradient compression algorithm to obtain new model parameters.
5. The federated learning method based on homomorphic encryption and deep gradient compression according to claim 4, characterized in that the termination condition in step S5 includes one of the following:
A1, the difference values of two successive rounds tend to be stable;
A2, the later difference value is larger than the earlier difference value;
A3, the difference value reaches a specified threshold;
A4, the number of iterations reaches the maximum.
6. A federated learning device based on homomorphic encryption and deep gradient compression, characterized in that: it comprises a memory and one or more processors, the memory storing executable code, and the one or more processors being configured to implement the federated learning method based on homomorphic encryption and deep gradient compression according to any one of claims 1-5 when executing the executable code.
7. A computer-readable storage medium, characterized in that: a program is stored thereon which, when executed by a processor, implements the federated learning method based on homomorphic encryption and deep gradient compression according to any one of claims 1-5.
CN202211438863.8A 2022-11-17 2022-11-17 Federated learning method and device based on homomorphic encryption and deep gradient compression Active CN115643105B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211438863.8A CN115643105B (en) 2022-11-17 2022-11-17 Federated learning method and device based on homomorphic encryption and deep gradient compression

Publications (2)

Publication Number Publication Date
CN115643105A CN115643105A (en) 2023-01-24
CN115643105B (en) 2023-03-10

Family

ID=84948695

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211438863.8A Active CN115643105B (en) 2022-11-17 2022-11-17 Federated learning method and device based on homomorphic encryption and deep gradient compression

Country Status (1)

Country Link
CN (1) CN115643105B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115865307B (en) * 2023-02-27 2023-05-09 蓝象智联(杭州)科技有限公司 Data point multiplication operation method for federal learning

Citations (3)

Publication number Priority date Publication date Assignee Title
CN111553483A (en) * 2020-04-30 2020-08-18 同盾控股有限公司 Gradient compression-based federated learning method, device and system
CN112383396A (en) * 2021-01-08 2021-02-19 索信达(北京)数据技术有限公司 Method and system for training federated learning model
CN113037460A (en) * 2021-03-03 2021-06-25 北京工业大学 Federal learning privacy protection method based on homomorphic encryption and secret sharing

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20220230092A1 (en) * 2021-01-21 2022-07-21 EMC IP Holding Company LLC Fast converging gradient compressor for federated learning

Non-Patent Citations (1)

Title
周俊; 方国英; 吴楠. A survey of federated learning security and privacy protection. 2020, (04), full text. *

Also Published As

Publication number Publication date
CN115643105A (en) 2023-01-24

Similar Documents

Publication Publication Date Title
Mishra et al. Delphi: A cryptographic inference system for neural networks
Mazurczyk et al. Steganography in modern smartphones and mitigation techniques
US8457304B2 (en) Efficient encoding processes and apparatus
JP6289680B2 (en) Packet transmission device, packet reception device, packet transmission program, and packet reception program
CN110610105A (en) Secret sharing-based authentication method for three-dimensional model file in cloud environment
US8464070B2 (en) Apparatus and method for transmitting and receiving data
CN115643105B (en) Federated learning method and device based on homomorphic encryption and deep gradient compression
WO2017107047A1 (en) User attribute matching method and terminal
CN109067517A (en) Encryption and decryption device, encryption and decryption method, and secret key communication means
JP2012528532A (en) Efficient method for calculating secret functions using resettable tamper-resistant hardware tokens
Cao et al. A privacy-preserving outsourcing data storage scheme with fragile digital watermarking-based data auditing
KR102038963B1 (en) Method and Apparatus for Selectively Providing Protection of Screen information data
Islam et al. Denoising and error correction in noisy AES-encrypted images using statistical measures
Hrytskiv et al. Cryptography and steganography of video information in modern communications
US8751819B1 (en) Systems and methods for encoding data
US20220311756A1 (en) Partial authentication tag aggregation to support interleaved encryption and authentication operations on multiple data records
Bentafat et al. Towards real-time privacy-preserving video surveillance
CN117349685A (en) Clustering method, system, terminal and medium for communication data
US10200356B2 (en) Information processing system, information processing apparatus, information processing method, and recording medium
KR101608378B1 (en) Asymmetric based image authentication method using photon-counting double random phase encoding
US10320559B2 (en) Network communication encoder using key pattern encryption
Shelke et al. Audio encryption algorithm using modified elliptical curve cryptography and arnold transform for audio watermarking
WO2018054144A1 (en) Method, apparatus, device and system for dynamically generating symmetric key
Peng et al. On the security of fully homomorphic encryption for data privacy in Internet of Things
CN112423277B (en) Security certificate recovery in bluetooth mesh networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant