CN116644433A - Data privacy and model safety test method for longitudinal federal learning - Google Patents
Data privacy and model safety test method for longitudinal federal learning
- Publication number: CN116644433A
- Application number: CN202310619376.XA
- Authority: CN (China)
- Prior art keywords: label, attacker, data, model, server
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F21/577—Assessing vulnerabilities and evaluating computer system security
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06N20/20—Machine learning; Ensemble learning
- Y02D30/50—Reducing energy consumption in wire-line communication networks, e.g. low power modes or reduced link rate
Abstract
Description
Technical Field
The present invention relates to the field of security for distributed federated learning in artificial intelligence, and more specifically to a data privacy and model security testing method for vertical federated learning.
Background Art
Federated learning is a distributed learning paradigm in artificial intelligence that allows multiple data owners to collaboratively train a global model without revealing their local private data. According to the data-sharing pattern between clients, federated learning can be divided into horizontal federated learning and vertical federated learning.
In horizontal federated learning, participants hold labeled datasets with the same feature space but different sample spaces. During training, each participant trains a local classifier through supervised learning and sends the weights to a central server for aggregation and updating. In vertical federated learning, participants hold datasets with different feature spaces but the same sample space. During training, each participant trains a local model using gradients returned from the server and sends the intermediate results output by its local model to the server for aggregation. In this process the server, and only the server, holds the data labels, and the server can train the final classifier through supervised learning on the aggregated intermediate results. Since backdoor attacks have already been studied extensively in horizontal federated learning, the present invention focuses on vertical federated learning.
A backdoor attack aims to train a backdoored model that behaves normally on benign input samples but misclassifies special inputs (samples carrying a trigger maliciously designed by the attacker) as a target label or other wrong labels. The federated learning setting is ideal for an attacker acting as a participant to mount backdoor attacks, because the server is not allowed to inspect participants' local data and local models. Backdoor attacks have been studied extensively in horizontal federated learning scenarios, where the attacker uploads a malicious client's model updates to the server in order to insert a backdoor into the global model.
Unlike traditional federated learning, vertical federated learning poses new challenges for backdoor attacks, the most pressing being the lack of access to the training data labels and the server model. To cope with these challenges, it is particularly important to design a sound scheme for obtaining the labels of target data and to carefully construct triggers so that a backdoor can be implanted into the black-box model on the server. To this end we propose the present invention, a label inference attack and backdoor attack method aimed at vertical federated learning. It provides effective label inference and backdoor attack strategies, and it also offers a new tool for testing the privacy protection and model security capabilities of vertical federated learning algorithms and the corresponding defense methods.
Therefore, proposing a data privacy and model security testing method for vertical federated learning that resolves the difficulties in the prior art is a problem that those skilled in the art urgently need to solve.
Summary of the Invention
In view of this, the present invention provides a data privacy and model security testing method for vertical federated learning, which tests the defense capability of vertical federated learning algorithms and the corresponding defense methods with respect to data privacy and model security by carrying out label inference attacks and backdoor attacks against vertical federated learning.
To achieve the above object, the present invention provides the following technical solution:
A data privacy and model security testing method for vertical federated learning, comprising the following steps:
(1) A label inference attack step, which includes the following:
S101. The attacker participates in the training process of vertical federated learning as a malicious client.
S102. The attacker steals the label information of the data through an intermediate-result replacement method.
S103. The attacker adjusts dynamically to carry out a covert label inference attack. In a given training round, given a set of samples T inferred through the label inference attack to carry the target label, the attacker selects the sample with the smallest returned gradient, i.e. $x_*^a = \arg\min_{x_i^a \in T} \|g_i\|_2$, and uses its embedding in the subsequent replacement work.
(2) A backdoor attack step, which includes the following:
S201. Obtain the label information of the data through the label inference attack.
S202. Design the trigger.
S203. Add the trigger to the intermediate results of the current batch that carry the target label, while adopting random strategies to strengthen the attack effect.
S204. Mix the poisoned data with other normal data and transmit it to the server.
S205. Adjust the learning rate and update the model with the gradient information returned by the server; judge whether the number of training rounds required by the backdoor attack has been reached. If it has, terminate the backdoor attack; otherwise, continue with S203-S205.
In the above method, optionally, the specific steps in step S101 by which the attacker participates in the training process of vertical federated learning as a malicious client are as follows:
Each client feeds its local data into the local model it holds to obtain the intermediate result output by the local model, and transmits the intermediate result to the server. The server aggregates the received intermediate results, feeds the aggregated result into the server-side model, and obtains the model's final prediction. The server computes the loss function, performs backpropagation, and passes the resulting gradient information to the corresponding clients. Each client updates its own local model according to the received gradient information. The training steps in S101 are repeated for several rounds until the model gradually converges.
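Purely for illustration, the following minimal PyTorch sketch walks through one such training round; the two-client split, model sizes, aggregation by concatenation, and optimizer settings are assumptions made for the example, not prescriptions of the invention.

```python
# Minimal sketch of one vertical-federated-learning round
# (illustrative assumptions: two clients, linear models, toy sizes).
import torch
import torch.nn as nn

torch.manual_seed(0)
clients = [nn.Linear(10, 8), nn.Linear(10, 8)]          # local (bottom) models
server = nn.Linear(16, 2)                               # server-side (top) model
opts = [torch.optim.SGD(m.parameters(), lr=0.1) for m in clients + [server]]
x_parts = [torch.randn(4, 10), torch.randn(4, 10)]      # vertically split features
y = torch.randint(0, 2, (4,))                           # labels held only by the server

# 1) Each client computes and "uploads" its intermediate result (embedding).
h = [m(x) for m, x in zip(clients, x_parts)]
# 2) The server aggregates (here: concatenation), predicts, and computes the loss.
logits = server(torch.cat(h, dim=1))
loss = nn.functional.cross_entropy(logits, y)
# 3) Backpropagation; the gradients w.r.t. each h_i are what the clients receive.
for o in opts:
    o.zero_grad()
loss.backward()          # in a real deployment only grad(h_i) crosses the network
# 4) Every party applies its local update.
for o in opts:
    o.step()
print(f"round loss: {loss.item():.4f}")
```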
In the above method, optionally, the specific steps in step S102 by which the attacker steals the label information of the data through the intermediate-result replacement method are as follows:
During normal training, the attacker transmits the correct intermediate result $h_i = f_a(x_i^a)$ to the server and receives the gradient information $g_i$ that the server sends back, where $f_a$ is the local model held by the attacker, $x_i^a$ is a local sample with unknown label owned by the attacker, and $y_i$ is the label of this sample, which is unknown to the attacker. The next time the attacker is about to transmit the same data $x_i^a$ to the server, the attacker first performs a screening step to determine whether the unknown label $y_i$ is likely to be the label $y_t$ that the attacker is interested in.
In the above method, optionally, the label of $x_i^a$ is judged to be the target label $y_t$ when the gradients returned before and after the replacement described below, $g_i$ and $\tilde g_i$, satisfy a two-threshold test of the form
$$\|\tilde g_i\|_2 < \theta \cdot \mu \qquad (1)$$
$$\|\tilde g_i\|_2 < \|g_i\|_2 \qquad (2)$$
where $\|\cdot\|_2$ denotes the L2 norm and $\theta$ and $\mu$ are two threshold parameters.
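A minimal sketch of this test, assuming the two-condition form given above (the function name and defaults are illustrative):

```python
# Hypothetical sketch of the two-threshold label-inference test; the exact
# inequalities are an assumption consistent with the surrounding description.
import numpy as np

def infer_target_label(g_before: np.ndarray, g_after: np.ndarray,
                       theta: float = 5.0, mu: float = 0.01) -> bool:
    """Return True if the sample is inferred to carry the target label y_t."""
    n_before = np.linalg.norm(g_before)   # ||g_i||_2, gradient for the true embedding
    n_after = np.linalg.norm(g_after)     # ||g~_i||_2, gradient after the replacement
    # A small post-replacement gradient suggests the swapped-in target-label
    # embedding is still classified "correctly", i.e. y_i == y_t.
    return (n_after < theta * mu) and (n_after < n_before)
```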
In the above method, optionally, step S102 further includes:
Using all the data with currently known labels to train a binary classifier H, with the data $x_t^a$ having the target label $y_t$ as positive samples and the data $x_{\bar t}^a$ not having the target label $y_t$ as negative samples, where $x_t^a$ is local data known to the attacker to have the target label $y_t$ and $x_{\bar t}^a$ is local data known to the attacker not to have the target label.
In the above method, optionally, step S102 further includes:
The attacker feeds all samples within the same batch into the binary classifier for label prediction, selects the top n samples with the highest prediction scores for the subsequent intermediate-result replacement, and, using the final label inference results, also adds these samples to the training set of the binary classifier.
In the above method, optionally, step S102 further includes:
For the samples selected by the screening step, the attacker replaces $h_i = f_a(x_i^a)$ with the intermediate result $h_t = f_a(x_t^a)$ of other data with known labels, transmits it to the server, and obtains the gradient information $\tilde g_i$ that the server sends back, where $y_t$ is the label information the attacker is interested in and $x_t^a$ is local data known to the attacker to have the label $y_t$.
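As an illustration of the replacement itself, the sketch below assumes the attacker uploads a whole batch of embeddings as one array; the layout and names are assumptions for the example.

```python
# Illustrative embedding replacement within one uploaded batch.
import numpy as np

def replace_embeddings(batch_h: np.ndarray, candidate_idx: np.ndarray,
                       h_t: np.ndarray) -> np.ndarray:
    """Swap the screened candidates' embeddings for the known-target h_t."""
    out = batch_h.copy()
    out[candidate_idx] = h_t        # the server now sees h_t in place of h_i
    return out
```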
In the above method, optionally, the specific steps of designing the trigger in step S202 are as follows:
A superimposed trigger E is used, and the data poisoning process is defined as
$$\tilde h_i = h_i \oplus E$$
where $\oplus$ denotes element-wise addition, and the trigger E can be expressed as
$$E = \beta \cdot (M \odot \Delta)$$
where M is the trigger mask, taking value 1 in the trigger region and 0 elsewhere, $\odot$ is element-wise multiplication, $\beta$ is a parameter controlling the trigger magnitude, and $\Delta = [+\delta, +\delta, -\delta, -\delta, \cdots, +\delta, +\delta, -\delta, -\delta]$.
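A minimal sketch of this trigger construction, assuming a one-dimensional embedding and an index set describing the trigger region (both assumptions for the example):

```python
# Sketch of the additive trigger E = beta * (M ⊙ Δ); sizes are illustrative.
import numpy as np

def build_trigger(dim: int, trigger_idx: np.ndarray, beta: float = 0.4,
                  delta: float = 1.0) -> np.ndarray:
    """Build E over an embedding of length `dim`, active on `trigger_idx`."""
    mask = np.zeros(dim)
    mask[trigger_idx] = 1.0                       # M: 1 inside the trigger region
    # Δ: the pattern +δ, +δ, -δ, -δ, +δ, +δ, ... over the whole embedding
    pattern = np.array([+delta, +delta, -delta, -delta])
    delta_vec = np.tile(pattern, dim // 4 + 1)[:dim]
    return beta * mask * delta_vec                # E = β (M ⊙ Δ)

def poison(h: np.ndarray, E: np.ndarray) -> np.ndarray:
    return h + E                                  # h~ = h ⊕ E (element-wise add)
```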
In the above method, optionally, the following two random strategies are used in step S203 to strengthen the attack effect: the random strategy Dropout and the random strategy Shifting.
It can be seen from the above technical solution that, compared with the prior art, the present invention discloses a data privacy and model security testing method for vertical federated learning with the following beneficial effects:
(1) By carrying out label inference attacks and backdoor attacks against vertical federated learning, it tests the defense capability of vertical federated learning algorithms and the corresponding defense methods with respect to data privacy and model security.
(2) The testing tool includes two modules, a label inference attack module and a backdoor attack module, which test whether a malicious participant can successfully steal label information and implant a backdoor into the model under the current vertical federated learning algorithm and defense method, thereby realizing a malicious attack.
(3) The label inference attack module tests the privacy protection capability of vertical federated learning algorithms and the corresponding defense methods with respect to label information by carrying out the simple and efficient intermediate-result replacement method we designed. The backdoor attack module realizes a covert and efficient backdoor attack through trigger design, backdoor enhancement, and learning-rate adjustment, thereby testing the model security risks of vertical federated learning algorithms and the corresponding defense methods.
Brief Description of the Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention or in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are merely embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from the provided drawings without creative effort.
Fig. 1 is an example of the present invention testing the privacy protection and model security capabilities of vertical federated learning algorithms and the corresponding defense methods;
Fig. 2 is a schematic workflow diagram of the label inference attack module of the present invention;
Fig. 3 is a schematic workflow diagram of the backdoor attack module of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Referring to Fig. 1, a data privacy and model security testing method for vertical federated learning includes the following steps:
(1) A label inference attack step, which includes the following:
S101. The attacker participates in the training process of vertical federated learning as a malicious client.
S102. The attacker steals the label information of the data through an intermediate-result replacement method.
S103. The attacker adjusts dynamically to carry out a covert label inference attack. In a given training round, given a set of samples T inferred through the label inference attack to carry the target label, the attacker selects the sample with the smallest returned gradient, i.e. $x_*^a = \arg\min_{x_i^a \in T} \|g_i\|_2$, and uses its embedding in the subsequent replacement work.
(2) A backdoor attack step, which includes the following:
S201. Obtain the label information of the data through the label inference attack.
S202. Design the trigger.
S203. Add the trigger to the intermediate results of the current batch that carry the target label, while adopting random strategies to strengthen the attack effect.
S204. Mix the poisoned data with other normal data and transmit it to the server.
S205. Adjust the learning rate and update the model with the gradient information returned by the server; judge whether the number of training rounds required by the backdoor attack has been reached. If it has, terminate the backdoor attack; otherwise, continue with S203-S205.
Further, the specific steps in step S101 by which the attacker participates in the training process of vertical federated learning as a malicious client are as follows:
Each client feeds its local data into the local model it holds to obtain the intermediate result output by the local model, and transmits the intermediate result to the server. The server aggregates the received intermediate results, feeds the aggregated result into the server-side model, and obtains the model's final prediction. The server computes the loss function, performs backpropagation, and passes the resulting gradient information to the corresponding clients. Each client updates its own local model according to the received gradient information. The training steps in S101 are repeated for several rounds until the model gradually converges.
Furthermore, the specific steps in step S102 by which the attacker steals the label information of the data through the intermediate-result replacement method are as follows:
During normal training, the attacker transmits the correct intermediate result $h_i = f_a(x_i^a)$ to the server and receives the gradient information $g_i$ that the server sends back, where $f_a$ is the local model held by the attacker, $x_i^a$ is a local sample with unknown label owned by the attacker, and $y_i$ is the label of this sample, which is unknown to the attacker. The next time the attacker is about to transmit the same data $x_i^a$ to the server, the attacker first performs a screening step to determine whether the unknown label $y_i$ is likely to be the label $y_t$ that the attacker is interested in.
Further, the label of $x_i^a$ is judged to be the target label $y_t$ by the test of formulas (1) and (2):
$$\|\tilde g_i\|_2 < \theta \cdot \mu \qquad (1)$$
$$\|\tilde g_i\|_2 < \|g_i\|_2 \qquad (2)$$
where $\|\cdot\|_2$ denotes the L2 norm and $\theta$ and $\mu$ are two threshold parameters.
Furthermore, step S102 further includes:
Using all the data with currently known labels to train a binary classifier H, with the data $x_t^a$ having the target label $y_t$ as positive samples and the data $x_{\bar t}^a$ not having the target label $y_t$ as negative samples, where $x_t^a$ is local data known to the attacker to have the target label $y_t$ and $x_{\bar t}^a$ is local data known to the attacker not to have the target label.
Still further, step S102 further includes:
The attacker feeds all samples within the same batch into the binary classifier for label prediction, selects the top n samples with the highest prediction scores for the subsequent intermediate-result replacement, and, using the final label inference results, also adds these samples to the training set of the binary classifier.
Further, step S102 further includes:
For the samples selected by the screening step, the attacker replaces $h_i = f_a(x_i^a)$ with the intermediate result $h_t = f_a(x_t^a)$ of other data with known labels, transmits it to the server, and obtains the gradient information $\tilde g_i$ that the server sends back, where $y_t$ is the label information the attacker is interested in and $x_t^a$ is local data known to the attacker to have the label $y_t$.
Furthermore, the specific steps of designing the trigger in step S202 are as follows:
A superimposed trigger E is used, and the data poisoning process is defined as
$$\tilde h_i = h_i \oplus E$$
where $\oplus$ denotes element-wise addition, and the trigger E can be expressed as
$$E = \beta \cdot (M \odot \Delta)$$
where M is the trigger mask, taking value 1 in the trigger region and 0 elsewhere, $\odot$ is element-wise multiplication, $\beta$ is a parameter controlling the trigger magnitude, and $\Delta = [+\delta, +\delta, -\delta, -\delta, \cdots, +\delta, +\delta, -\delta, -\delta]$.
Further, in step S203 the following two random strategies are adopted to strengthen the attack effect: the random strategy Dropout and the random strategy Shifting.
Referring to Fig. 2, the label inference attack module includes the following steps:
S101. The attacker participates in the training process of vertical federated learning as a malicious client:
Each client feeds its local data into the local model it holds to obtain the intermediate result output by the local model, and transmits the intermediate result to the server. The server aggregates the received intermediate results, feeds the aggregated result into the server-side model, and obtains the model's final prediction. The server computes the loss function, performs backpropagation, and passes the resulting gradient information to the corresponding clients. Each client updates its own local model according to the received gradient information. The above steps are repeated for several rounds until the model gradually converges.
S102. The attacker carries out the intermediate-result replacement method to steal the label information of the data, which specifically includes the following steps:
During normal training, the attacker transmits the correct intermediate result $h_i = f_a(x_i^a)$ to the server and receives the gradient information $g_i$ that the server sends back, where $f_a$ is the local model held by the attacker, $x_i^a$ is a local sample with unknown label owned by the attacker, and $y_i$ is the label of this sample, which is unknown to the attacker.
The next time the attacker is about to transmit the same data $x_i^a$ to the server, the attacker first performs a screening step to determine whether the unknown label $y_i$ is likely to be the label $y_t$ that the attacker is interested in, which specifically includes the following steps:
(1) Use all the data with currently known labels to train a binary classifier H, with the data $x_t^a$ having the target label $y_t$ as positive samples and the data $x_{\bar t}^a$ not having the target label $y_t$ as negative samples, where $x_t^a$ is local data known to the attacker to have the target label $y_t$ and $x_{\bar t}^a$ is local data known to the attacker not to have the target label.
(2) The attacker feeds all samples within the same batch into the binary classifier H for label prediction, selects the top n samples with the highest prediction scores given by H for the subsequent intermediate-result replacement, and, using the final label inference results, also adds these samples to the training set of the binary classifier H.
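As an illustration of this screening step, the sketch below trains H with scikit-learn on attacker-side embeddings and ranks a new batch; the classifier family, feature choice, and all sizes are assumptions for the example.

```python
# Illustrative screening sketch: train the binary classifier H on embeddings of
# samples whose labels the attacker already knows, then rank a new batch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
emb_pos = rng.normal(1.0, 1.0, (30, 8))    # embeddings known to have label y_t
emb_neg = rng.normal(-1.0, 1.0, (70, 8))   # embeddings known to have other labels
H = LogisticRegression(max_iter=1000)
H.fit(np.vstack([emb_pos, emb_neg]),
      np.r_[np.ones(len(emb_pos)), np.zeros(len(emb_neg))])

batch = rng.normal(0.0, 1.5, (64, 8))      # embeddings of the current batch
scores = H.predict_proba(batch)[:, 1]      # predicted probability of carrying y_t
n = 8
candidates = np.argsort(scores)[-n:]       # top-n samples to replace next
print("candidate batch indices:", candidates)
```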
For the samples selected by the screening step, the attacker replaces $h_i = f_a(x_i^a)$ with the intermediate result $h_t = f_a(x_t^a)$ of other data with known labels, transmits it to the server, and obtains the gradient information $\tilde g_i$ that the server sends back, where $y_t$ is the label information the attacker is interested in, that is, the attacker wants to know which data carry the label $y_t$, and $x_t^a$ is local data known to the attacker to have the label $y_t$.
By comparing the gradient change before and after the replacement, whether the label of $x_i^a$ is the target label $y_t$ is determined through formulas (1) and (2):
$$\|\tilde g_i\|_2 < \theta \cdot \mu \qquad (1)$$
$$\|\tilde g_i\|_2 < \|g_i\|_2 \qquad (2)$$
where $\|\cdot\|_2$ denotes the L2 norm and $\theta$ and $\mu$ are two threshold parameters.
S103. To avoid triggering a server alarm by repeatedly and heavily using the static embedding $h_t$ for the label swapping, the attacker adjusts dynamically to carry out a covert label inference attack. In a given training round, given a set of samples T inferred through the label inference attack to have the label $y_t$, the attacker selects the sample with the smallest returned gradient, i.e. $x_*^a = \arg\min_{x_i^a \in T} \|g_i\|_2$, and uses its embedding in the subsequent replacement work.
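A sketch of this dynamic adjustment, assuming the attacker has recorded the gradient returned for each sample in T (array layout and names are illustrative):

```python
# Illustrative dynamic adjustment: instead of always swapping in the static
# embedding h_t, pick from T the inferred-target sample with the smallest
# returned gradient and use its embedding for the next replacements.
import numpy as np

def pick_replacement(T_embeddings: np.ndarray, T_grads: np.ndarray) -> np.ndarray:
    """T_embeddings: (k, d) embeddings of samples inferred to have label y_t;
    T_grads: (k, d) gradients the server returned for those samples."""
    norms = np.linalg.norm(T_grads, axis=1)      # ||g_i||_2 for each sample in T
    best = int(np.argmin(norms))                 # most confidently target-labeled
    return T_embeddings[best]                    # embedding used for future swaps
```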
Referring to Fig. 3, the backdoor attack module includes the following steps:
S201. Construct the trigger: a superimposed trigger E is used, and the data poisoning process is defined as
$$\tilde h_i = h_i \oplus E$$
where $\oplus$ denotes element-wise addition, and the trigger E can be expressed as
$$E = \beta \cdot (M \odot \Delta)$$
where M is the trigger mask, taking value 1 in the trigger region and 0 elsewhere, $\odot$ is element-wise multiplication, $\beta$ is a parameter controlling the trigger magnitude, and $\Delta = [+\delta, +\delta, -\delta, -\delta, \cdots, +\delta, +\delta, -\delta, -\delta]$ (every two positive values are followed by two negative values, and every two negative values by two positive values, until the end). Note that the trigger is added directly to the intermediate result $h_i$, not to the original data sample: since the attacker, as a client participant in vertical federated learning, only needs to upload intermediate results to the server, superimposing the trigger onto the intermediate result enables a more effective backdoor attack.
S202. Backdoor injection in vertical federated learning is harder than injecting an ordinary backdoor, because the attacker cannot control the intermediate results coming from the other, benign participants. To strengthen backdoor injection, the present invention introduces randomness into the poisoned data during training, so as to improve the performance of the backdoor attack when the vertical federated learning model is tested. Accordingly, the present invention adopts two randomization strategies for backdoor enhancement, sketched after item (2) below.
(1) The first random strategy, Dropout: inspired by the dropout method commonly used to mitigate overfitting, during backdoor implantation the attacker randomly zeroes out some of the elements of the trigger mask each time. Evaluation shows that the Dropout strategy makes this backdoor attack method more robust against backdoor defenses based on trigger elimination.
(2) The second random strategy, Shifting: the trigger mask M is randomly multiplied by a random number $\gamma$ uniformly distributed in the range $[\underline{\gamma}, \bar{\gamma}]$, to slightly change the trigger magnitude, where $\underline{\gamma}$ is the lower bound of the random number $\gamma$ and $\bar{\gamma}$ is its upper bound.
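A combined sketch of the two randomization strategies, reusing the illustrative trigger layout from the earlier sketch; interpreting the dropout ratio as the keep probability is an assumption.

```python
# Illustrative Dropout + Shifting randomization of the trigger mask M.
import numpy as np

rng = np.random.default_rng(0)

def randomize_mask(mask: np.ndarray, dropout_ratio: float = 0.75,
                   gamma_low: float = 0.6, gamma_high: float = 1.2) -> np.ndarray:
    m = mask.copy()
    active = np.flatnonzero(m)
    # Dropout: randomly zero some trigger-region elements (assumed: each element
    # is kept with probability dropout_ratio).
    drop = rng.random(active.size) > dropout_ratio
    m[active[drop]] = 0.0
    # Shifting: scale the whole mask by gamma ~ U(gamma_low, gamma_high).
    return m * rng.uniform(gamma_low, gamma_high)
```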
S203. In the process of updating model parameters with the gradient information, the attacker appropriately increases the learning rate of the local model it holds, changing the local model's convergence speed; this strengthens the influence of the attacker-held part of the model on the final classification result and further strengthens the influence of the poisoned data on the server-side model.
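In PyTorch terms this step can be illustrated as follows; the amplification factor is an assumption, since the description only says the learning rate is "appropriately" increased.

```python
# Illustrative learning-rate amplification for the attacker's local optimizer.
import torch
import torch.nn as nn

attacker_model = nn.Linear(10, 8)
opt = torch.optim.SGD(attacker_model.parameters(), lr=0.1)
AMPLIFY = 2.0   # assumed factor; not specified by the description
for group in opt.param_groups:
    group["lr"] *= AMPLIFY   # speeds up the attacker's local convergence
```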
The implementation of the label inference attack and the backdoor attack is described in detail below, taking as an example a scenario in which two client models and one server model participate in vertical federated learning. Tests are carried out on the image datasets MNIST, CIFAR-10, ImageNette, and CINIC-10 and on the Bank Marketing (BM) and Give-Me-Some-Credit (GM) tabular datasets (structured datasets). The model chosen for each dataset is listed in Table 1:
Table 1
The specific implementation includes the following steps:
Label inference attack module, which includes the following steps:
For MNIST, CIFAR-10, ImageNette, CINIC-10, BM, and GM, the number of samples per training batch is 128, 128, 50, 64, 100, and 1000, respectively, and the chosen number of embedding swaps is n = 14, 14, 10, 8, 6, and 40, respectively. The threshold μ in the label inference attack is set from the average gradient L2 norm, because the gradient L2 norm of misclassified samples is usually larger than the average. In practice μ = 0.01 was found to be suitable across the different datasets. In this implementation θ is set to 5 and is adjustable: if a smaller θ is chosen, the selection of target-label samples becomes stricter, but some target-label samples may be missed.
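Purely as an organizational aid, the per-dataset settings quoted above can be collected in a small configuration structure (the structure itself is not part of the invention):

```python
# Per-dataset settings quoted above, organized as a dict for the sketches.
LABEL_INFERENCE_CFG = {
    #  dataset       batch size   n (embedding swaps)
    "MNIST":       {"batch": 128,  "n": 14},
    "CIFAR-10":    {"batch": 128,  "n": 14},
    "ImageNette":  {"batch": 50,   "n": 10},
    "CINIC-10":    {"batch": 64,   "n": 8},
    "BM":          {"batch": 100,  "n": 6},
    "GM":          {"batch": 1000, "n": 40},
}
THETA, MU = 5.0, 0.01   # thresholds used in formulas (1) and (2)
```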
S1. The attacker carries out the intermediate-result replacement method to steal the label information of the data, which specifically includes the following steps:
S101. During normal training, the attacker transmits the correct intermediate result $h_i = f_a(x_i^a)$ to the server and receives the gradient information $g_i$ that the server sends back, where $f_a$ is the local model held by the attacker, $x_i^a$ is a local sample with unknown label owned by the attacker, and $y_i$ is the label of this sample, which is unknown to the attacker.
S102. The next time the attacker is about to transmit the same data $x_i^a$ to the server, the attacker first performs a screening step to determine whether the unknown label $y_i$ is likely to be the label $y_t$ that the attacker is interested in, which specifically includes the following steps:
(1) Use all the data with currently known labels to train a binary classifier H, with the data $x_t^a$ having the target label $y_t$ as positive samples and the data $x_{\bar t}^a$ not having the target label $y_t$ as negative samples, where $x_t^a$ is local data known to the attacker to have the target label $y_t$ and $x_{\bar t}^a$ is local data known to the attacker not to have the target label.
(2) The attacker feeds all samples within the same batch into the binary classifier H for label prediction, selects the top n samples with the highest prediction scores given by H for the subsequent intermediate-result replacement, and, using the final label inference results, also adds these samples to the training set of the binary classifier H.
S103. For the samples selected by the screening step, the attacker replaces $h_i = f_a(x_i^a)$ with the intermediate result $h_t = f_a(x_t^a)$ of other data with known labels, transmits it to the server, and obtains the gradient information $\tilde g_i$ that the server sends back, where $y_t$ is the label information the attacker is interested in, that is, the attacker wants to know which data carry the label $y_t$, and $x_t^a$ is local data known to the attacker to have the label $y_t$.
S104. By comparing the gradient change before and after the replacement, whether the label of $x_i^a$ is the target label $y_t$ is determined through formulas (1) and (2):
$$\|\tilde g_i\|_2 < \theta \cdot \mu \qquad (1)$$
$$\|\tilde g_i\|_2 < \|g_i\|_2 \qquad (2)$$
where $\|\cdot\|_2$ denotes the L2 norm and $\theta$ and $\mu$ are the two threshold parameters.
S2. To avoid triggering a server alarm by repeatedly and heavily using the static embedding $h_t$ for the label swapping, the attacker adjusts dynamically to carry out a covert label inference attack. In a given training round, given a set of samples T inferred through the label inference attack to have the label $y_t$, the attacker selects the sample with the smallest returned gradient, i.e. $x_*^a = \arg\min_{x_i^a \in T} \|g_i\|_2$, and uses its embedding in the subsequent replacement work.
Following the above steps, the present invention was tested on each dataset, performing label inference on them; the results are shown in Table 2:
Table 2
It can be seen that the label inference algorithm proposed by the present invention achieves high accuracy on every dataset. On MNIST, CIFAR-10, CINIC-10, BM, and GM, the label inference attack accuracy is greater than 96%. On the more complex ImageNette dataset the attack accuracy drops slightly but is still above 92%. In addition, the label inference attack algorithm proposed by the present invention is very time-efficient, because it needs only one round of training (a few minutes) to infer labels, whereas previous label inference attacks require more time to train a semi-supervised model before the subsequent inference task can be carried out.
Backdoor attack module, which includes the following steps:
For all datasets the poisoning rate is 1%, and the default value of β is 0.4 so as to keep the trigger covert; $\underline{\gamma}$ is set to 0.6 and $\bar{\gamma}$ is set to 1.2. All labels are attacked in order to test the overall effectiveness, and the dropout ratio used is 0.75.
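Likewise, the backdoor-attack settings quoted above can be collected as an organizational sketch:

```python
# Backdoor-attack settings quoted above (organizational sketch only).
BACKDOOR_CFG = {
    "poison_rate": 0.01,      # 1% of training samples poisoned
    "beta": 0.4,              # trigger-magnitude parameter β
    "gamma_low": 0.6,         # lower bound of the Shifting factor γ
    "gamma_high": 1.2,        # upper bound of the Shifting factor γ
    "dropout_ratio": 0.75,    # ratio used by the Dropout strategy
}
```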
S1. Construct the trigger: a superimposed trigger E is used, and the data poisoning process is defined as
$$\tilde h_i = h_i \oplus E$$
where $\oplus$ denotes element-wise addition, and the trigger E can be expressed as
$$E = \beta \cdot (M \odot \Delta)$$
where M is the trigger mask, taking value 1 in the trigger region and 0 elsewhere, $\odot$ is element-wise multiplication, $\beta$ is a parameter controlling the trigger magnitude, and $\Delta = [+\delta, +\delta, -\delta, -\delta, \cdots, +\delta, +\delta, -\delta, -\delta]$ (every two positive values are followed by two negative values, and every two negative values by two positive values, until the end). Note that the trigger is added directly to the intermediate result $h_i$, not to the original data sample: since the attacker, as a client participant in vertical federated learning, only needs to upload intermediate results to the server, superimposing the trigger onto the intermediate result enables a more effective backdoor attack.
S2. To strengthen backdoor injection, the attacker introduces randomness into the poisoned data during training, so as to improve the performance of the backdoor attack when the vertical federated learning model is tested. Accordingly, the attacker adopts two randomization strategies for backdoor enhancement.
S201. The first random strategy, Dropout: inspired by the dropout method commonly used to mitigate overfitting, during backdoor implantation the attacker randomly zeroes out some of the elements of the trigger mask each time. Evaluation shows that the Dropout strategy makes this backdoor attack more robust against backdoor defenses based on trigger elimination.
S202. The second random strategy, Shifting: the trigger mask M is randomly multiplied by a random number $\gamma$ uniformly distributed in the range $[\underline{\gamma}, \bar{\gamma}]$, to slightly change the trigger magnitude, where $\underline{\gamma}$ is the lower bound of the random number $\gamma$ and $\bar{\gamma}$ is its upper bound.
S3. In the process of updating model parameters with the gradient information, the attacker appropriately increases the learning rate of the local model it holds, changing the local model's convergence speed; this strengthens the influence of the attacker-held part of the model on the final classification result and further strengthens the influence of the poisoned data on the server-side model.
The various embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the identical or similar parts among the embodiments, reference may be made to one another. As for the device disclosed in the embodiments, since it corresponds to the method disclosed in the embodiments, its description is relatively brief, and the relevant parts may refer to the description of the method.
The above description of the disclosed embodiments enables those skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (9)
Priority Applications (1)
- CN202310619376.XA (priority date 2023-05-29, filed 2023-05-29): Data privacy and model safety test method for longitudinal federal learning
Publications (1)
- CN116644433A, published 2023-08-25
Family Applications (1)
- CN202310619376.XA, filed 2023-05-29, published as CN116644433A, status pending
Cited By (2)
- CN117150422A / CN117150422B (Data Space Research Institute): Label inference attack method based on sample exchange in longitudinal federal learning system
- CN118366010A (Zhejiang University): A model backdoor attack vulnerability analysis method and system for segmentation learning
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination