CN115809698A - Black-box evasion graph injection attack method for graph neural networks - Google Patents
- Publication number: CN115809698A
- Application number: CN202211580567.1A
- Authority: CN (China)
- Prior art keywords: node, sur, nodes, graph, injection
- Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classification
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention discloses a black-box evasion graph injection attack method for graph neural networks, which comprises the following steps. Step 1: obtain a surrogate graph dataset G_sur. Step 2: train a surrogate model f_sur. Step 3: obtain the original victim graph dataset G. Step 4: use the surrogate model f_sur to generate a label set L for the original victim graph dataset G. Step 5: compute the candidate neighbor node set V_nei for the injected node set V_inj. Step 6: build the topological connections between the nodes in V_inj and the original victim graph dataset G. Step 7: adaptively optimize the injected-node features X_inj. Step 8: attack the victim model f_vit. The method preserves both attack performance and attack concealment, and improves the attacker's performance on the node classification task of a graph neural network model.
Description
Technical Field
The invention relates to the technical field of deep learning, and in particular to a black-box evasion graph injection attack method for graph neural networks.
Background Art
A graph is a non-Euclidean data structure composed of nodes and the connections between them, and it can represent complex coupling relationships. Many real-life scenarios can be modeled with graph structures, such as social platforms, transportation systems, recommender systems, and financial risk assessment systems. Deep learning models have strong learning ability and have been widely studied in fields such as natural language processing and computer vision; however, because of the non-Euclidean nature of graphs, traditional deep learning models designed for Euclidean data perform poorly on graph-structured data. A large body of work has therefore been devoted to developing Graph Neural Networks (GNNs) that effectively integrate the node features and topological information of graph data and capture the potential data flow between nodes.
Although graph neural networks have made good progress, they are vulnerable to adversarial attacks: small but carefully designed perturbations can cause a large drop in the performance of a graph neural network model. Much previous work has studied Graph Modification Attacks (GMA), which try to mislead the model's predictions by adding small perturbations to the existing nodes or topology of the graph, for example by adding malicious edges between nodes of different classes or tampering with the attributes of some nodes. In many real-world scenarios, however, modifying data in the original graph requires very high operating privileges, so more and more attention has turned to the Graph Injection Attack (GIA), which better matches realistic scenarios: the attacker generates a small number of malicious nodes and connects them to nodes in the original graph. For example, tampering with an existing user's comments on a social network requires high privileges, whereas registering a new user to post malicious remarks and steer public opinion is relatively simple. Research on graph injection attack methods is still in its infancy, however, and attackers usually find it difficult to balance attack performance against their own concealment.
Therefore, a black-box evasion graph injection attack method for graph neural networks is needed that lets the attacker, in the black-box test phase, inject malicious nodes in a targeted manner according to the topology and node features of the graph data, while improving the concealment of the malicious nodes as their features are optimized. Such a method balances the attacker's destructiveness and concealment and improves the attack performance on the node classification task of a graph neural network model.
Summary of the Invention
The purpose of the present invention is to address the inability of existing black-box evasion graph injection attack techniques to balance the attacker's destructiveness and concealment, by providing a black-box evasion graph injection attack method for graph neural networks. The method preserves both attack performance and attack concealment, and improves the attacker's performance on the node classification task of a graph neural network model.
The technical solution that achieves the object of the present invention is as follows.
A black-box evasion graph injection attack method for graph neural networks comprises the following steps:
Step 1: obtain a surrogate graph dataset G_sur. The surrogate graph dataset G_sur = (V_sur, E_sur, X_sur, L_sur) is obtained from channels such as crowdsourcing platforms, where V_sur is the node set containing all nodes of G_sur, E_sur is the edge set containing all edges between nodes, X_sur is the feature matrix obtained by concatenating the features of all nodes in V_sur, and L_sur is the label set composed of the label of each node in V_sur.
Step 2: train a surrogate model f_sur. A representative graph neural network model is chosen as the surrogate model f_sur, which is then trained on the surrogate dataset G_sur until it converges.
Step 3: obtain the original victim graph dataset G. According to the attack target, the original victim graph dataset G = (V, E, X, L) is obtained from channels such as crowdsourcing platforms, where V is the node set containing all nodes of G, E is the edge set containing all edges between nodes, X is the feature matrix obtained by concatenating the features of all nodes in V, and L is the label set composed of the label of each node in V. Because the label set L of the original victim graph dataset G cannot be obtained under the black-box setting, L is initially the empty set.
Step 4: use the surrogate model f_sur to generate a label set L for the original victim graph dataset G. Since the label set L of G cannot be obtained under the black-box setting, the converged surrogate model f_sur is used to predict the labels of all nodes in G, and the labels predicted by f_sur are taken as the label set L.
Step 5: compute the candidate neighbor node set V_nei for the injected node set V_inj. Based on the topology and node attributes of the original victim graph dataset G, the information domain (ID) and classification margin (CM) of each node are computed; from ID and CM, the final adversarial fragility (AF) of each node is computed, and the most adversarially fragile nodes of G are selected as the candidate neighbor node set V_nei for the injected nodes.
Step 6: build the topological connections between the nodes in the injected node set V_inj and the original victim graph dataset G. After the candidate set V_nei is determined, it is further partitioned at a fine granularity based on the noise class of its nodes and the edge attack budget E_inj of the injected nodes: every E_inj nodes of the same noise class are grouped into a cluster, and each cluster is treated as the candidate neighbor set of one injected node. To enhance the concealment of the injected nodes, each node in V_nei may serve as a candidate neighbor of only one injected node.
Step 7: adaptively optimize the injected-node features X_inj. The features of the injected nodes are optimized with a C&W loss, and the optimized features are constrained so that they do not exceed the range of the original feature domain. Once the feature optimization is complete, the perturbed graph G_per containing the injected nodes is obtained.
Step 8: attack the victim model f_vit. In the test phase, and without changing the model parameters of the victim model f_vit, the victim model takes the perturbed graph G_per as input and performs its downstream task.
The advantages and beneficial effects of this technical solution are:
1. This technical solution attacks the graph neural network by injecting malicious nodes, without modifying the attributes or topology of nodes that already exist in the original graph, which makes the attack closer to real-world scenarios.
2. This technical solution fully accounts for both the destructiveness and the concealment of the attack during the topology construction and feature optimization of the injected nodes; when attacking graph neural network models with some defensive capability, its attack performance is clearly better than that of existing attack baseline models.
3. By computing the adversarial fragility of nodes, this technical solution narrows the search space of subsequent operations and saves time and space, so the attack can be applied to large-scale graphs.
The method improves the concealment of attacks against graph neural networks while achieving good attack accuracy.
Brief Description of the Drawings
Fig. 1 is a flowchart of the embodiment.
Detailed Description
The present invention is described in further detail below with reference to the accompanying drawing and a specific embodiment, which are not intended to limit the invention.
Embodiment:
Social platforms are among the networks people interact with most frequently in the information age. Because of their wide audience and fast information dissemination, the information on social platforms has great influence on the direction of public opinion and the construction of social values. A social platform based on a graph neural network can use user feature information and the follow relationships between users to build a user relationship network, effectively integrating the user features and follow relationships in the platform's data and capturing the potential data flow between users.
Taking a social platform based on a graph neural network as an example, in this embodiment an attacker injects a small number of malicious users into the platform to degrade the data mining performance of the graph neural network on the platform.
Referring to Fig. 1, the black-box evasion graph injection attack method for graph neural networks comprises the following steps:
Step 1: obtain a surrogate social platform user relationship graph dataset G_sur. The surrogate graph dataset G_sur = (V_sur, E_sur, X_sur, L_sur) is obtained from channels such as crowdsourcing platforms, where V_sur is the node set containing all users of G_sur, E_sur is the edge set representing the follow relationships between users, X_sur is the feature matrix obtained by concatenating the features of all users in V_sur, and L_sur is the label set composed of the label of each node in V_sur.
Step 2: train a surrogate model f_sur. A representative graph neural network model is chosen as the surrogate model f_sur, for example a graph convolutional network (GCN) or a graph attention network (GAT); the surrogate dataset G_sur is split into a training set, a validation set, and a test set, and f_sur is trained on G_sur until it converges.
Step 3: obtain the original victim social platform user relationship graph dataset G. According to the attack target, the original victim graph dataset G = (V, E, X, L) is obtained from channels such as crowdsourcing platforms, where V is the node set containing all users of G, E is the edge set representing the follow relationships between users, X is the feature matrix obtained by concatenating the features of all users in V, and L is the label set composed of the label of each node in V. Because the label set L cannot be obtained under the black-box setting, L is initially the empty set.
Step 4: use the surrogate model f_sur to generate a label set L for the original victim social platform user relationship graph dataset G. Since the label set L cannot be obtained under the black-box setting, the converged surrogate model f_sur is used to predict the class probabilities of all nodes in G; for each node, the class with the largest predicted probability is taken as its pseudo-label, and the set of pseudo-labels of all nodes is taken as the label set L.
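The pseudo-labeling step reduces to an argmax over the surrogate model's predicted class probabilities. A minimal Python sketch, with the surrogate's outputs stood in by a hypothetical probability table (in practice they would come from the trained GNN f_sur):

```python
def pseudo_labels(class_probs):
    """Assign each node the class with the highest surrogate-predicted
    probability; these pseudo-labels form the label set L."""
    return {node: max(range(len(probs)), key=probs.__getitem__)
            for node, probs in class_probs.items()}

# Hypothetical surrogate predictions for three victim-graph users.
probs = {"u0": [0.7, 0.2, 0.1],
         "u1": [0.1, 0.6, 0.3],
         "u2": [0.2, 0.3, 0.5]}
print(pseudo_labels(probs))  # {'u0': 0, 'u1': 1, 'u2': 2}
```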
Step 5: compute the candidate neighbor node set V_nei for the injected node set V_inj. Based on the follow relationships and user features in the original victim social platform user relationship graph dataset G, the information domain (ID), classification margin (CM), and adversarial fragility (AF) of each user node are computed.
The information domain ID of a user node v is defined as the product of the sizes of its k-order neighborhoods:

ID(v) = ∏_k |N_k(v)|

where |N_k(v)| is the number of k-th-order followers of node v. The information domain measures a node's ability to receive information from other users. The classification margin CM of a user node v is defined as the probability of the correct class minus the largest probability among the other classes:

CM(v) = p_{y_t}(v) − max_{y_i ≠ y_t} p_{y_i}(v)

where p_{y_i}(v) is the probability, predicted by the surrogate model, that user node v belongs to class y_i, and y_t is the pseudo-label predicted by the surrogate model. From a user node's information domain and classification margin, its final adversarial fragility AF is computed. After the adversarial fragility AF of every user node in the original victim social platform user relationship graph dataset G has been computed, the nodes are sorted in ascending order of AF, and the most adversarially fragile user nodes are added in turn to the candidate neighbor node set V_nei of the injected user nodes until the attack budget is met, yielding the final candidate neighbor node set V_nei.
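As a concrete illustration of the candidate-selection step, here is a hedged Python sketch. The patent gives the ID, CM, and AF formulas only as images, so the AF combination below (AF = ID × CM, with smaller values meaning more fragile, matching the ascending sort) is an assumption, as are the toy graph and probabilities:

```python
from math import prod

def information_domain(adj, v, k=2):
    """ID(v): product of the sizes of v's 1..k-hop neighborhoods."""
    frontier, seen, sizes = {v}, {v}, []
    for _ in range(k):
        frontier = {u for w in frontier for u in adj[w]} - seen
        seen |= frontier
        sizes.append(len(frontier))
    return prod(sizes)

def classification_margin(probs, label):
    """CM(v): pseudo-label probability minus the best wrong-class probability."""
    return probs[label] - max(p for c, p in enumerate(probs) if c != label)

def candidate_neighbors(adj, class_probs, labels, budget, k=2):
    """Keep the `budget` nodes with the smallest fragility score
    (assumed AF = ID * CM) as the candidate neighbor set V_nei."""
    af = {v: information_domain(adj, v, k)
             * classification_margin(class_probs[v], labels[v])
          for v in adj}
    return sorted(adj, key=af.get)[:budget]

# Toy path graph a - b - c - d with hypothetical surrogate probabilities.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
probs = {"a": [0.9, 0.1], "b": [0.6, 0.4], "c": [0.55, 0.45], "d": [0.75, 0.25]}
labels = {v: 0 for v in adj}
print(candidate_neighbors(adj, probs, labels, budget=2))  # ['c', 'b']
```

Node c has both a small margin and a well-connected neighborhood of its own, so under the assumed score it is ranked most fragile and selected first.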
Step 6: build the follow relationships between the user nodes in the injected user node set V_inj and the original victim social platform user relationship graph dataset G. After the candidate neighbor node set V_nei has been determined, it is further partitioned at a fine granularity based on the noise class of its nodes and the edge attack budget E_inj of the injected nodes: every E_inj nodes of the same noise class are grouped into a cluster, and each cluster is treated as the candidate neighbor set of one injected user node. Specifically, the best wrong class c′ predicted by the surrogate model f_sur is used as the grouping basis, i.e. c′(v) = argmax_{c ≠ y_t} p_c(v).
Nodes with the same best wrong class are grouped into the same cluster. To enhance the concealment of the injected user nodes, each node in the candidate neighbor node set V_nei may serve as a candidate neighbor of only one injected user node.
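The partitioning described above — group V_nei by best wrong class c′, then cut each group into clusters of E_inj nodes, one cluster per injected node — might be sketched as follows (E_inj and all probabilities are illustrative; the patent's exact procedure may differ):

```python
from collections import defaultdict

def best_wrong_class(probs, label):
    """c'(v): the most probable class other than the pseudo-label y_t."""
    return max((c for c in range(len(probs)) if c != label),
               key=probs.__getitem__)

def cluster_candidates(candidates, class_probs, labels, e_inj):
    """Group candidates by best wrong class and slice each group into
    clusters of e_inj nodes; each cluster is the neighbor set of one
    injected node, so every candidate is used at most once."""
    groups = defaultdict(list)
    for v in candidates:
        groups[best_wrong_class(class_probs[v], labels[v])].append(v)
    clusters = []
    for members in groups.values():
        for i in range(0, len(members) - e_inj + 1, e_inj):
            clusters.append(members[i:i + e_inj])
    return clusters

# Four candidate users, three classes, pseudo-label 0 for all, E_inj = 2.
cand = ["u1", "u2", "u3", "u4"]
probs = {"u1": [0.5, 0.3, 0.2], "u2": [0.6, 0.1, 0.3],
         "u3": [0.4, 0.35, 0.25], "u4": [0.5, 0.2, 0.3]}
labels = {v: 0 for v in cand}
print(cluster_candidates(cand, probs, labels, e_inj=2))
# [['u1', 'u3'], ['u2', 'u4']]
```

Because each candidate appears in exactly one cluster, the concealment constraint — one injected node per candidate neighbor — is satisfied by construction.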
Step 7: adaptively optimize the features X_inj of the injected user nodes. The features of the injected user nodes are optimized with a C&W loss. The optimized injected-user-node features are then scaled with min-max normalization so that they do not exceed the range of the original feature domain:

x′ = (x − x_min) / (x_max − x_min) × (MAX − MIN) + MIN

where MAX is the maximum of the original feature domain, MIN is the minimum of the original feature domain, x_max is the maximum of the optimized malicious-node features, and x_min is the minimum of the optimized malicious-node features. Through min-max normalization the features of the malicious users never exceed the maximum or fall below the minimum of the original features, which further guarantees their concealment. Once the feature optimization of the injected user nodes is complete, the social platform user relationship perturbed graph G_per containing the injected user nodes is obtained.
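The objective and the scaling step can be sketched like this. The patent's C&W formula is given only as an image, so the margin form below is the standard C&W-style loss and an assumption on our part; the min-max rescaling follows the variable definitions in the text:

```python
def cw_loss(probs, label):
    """Assumed C&W-style margin loss: positive while the pseudo-label
    still beats the best wrong class, zero once the node is misclassified."""
    best_wrong = max(p for c, p in enumerate(probs) if c != label)
    return max(probs[label] - best_wrong, 0.0)

def rescale(features, lo, hi):
    """Min-max rescale optimized injected features into the original
    feature domain [lo, hi] (MIN and MAX in the text)."""
    x_min, x_max = min(features), max(features)
    span = (x_max - x_min) or 1.0  # guard against constant features
    return [(x - x_min) / span * (hi - lo) + lo for x in features]

print(cw_loss([0.75, 0.25, 0.0], 0))       # 0.5 -> still correctly classified
print(cw_loss([0.25, 0.5, 0.25], 0))       # 0.0 -> already misclassified
print(rescale([2.0, 4.0, 6.0], 0.0, 1.0))  # [0.0, 0.5, 1.0]
```

Driving the loss to zero flips the surrogate's prediction for the targeted nodes, while the rescaling keeps every injected feature inside the observed feature range, which is what the text relies on for concealment.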
Step 8: attack the graph-neural-network-based social platform model f_vit. In the test phase, and without changing the model parameters of f_vit, the model takes the social platform user relationship perturbed graph G_per as input and performs downstream tasks such as information mining, which completes the black-box evasion graph injection attack.
Claims (1)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202211580567.1A | 2022-12-09 | 2022-12-09 | Black-box evasion graph injection attack method for graph neural networks |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| CN115809698A | 2023-03-17 |
Family (ID: 85485430)

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| CN202211580567.1A (Withdrawn) | Black-box evasion graph injection attack method for graph neural networks | 2022-12-09 | 2022-12-09 |
Country Status (1)

| Country | Link |
| --- | --- |
| CN | CN115809698A (en) |

- 2022-12-09: application CN202211580567.1A filed in CN; published as CN115809698A; status: not active (withdrawn)
Legal Events

| Date | Code | Title | Description |
| --- | --- | --- | --- |
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| 2023-03-17 | WW01 | Invention patent application withdrawn after publication | Application publication date: 20230317 |