CN115809698A - Black-box evasion graph injection attack method for graph neural networks - Google Patents

Black-box evasion graph injection attack method for graph neural networks

Info

Publication number
CN115809698A
CN115809698A (application CN202211580567.1A)
Authority
CN
China
Prior art keywords
node
surrogate
nodes
injection
victim
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202211580567.1A
Other languages
Chinese (zh)
Inventor
王金艳
苏琳琳
甘泽明
李先贤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangxi Normal University
Original Assignee
Guangxi Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangxi Normal University filed Critical Guangxi Normal University
Priority to CN202211580567.1A priority Critical patent/CN115809698A/en
Publication of CN115809698A publication Critical patent/CN115809698A/en
Withdrawn legal-status Critical Current

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a black-box evasion graph injection attack method for graph neural networks, which comprises the following steps. Step one: obtain a surrogate graph dataset G_sur; step two: train a surrogate model f_sur; step three: obtain the original victim graph dataset G; step four: use the surrogate model f_sur to generate a label set L for the original victim graph dataset G; step five: compute the candidate neighbor node set V_nei of the injection node set V_inj; step six: construct the topological connections between the injection node set V_inj and the original victim graph dataset G; step seven: adaptively optimize the features X_inj of the injection nodes; step eight: attack the victim model f_vit. The method balances attack performance against attack concealment and improves the attacker's performance on the node classification task of a graph neural network model.

Description

Black-box evasion graph injection attack method for graph neural networks
Technical Field
The invention relates to the technical field of deep learning, in particular to a black-box evasion graph injection attack method for graph neural networks.
Background
A graph is a non-Euclidean data structure composed of nodes and the connections between them, capable of representing complex coupled relationships. Many real-life scenarios can be modeled with graph structures, such as social platforms, transportation systems, recommendation systems, and financial risk assessment systems. Deep learning models have strong learning ability and have been widely studied in fields such as natural language processing and computer vision, but because of the specifically non-Euclidean nature of graph data, traditional deep learning models designed for Euclidean data perform poorly on graph-structured data. Accordingly, much work has been devoted to developing Graph Neural Networks (GNNs) that efficiently integrate the node features and topology information of graph data and capture the potential data flows between nodes.
Although graph neural networks have made good progress, they are susceptible to adversarial attacks: small but carefully designed perturbations can cause a large degradation in the performance of a graph neural network model. Much prior work studies Graph Modification Attacks (GMAs), which attempt to mislead the predictions of a graph neural network model by adding small perturbations to the existing nodes or topology of the graph, for example by adding malicious edges between nodes of different classes or tampering with some node attributes. However, in many real-world scenarios, modifying the data of the original graph requires very high operating privileges, so increasing attention has turned to Graph Injection Attacks (GIAs), which better fit realistic scenarios: the attacker attacks by generating a small number of malicious nodes and establishing connections to nodes of the original graph. For example, tampering with existing user comments in a social network requires high privileges, whereas registering a new user that posts malicious statements to steer public opinion is comparatively simple. However, research on graph injection attack methods is still at a preliminary stage, and attackers often find it difficult to achieve attack performance and concealment at the same time.
Therefore, a black-box evasion graph injection attack method for graph neural networks is needed: one that enables an attacker, under black-box conditions in the testing stage, to inject malicious nodes in a targeted manner according to the topology and node features of the graph data, and that improves the concealment of the malicious nodes while optimizing their features, so that both the destructiveness and the concealment of the attacker are taken into account and the attacker's performance on the node classification task of the graph neural network model is improved.
Disclosure of Invention
The invention aims to address the shortcoming that existing black-box evasion graph injection attack techniques cannot balance the destructiveness and the concealment of an attacker, by providing a black-box evasion graph injection attack method for graph neural networks that ensures both attack performance and attack concealment and improves the attacker's performance on the node classification task of a graph neural network model.
The technical scheme for realizing the purpose of the invention is as follows:
the black box escape map injection attack method aiming at the neural network of the map comprises the following steps:
Step one: obtain a surrogate graph dataset G_sur. Obtain the surrogate graph dataset G_sur = (V_sur, E_sur, X_sur, L_sur) from a channel such as a crowdsourcing platform, where V_sur is the node set representing all nodes in the graph dataset G_sur, E_sur is the edge set representing all edges that exist between nodes, X_sur is the feature matrix obtained by stacking the features of all nodes in V_sur, and L_sur is the label set consisting of the label corresponding to each node in V_sur;
Step two: train the surrogate model f_sur. Select a representative graph neural network model as the surrogate model f_sur, then train f_sur on the surrogate dataset G_sur until it converges;
Step three: obtain the original victim graph dataset G. According to the attack target, obtain the original victim graph dataset G = (V, E, X, L) from a channel such as a crowdsourcing platform, where V is the node set representing all nodes in the graph dataset G, E is the edge set representing all edges that exist between nodes, X is the feature matrix obtained by stacking the features of all nodes in V, and L is the label set consisting of the label corresponding to each node in V;
Step four: use the surrogate model f_sur to generate a label set L for the original victim graph dataset G. Under the black-box setting the label set L of the original victim graph dataset G cannot be obtained, so the converged surrogate model f_sur is used to predict labels for all nodes in G, and the labels predicted by f_sur are taken as the label set L;
Step five: compute the candidate neighbor node set V_nei of the injection node set V_inj. According to the topology and node attributes of the original victim graph dataset G, compute each node's information domain (ID) and classification margin (CM), compute the final adversarial vulnerability (AF) of each node from its ID and CM, and select the most adversarially vulnerable nodes in G as the candidate neighbor node set V_nei of the injection nodes;
Step six: construct the topological connections between the injection node set V_inj and the original victim graph dataset G. After the candidate set V_nei has been determined, partition the candidate node set V_nei at a finer granularity based on the noise class of the nodes in V_nei and the edge attack budget E_inj of the injection nodes: every E_inj nodes of the same noise class are divided into one cluster, and each cluster is treated as the candidate neighbor set of one injection node. To enhance the concealment of the injection nodes, a node in the candidate neighbor node set V_nei may serve as a candidate neighbor of only one injection node;
Step seven: adaptively optimize the features X_inj of the injection nodes. Apply a C&W loss to optimize the features of the injection nodes, and constrain the optimized features so that they do not exceed the range of the original feature domain; after the feature optimization of the injection nodes is completed, the perturbed graph G_per containing the injection nodes is obtained;
Step eight: attack victim model f vit In the test phase without changing the victim model f vit In the case of model parameters of (2), the victim model f vit Will disturb picture G per As input and perform downstream tasks.
The technical scheme has the following advantages or beneficial effects:
1. The technical scheme attacks the graph neural network by injecting malicious nodes, without modifying the existing node attributes or topological features of the original graph, so the attack technique is closer to real scenarios;
2. The technical scheme fully considers both the harmfulness and the concealment of the attack during the topology construction and feature optimization of the injection nodes, and when attacking a graph neural network model with some defense capability, its attack performance is clearly superior to existing attack baseline models;
3. The technical scheme reduces the search space of subsequent operations by computing the adversarial vulnerability of nodes, saving time and space, so the attack can be applied to large-scale graphs.
In summary, the method improves attack concealment against graph neural networks while obtaining better attack accuracy.
Drawings
FIG. 1 is a flow chart of an embodiment.
Detailed Description
The invention is described in further detail below with reference to the following figures and specific examples, but the invention is not limited thereto.
Example:
the social platform is one of the most frequently contacted networks in the information age, and due to the characteristics of wide audience range, fast information transmission and the like of the social platform, the information in the social platform has great influence on social public opinion guidance and social value observation construction. The social platform based on the graph neural network can utilize the user characteristic information and the attention relationship between users to construct a social platform user relationship network, effectively integrate the user characteristics and the attention relationship of social platform data, and capture potential data streams between the users.
Taking a social platform based on a graph neural network as an example, an attacker can inject a small number of malicious users into the platform to degrade the data mining performance of the graph neural network on the social platform.
Referring to FIG. 1, the black-box evasion graph injection attack method for graph neural networks comprises the following steps:
Step one: obtain a surrogate social platform user relationship graph dataset G_sur. Obtain the surrogate graph dataset G_sur = (V_sur, E_sur, X_sur, L_sur) from channels such as a crowdsourcing platform, where V_sur is the node set representing all users in G_sur, E_sur is the edge set representing the follow relationships between users, X_sur is the feature matrix obtained by stacking the features of all nodes in V_sur, and L_sur is the label set consisting of the label corresponding to each node in V_sur;
Step two: train the surrogate model f_sur. Select a representative graph neural network model, such as a Graph Convolutional Network (GCN) or a Graph Attention Network (GAT), as the surrogate model f_sur; divide the surrogate dataset G_sur into a training set, a validation set, and a test set; then train f_sur on G_sur until it converges;
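By way of illustration only, the following minimal PyTorch sketch trains a two-layer GCN surrogate on a dense adjacency matrix. The names (SimpleGCN, train_surrogate, adj, x, y, train_mask) and all hyperparameters are assumptions of the sketch, not part of the patent, which only requires a representative GNN trained to convergence on G_sur.

```python
# Hypothetical minimal GCN surrogate for step two; names and
# hyperparameters are illustrative assumptions, not from the patent.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGCN(nn.Module):
    """Two-layer GCN: logits = A_hat @ relu(A_hat @ X @ W1) @ W2,
    where A_hat is the symmetrically normalized adjacency with self-loops."""
    def __init__(self, in_dim: int, hid_dim: int, n_classes: int):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, n_classes)

    @staticmethod
    def normalize(adj: torch.Tensor) -> torch.Tensor:
        a = adj + torch.eye(adj.size(0), device=adj.device)  # add self-loops
        d = a.sum(dim=1).pow(-0.5)                           # D^{-1/2}
        return d.unsqueeze(1) * a * d.unsqueeze(0)           # D^{-1/2} A D^{-1/2}

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        a_hat = self.normalize(adj)
        return a_hat @ self.w2(F.relu(a_hat @ self.w1(x)))   # class logits

def train_surrogate(x, adj, y, train_mask, epochs=200, lr=0.01):
    """Train f_sur on the surrogate dataset until (approximate) convergence."""
    f_sur = SimpleGCN(x.size(1), 64, int(y.max()) + 1)
    opt = torch.optim.Adam(f_sur.parameters(), lr=lr, weight_decay=5e-4)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(f_sur(x, adj)[train_mask], y[train_mask])
        loss.backward()
        opt.step()
    return f_sur
```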
Step three: obtain the original victim social platform user relationship graph dataset G. According to the attack target, obtain the original victim social platform user relationship graph dataset G = (V, E, X, L) from channels such as a crowdsourcing platform, where V is the node set representing all users in G, E is the edge set representing the follow relationships between users, X is the feature matrix obtained by stacking the features of all users in V, and L is the label set consisting of the label corresponding to each node in V;
Step four: use the surrogate model f_sur to generate a label set L for the original victim social platform user relationship graph dataset G. Under the black-box setting, the label set L of the original victim graph dataset G cannot be obtained, so the converged surrogate model f_sur is used to predict the class probabilities of all nodes in G; for each node, the class with the largest predicted probability is taken as its pseudo-label, and the pseudo-labels of all nodes form the label set L;
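A short sketch of this pseudo-labeling, reusing the hypothetical SimpleGCN interface above: since the victim labels are unavailable under the black-box setting, the converged surrogate's argmax class serves as each node's pseudo-label.

```python
import torch

@torch.no_grad()
def pseudo_labels(f_sur, x_victim, adj_victim):
    """Predict class probabilities on the victim graph G and take the
    most probable class of each node as its pseudo-label (label set L)."""
    probs = torch.softmax(f_sur(x_victim, adj_victim), dim=1)
    return probs.argmax(dim=1), probs
```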
Step five: compute the candidate neighbor node set V_nei of the injection node set V_inj. According to the follow relationships and user features in the original victim social platform user relationship graph dataset G, compute each user node's information domain (ID), classification margin (CM), and adversarial vulnerability (AF).
The information domain ID of a user node v is defined as the product of the sizes of its 1st- through k-th-order neighborhoods, formalized as

ID(v) = ∏_{i=1}^{k} |N_i(v)|,

where |N_i(v)| is the number of i-th-order followers of node v. The information domain of a user node measures the node's capacity to receive information from other users. The classification margin CM of a user node v is defined as the difference between the probability of its correct class and the maximum probability among the other classes, formalized as
CM(v) = p_{y_t}(v) − max_{y_i ≠ y_t} p_{y_i}(v),

where p_{y_i}(v) is the surrogate model's predicted probability that user node v belongs to class y_i, and y_t is the pseudo-label of the user node predicted by the surrogate model. The final adversarial vulnerability AF of a user node is then computed from its information domain ID and its classification margin CM (the combining formula is given only as an equation image in the original).
After the adversarial vulnerability AF of all user nodes in the original victim social platform user relationship graph dataset G has been computed, the nodes are sorted by their AF values, and the most adversarially vulnerable user nodes are selected in sequence and added to the candidate neighbor node set V_nei of the injected user nodes until the attack budget is met, which yields the final candidate neighbor node set V_nei;
Step six: construct the follow relationships between the injected user node set V_inj and the original victim social platform user relationship graph dataset G. After the candidate neighbor node set V_nei has been determined, V_nei is further partitioned at a fine granularity based on the noise class of its nodes and the edge attack budget E_inj of the injection nodes: every E_inj nodes of the same noise class are divided into one cluster, and each cluster is treated as the candidate neighbor set of one injected user node. Specifically, the best wrong class c' predicted by the surrogate model f_sur is used as the grouping basis, namely

c'(v) = argmax_{y_i ≠ y_t} p_{y_i}(v),

so that nodes with the same best wrong class are grouped into the same cluster. To enhance the concealment of the injected user nodes, a node in the candidate neighbor node set V_nei may serve as a candidate neighbor of only one injected user node;
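A sketch of this wiring under the same assumed names: candidate neighbors are grouped by the surrogate's best wrong class, and each injected node is connected to at most E_inj nodes of a single cluster, so every candidate serves only one injected node. For simplicity the sketch forms one cluster per wrong class rather than one per E_inj nodes.

```python
import torch

def best_wrong_class(probs: torch.Tensor, pseudo: torch.Tensor) -> torch.Tensor:
    """c'(v) = argmax_{y_i != y_t} p_{y_i}(v)."""
    return probs.scatter(1, pseudo.unsqueeze(1), -1.0).argmax(dim=1)

def wire_injected_nodes(adj, v_nei, probs, pseudo, e_inj: int):
    """Extend the adjacency with one injected node per cluster; each
    injected node connects only to <= e_inj nodes of its own cluster."""
    c_prime = best_wrong_class(probs, pseudo)[v_nei]
    clusters = [v_nei[c_prime == c][:e_inj] for c in c_prime.unique()]
    n, m = adj.size(0), len(clusters)
    new_adj = torch.zeros(n + m, n + m, device=adj.device)
    new_adj[:n, :n] = adj
    for j, cluster in enumerate(clusters):
        new_adj[n + j, cluster] = 1.0   # follow edges from injected node j
        new_adj[cluster, n + j] = 1.0   # and back, keeping the graph undirected
    return new_adj, clusters
```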
Step seven: adaptively optimize the features X_inj of the injected user nodes. A C&W loss is applied to optimize the features of the injected user nodes; in the standard C&W margin form with the notation of step five (the exact expression is given only as an equation image in the original), it reads

L_CW = Σ_{v ∈ V} max( p_{y_t}(v) − max_{y_i ≠ y_t} p_{y_i}(v), 0 ).

The optimized injected-node features are then rescaled with min-max normalization so that they do not exceed the range of the original feature domain, formalized as

x′ = (x − x_min) / (x_max − x_min) · (MAX − MIN) + MIN,

where MAX is the maximum value of the original feature domain, MIN is the minimum value of the original feature domain, x_max is the maximum value of the optimized malicious-node features, and x_min is their minimum value. Through min-max normalization the features of the malicious users never exceed the maximum or minimum of the original features, which further ensures the concealment of the malicious users. After the feature optimization of the injected user nodes is completed, the perturbed social platform user relationship graph G_per containing the injected user nodes is obtained;
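A sketch of this step with the same assumed interface. The margin loss below is the standard C&W form using the notation of step five (the patent's exact loss appears only as an image, so this form is an assumption), and the final rescaling implements the min-max formula above.

```python
import torch

def minmax_rescale(x_inj, feat_min, feat_max):
    """x' = (x - x_min) / (x_max - x_min) * (MAX - MIN) + MIN."""
    x_min, x_max = x_inj.min(), x_inj.max()
    return (x_inj - x_min) / (x_max - x_min + 1e-12) * (feat_max - feat_min) + feat_min

def optimize_features(f_sur, x, new_adj, pseudo, n_inj, steps=100, lr=0.1):
    """Optimize X_inj with a C&W-style margin loss against the surrogate,
    then clamp the result into the original feature domain."""
    x_inj = x.mean(dim=0).repeat(n_inj, 1).requires_grad_(True)  # assumed init
    opt = torch.optim.Adam([x_inj], lr=lr)
    for _ in range(steps):
        probs = torch.softmax(f_sur(torch.cat([x, x_inj]), new_adj), dim=1)
        p = probs[: x.size(0)]                       # original victim nodes only
        p_true = p.gather(1, pseudo.unsqueeze(1)).squeeze(1)
        p_wrong = p.scatter(1, pseudo.unsqueeze(1), -1.0).max(dim=1).values
        loss = torch.clamp(p_true - p_wrong, min=0).sum()  # C&W-style margin
        opt.zero_grad()
        loss.backward()
        opt.step()
    return minmax_rescale(x_inj.detach(), x.min(), x.max())
```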
Step eight: attack social platform model f based on graph neural network vit In the testing stage, the social platform model f based on the graph neural network is not changed vit In the case of model parameters of (2), f vit Disturbing social platform user relationships as graph G per And performing downstream tasks such as information mining and the like as input, namely completing the attack of black box escape map injection.

Claims (1)

1. A black-box evasion graph injection attack method for graph neural networks, characterized by comprising the following steps:
Step one: obtain a surrogate graph dataset G_sur. Obtain the surrogate graph dataset G_sur = (V_sur, E_sur, X_sur, L_sur) from a channel such as a crowdsourcing platform, where V_sur is the node set representing all nodes in the graph dataset G_sur, E_sur is the edge set representing all edges that exist between nodes, X_sur is the feature matrix obtained by stacking the features of all nodes in V_sur, and L_sur is the label set consisting of the label corresponding to each node in V_sur;
Step two: train the surrogate model f_sur. Select a representative graph neural network model as the surrogate model f_sur, then train f_sur on the surrogate dataset G_sur until it converges;
Step three: obtain the original victim graph dataset G. According to the attack target, obtain the original victim graph dataset G = (V, E, X, L) from a channel such as a crowdsourcing platform, where V is the node set representing all nodes in the graph dataset G, E is the edge set representing all edges that exist between nodes, X is the feature matrix obtained by stacking the features of all nodes in V, and L is the label set consisting of the label corresponding to each node in V;
Step four: use the surrogate model f_sur to generate a label set L for the original victim graph dataset G. Under the black-box setting the label set L of the original victim graph dataset G cannot be obtained, so the converged surrogate model f_sur is used to predict labels for all nodes in G, and the labels predicted by f_sur are taken as the label set L;
Step five: compute the candidate neighbor node set V_nei of the injection node set V_inj. According to the topology and node attributes of the original victim graph dataset G, compute each node's information domain (ID) and classification margin (CM), compute the final adversarial vulnerability (AF) of each node from its ID and CM, and select the most adversarially vulnerable nodes in G as the candidate neighbor node set V_nei of the injection nodes;
Step six: construct the topological connections between the injection node set V_inj and the original victim graph dataset G. After the candidate set V_nei has been determined, partition the candidate node set V_nei at a finer granularity based on the noise class of the nodes in V_nei and the edge attack budget E_inj of the injection nodes: every E_inj nodes of the same noise class are divided into one cluster, and each cluster is treated as the candidate neighbor set of one injection node. To enhance the concealment of the injection nodes, a node in the candidate neighbor node set V_nei may serve as a candidate neighbor of only one injection node;
Step seven: adaptively optimize the features X_inj of the injection nodes. Apply a C&W loss to optimize the features of the injection nodes, and constrain the optimized features so that they do not exceed the range of the original feature domain; after the feature optimization of the injection nodes is completed, the perturbed graph G_per containing the injection nodes is obtained;
Step eight: attack victim model f vit In the test phase without changing the victim model f vit In the case of model parameters of (2), the victim model f vit Will disturb picture G per As input and perform downstream tasks.
CN202211580567.1A 2022-12-09 2022-12-09 Black-box evasion graph injection attack method for graph neural networks Withdrawn CN115809698A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211580567.1A CN115809698A (en) 2022-12-09 2022-12-09 Black-box evasion graph injection attack method for graph neural networks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211580567.1A CN115809698A (en) 2022-12-09 2022-12-09 Black-box evasion graph injection attack method for graph neural networks

Publications (1)

Publication Number Publication Date
CN115809698A true CN115809698A (en) 2023-03-17

Family

ID=85485430

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211580567.1A Withdrawn CN115809698A (en) 2022-12-09 2022-12-09 Black box escape map injection attack method for map neural network

Country Status (1)

Country Link
CN (1) CN115809698A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20230317