CN115906980A - GAT (goal-oriented programming) graph neural network defense method, construction method and pedestrian detection method - Google Patents

GAT (goal-oriented programming) graph neural network defense method, construction method and pedestrian detection method Download PDF

Info

Publication number
CN115906980A
CN115906980A CN202211412927.7A CN202211412927A CN115906980A CN 115906980 A CN115906980 A CN 115906980A CN 202211412927 A CN202211412927 A CN 202211412927A CN 115906980 A CN115906980 A CN 115906980A
Authority
CN
China
Prior art keywords
gat
neural network
node
graph neural
defense
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211412927.7A
Other languages
Chinese (zh)
Other versions
CN115906980B (en
Inventor
鲁鸣鸣
郭清明
常佳宇
欧阳凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University filed Critical Central South University
Priority to CN202211412927.7A priority Critical patent/CN115906980B/en
Publication of CN115906980A publication Critical patent/CN115906980A/en
Application granted granted Critical
Publication of CN115906980B publication Critical patent/CN115906980B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Traffic Control Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a GAT (Gate associated technology) graph neural network defense method, which comprises the steps of obtaining a network structure and parameters of a GAT graph neural network; modifying a function g in the GAT graph neural network; adding the attention of the previous layer of the GAT graph neural network into the weight of the current layer of the network; designing a training stage of K neighbor operation and randomly adding a GAT (generalized expectation-maximization) graph neural network; completing the defense of the GAT graph neural network. The invention also discloses a construction method comprising the GAT graph neural network defense method and a pedestrian detection method comprising the construction method. According to the invention, through an innovative robust message transmission mechanism, the defense of the GAT graph neural network is realized, the robustness of the model is greatly improved, the defense effect is good, and the reliability and the stability are high.

Description

GAT (goal-oriented programming) graph neural network defense method, construction method and pedestrian detection method
Technical Field
The invention belongs to the technical field of artificial intelligence, and particularly relates to a GAT (goal oriented programming) graph neural network defense method, a construction method and a pedestrian detection method.
Background
With the development of economic technology and the improvement of living standard of people, the artificial intelligence technology is widely applied to the production and the life of people, and brings endless convenience to the production and the life of people.
Graph Neural Networks (GNNs) are an important component of artificial intelligence technology. The graph neural network is widely applied to the fields of chemistry, physics, traffic, knowledge maps, recommendation systems and the like, and shows a strong application prospect. Therefore, the study of the neural network has been an important work for researchers.
The GAT network (Graph Attention network) is one of Graph neural networks, the GAT network processes Graph structure data, and neighbor nodes are aggregated through an Attention Mechanism (Attention Mechanism), so that adaptive distribution of different neighbor weights is realized, and the expression capability of the Graph neural network is greatly improved. However, in recent years, it has been found that similar to a deep Neural network such as a conventional Convolutional Neural Network (CNN), the GAT network also has a problem of lack of robustness. This problem makes GAT networks very vulnerable to challenge attacks, which require only a small perturbation in the data to quickly degrade the model performance. For the fields with higher safety requirements such as banks and finance, the robustness of the GAT network model is particularly important.
At present, researches indicate that the attack method of the GAT network tends to adopt a scheme of adding edges and connecting nodes with larger characteristic difference; for such attack schemes, researchers have proposed a corresponding defense scheme for the GAT network, i.e., preprocessing data using the Jaccard similarity. However, these technical solutions can only process discrete data, artificially delete edges smaller than a threshold, but cannot learn in an end-to-end manner, and do not establish a corresponding defense mechanism according to the features of the attack method. This makes the existing defense method for GAT network still not reliable.
For this reason, when the existing GAT network is applied to the field of pedestrian detection, it may be attacked due to poor defensive performance, so that the detection fails or detection errors occur, and the reliability of the pedestrian detection process is seriously affected.
Disclosure of Invention
An object of the present invention is to provide a reliable and stable method for defending a GAT graph neural network.
The invention also aims to provide a construction method comprising the GAT diagram neural network defense method.
The invention also aims to provide a pedestrian detection method comprising the construction method.
The GAT diagram neural network defense method provided by the invention comprises the following steps:
s1, acquiring a network structure and network parameters of a GAT (generalized interference tomography) graph neural network;
s2, calculating and converting cosine similarity of characteristics among nodes in the GAT graph neural network according to the data information obtained in the step S1, and modifying a function g in the GAT graph neural network by adopting the converted similarity;
s3, based on the modification of the step S2, adding the attention of the previous layer of the GAT graph neural network into the weight of the current layer of the network;
s4, after filtering out abnormal node information, designing K neighbor operation based on a K neighbor algorithm;
s5, in the training stage of the GAT graph neural network, randomly adding the K neighbor operation designed in the step S4 to realize the characteristic enhancement of the node;
and S6, completing the defense of the GAT diagram neural network.
Step S2, calculating and converting cosine similarity of features between nodes in the GAT graph neural network, and modifying a function g in the GAT graph neural network by using the converted similarity, specifically including the following steps:
the following formula is adopted as the modified function
Figure BDA0003938761160000031
Figure BDA0003938761160000032
In the formula
Figure BDA0003938761160000033
For node v in neural network of graph i Is characterized in the l-th layer
Figure BDA0003938761160000034
σ () is the ReLU activation function, W 1 l Is a node v 1 Weight matrix at level l>
Figure BDA0003938761160000035
Is a weighting factor and->
Figure BDA0003938761160000036
Figure BDA0003938761160000037
Is a node v 2 Weight matrix at level l>
Figure BDA0003938761160000038
Is and node v i A set of connected nodes; s () is a sigmoid function for adjusting the scaling, an
Figure BDA0003938761160000039
Beta is a learnable hyper-parameter; sim () is a similarity calculation function, an
Figure BDA00039387611600000310
Figure BDA00039387611600000311
Is a node v i Transposition of feature vectors, | h i | is node v i The modulo length of the eigenvector.
Step S3, adding the attention of the previous layer of the GAT graph neural network to the weight of the current layer of the network specifically includes the following steps:
the weighting coefficients are modified to add the attention of the previous layer of the GAT graph neural network to the weights of the current layer of the network using the following equation:
Figure BDA0003938761160000041
in the formula
Figure BDA0003938761160000042
The weight coefficient of the current layer; l is the l-th layer in the neural network of the graph; />
Figure BDA0003938761160000043
Is the modified function g obtained in step S2; />
Figure BDA0003938761160000044
Is the weight coefficient of the previous layer.
The K nearest neighbor operation designed based on the K nearest neighbor algorithm in the step S4 specifically comprises the following steps:
the K nearest neighbor operation specifically comprises the following steps: based on a K neighbor algorithm, selecting K nodes with the maximum similarity to the central node through the calculation of the K neighbor nodes, and connecting the central node with the K nodes;
and during connection, directed edge connection is adopted, and the direction of the directed edge points to the central node.
In the training stage of the GAT graph neural network, the K nearest neighbor operation designed in step S4 is added at random, which specifically includes the following steps:
and adopting the following formula as a characteristic updating formula after the K nearest neighbor operation designed in the step S4 is added:
Figure BDA0003938761160000045
in the formula
Figure BDA0003938761160000046
Is an updated feature; KNN (x) i ) An operation function of K-nearest neighbor operation designed for step S4, and KNN (x) i )=RANK(sim(x i V), K), RANK () is the computation and node v i Function of the most similar set of first K node indices, sim (x) i Is a computing node v i Characteristic x of i K represents the number of nodes with the maximum similarity to the central node; p is the probability that the model is randomly added with KNN to carry out node enhancement in the training stage; e is a n Is a threshold value for the probability.
The invention also discloses a construction method of the GAT graph neural network defense method, which specifically comprises the following steps:
A. determining a target GAT map neural network;
B. b, defending the target GAT diagram neural network determined in the step A by adopting the GAT diagram neural network defense method;
C. and after defense, obtaining a GAT (Gate associated technology) graph neural network after final defense, and completing the construction of the GAT graph neural network.
The invention also discloses a pedestrian detection method comprising the construction method, which specifically comprises the following steps:
a. determining an initial GAT (Gauss transform) map neural network model adopted by pedestrian detection in the automatic driving process;
b. b, constructing the GAT map neural network by adopting the initial GAT map neural network model determined in the step a and the construction method;
c. and c, adopting the GAT graph neural network obtained in the step b to detect the pedestrian in the automatic driving process.
According to the GAT map neural network defense method, the construction method and the pedestrian detection method, through an innovative robust message transmission mechanism, the defense of the GAT map neural network is realized, the robustness of a model is improved, the defense effect is good, and the reliability and the stability are high.
Drawings
FIG. 1 is a schematic flow chart of the defense method of the present invention.
FIG. 2 is a schematic diagram showing the comparison between the effectiveness test of the defense method of the present invention and the prior art.
FIG. 3 is a data schematic comparing the effectiveness test of the defense method of the present invention with that of the prior art.
FIG. 4 is a schematic flow chart of the construction method of the present invention.
FIG. 5 is a flowchart illustrating a method of detecting a pedestrian according to the present invention.
Detailed Description
FIG. 1 is a schematic flow chart of the defense method of the present invention: the GAT diagram neural network defense method provided by the invention comprises the following steps:
s1, acquiring a network structure and network parameters of a GAT (generalized interference tomography) graph neural network;
s2, calculating and converting cosine similarity of characteristics among nodes in the GAT graph neural network according to the data information obtained in the step S1, and modifying a function g in the GAT graph neural network by adopting the converted similarity; the method specifically comprises the following steps:
the core of the step is to calculate the cosine similarity of the characteristics between the nodes in the graph, convert the output result of the similarity into a value between 0 and 1 through an s function which automatically learns and adjusts the scaling, and design the function g in the GAT into the attention based on the characteristic similarity by utilizing the similarity so as to realize the robust message transfer;
as a function of the modification, the following equation is adopted
Figure BDA0003938761160000061
Figure BDA0003938761160000062
In the formula
Figure BDA0003938761160000063
For node v in neural network of graph i Is characterized in the l-th layer
Figure BDA0003938761160000064
σ () is the ReLU activation function, W 1 l Is a node v 1 Weight matrix at level l>
Figure BDA0003938761160000065
Is a weighting factor and->
Figure BDA0003938761160000066
Figure BDA0003938761160000067
Is a node v 2 Weight matrix at level l>
Figure BDA0003938761160000068
Is and node v i A set of connected nodes; s () is a sigmoid function for adjusting the scaling, an
Figure BDA0003938761160000069
Beta is a learnable hyper-parameter; sim () is a similarity calculation function, an
Figure BDA00039387611600000610
Figure BDA00039387611600000611
Is a node v i Transposition of feature vectors, | h i | is node v i The modular length of the feature vector; giving corresponding weight according to the feature similarity between the nodes;
s3, based on the modification of the step S2, adding the attention of the previous layer of the GAT graph neural network into the weight of the current layer of the network; the method specifically comprises the following steps:
it is assumed that if two nodes are dissimilar, the node features of the next hidden layer after feature transformation and propagation should also be dissimilar, i.e., the weight between the two nodes should still be small. However, weights are assigned according to node similarity, with weights at each layer of the network
Figure BDA00039387611600000612
The calculation of (a) is based only on the input features of the current layer, and does not take into account the information that has been obtained before;
therefore, the weighting coefficients are modified to add the attention of the previous layer of the GAT graph neural network to the weights of the current layer of the network using the following equation:
Figure BDA0003938761160000071
in the formula
Figure BDA0003938761160000072
The weight coefficient of the current layer; l is the l-th layer in the graph neural network; />
Figure BDA0003938761160000073
Is the modified function g obtained in step S2; />
Figure BDA0003938761160000074
The weight coefficient of the previous layer;
s4, after filtering out abnormal node information, designing K neighbor operation based on a K neighbor algorithm; the method specifically comprises the following steps:
the K nearest neighbor operation specifically comprises the following steps: based on a K neighbor algorithm, selecting K nodes with the maximum similarity to the central node through the calculation of the K neighbor nodes, and connecting the central node with the K nodes;
when in connection, directed edge connection is adopted, and the direction of the directed edge points to the central node;
s5, in a training stage of the GAT neural network, randomly adding the K neighbor operation designed in the step S4 to realize the characteristic enhancement of the node and enable the model to focus more on the information of the similar node; the method specifically comprises the following steps:
the following formula is adopted as a feature updating formula after the K nearest neighbor operation designed in the step S4 is added:
Figure BDA0003938761160000075
in the formula
Figure BDA0003938761160000076
Is an updated feature; KNN (x) i ) An operation function of K-nearest neighbor operation designed for step S4, and KNN (x) i )=RANK(sim(x i V), K), RANK () is the computation and node v i Function of the most similar first K node index sets, sim (x) i Is a computing node v i Characteristic x of i K represents the number of nodes with the maximum similarity to the central node; p is the probability that the model randomly adds KNN to carry out node enhancement in the training stage; e is the same as n Is a threshold value of probability;
firstly, adding KNN and then updating the model characteristics when the model just starts to train; however, as the model is trained, KNN is increasingly not helpful in improving performance, which may be due to the fact that the newly added edges are not the edges actually present in the data, and the model may over-fit the artificially added data; therefore, after a certain stage of training, the influence of KNN can be gradually reduced, and the model can be more fitted with the original data;
and S6, completing the defense of the GAT diagram neural network.
The defense properties of the method of the invention are illustrated below by an example.
In order to evaluate the defense performance of the method, the embodiment performs an experiment of node classification under popular target attack and non-target attack. The target attack is an attack directed at a specific target node, and an attacker attacks nodes on a part of the test set, so that the accuracy of the model on the nodes is reduced. In this setting, the embodiment adopts a popular Nettack attack method. The non-target attack aims to reduce the classification accuracy of the model on all test sets through attack, is a global attack, and adopts a popular Mettack attack method under the arrangement.
And (3) resisting the target attack:
the target attack is classified into a direct attack that changes the connection of the target node with the neighbor node and an indirect attack that changes the connection of the neighbor node of the target node with its neighbor node. Since the direct attack is the strongest aggression, in order to verify the robustness of the method of the present invention, the present embodiment employs the direct attack in the experiment. Nettack is a popular target attack method, and the embodiment uses default parameter settings in the original papers of the Nettack. Specifically, after the GCN (surrogate model) training is completed, the embodiment selects 10 nodes with the highest score, 10 nodes with the lowest score, and 20 nodes with random scores in the correct classification node set in the test set to attack respectively, and finally counts the accuracy of 40 nodes after the attack. The present embodiment attacks both edges and features. The accuracy of the model after Nettack direct attack is shown in fig. 2. It can be seen that the accuracy of the GCN based model is very low on each data set, largely because the surrogate model for Nettack is GCN. Nevertheless, the benchmark model SAGE-MP of this example performed less well on 3 datasets, but the performance of the process of the invention far exceeded SAGE-MP.
Fight against non-target attacks:
in the embodiment, mettack attack is adopted to test the performance of different models for resisting non-target attack under a node classification task, the parameters of the Mettack attack are set according to the original paper, and the most aggressive version Meta-Self is adopted. The disturbance Rate (PR) is a change Rate of an attacker attacking the rear side of the graph structure data, and the value change range is 0 to 25%, and the step size is 5%. The performance of the different models under Mettack attack on the Cora, citeser and Cora _ ML datasets is shown in fig. 3, where the best performance is shown in bold.
From the data in fig. 3 it can be seen that:
1) The Mettack attack causes the performance of all models to be reduced to different degrees;
2) Compared with the GAT based on the feature attention and the RGCN based on the variance attention, the method based on the similarity attention has better robustness;
3) The performance of SAGE-MP under Mettack attack is greatly reduced, and the robustness can be greatly improved by the SAGE-MP-based method.
Experiments show that compared with other methods, the method has the advantages that the defense performance is greatly improved, the more robust defense capability is realized, and the robustness and the accuracy are balanced.
FIG. 4 is a schematic flow chart of the method of the present invention: the invention discloses a construction method of the GAT diagram neural network defense method, which specifically comprises the following steps:
A. determining a target GAT map neural network;
B. b, defending the target GAT diagram neural network determined in the step A by adopting the GAT diagram neural network defense method;
C. and after defense, obtaining a GAT (Gate associated technology) graph neural network after final defense, and completing the construction of the GAT graph neural network.
The GAT map neural network defense method in the step B comprises the following steps:
B1. acquiring a network structure and network parameters of a GAT (generalized associated transform) graph neural network;
B2. calculating cosine similarity of characteristics among nodes in the GAT graph neural network according to the data information obtained in the step B1, converting the cosine similarity, and modifying a function g in the GAT graph neural network by adopting the converted similarity;
B3. based on the modification of step B2, adding the attention of the previous layer of the GAT graph neural network into the weight of the current layer of the network;
B4. after filtering out abnormal node information, designing K neighbor operation based on a K neighbor algorithm;
B5. in the training stage of the GAT graph neural network, randomly adding the K nearest neighbor operation designed in the step B4 to realize the characteristic enhancement of the node;
B6. completing the defense of the GAT graph neural network.
Calculating cosine similarity of characteristics among nodes in the GAT graph neural network and converting the cosine similarity, and modifying a function g in the GAT graph neural network by adopting the converted similarity, wherein the step B2 specifically comprises the following steps:
as a function of the modification, the following equation is adopted
Figure BDA0003938761160000101
Figure BDA0003938761160000102
In the formula
Figure BDA0003938761160000103
For node v in neural network of graph i Is characterized in the l-th layer
Figure BDA0003938761160000104
σ () is the ReLU activation function, W 1 l Is a node v 1 Weight matrix at the l-th layer, <' >>
Figure BDA0003938761160000105
Is a weight coefficient and->
Figure BDA0003938761160000106
Figure BDA0003938761160000107
Is a node v 2 Weight matrix at the l-th layer, <' >>
Figure BDA0003938761160000108
Is and node v i A set of connected nodes; s () is a sigmoid function for adjusting the scaling, and
Figure BDA0003938761160000111
beta is a learnable hyper-parameter; sim () is a similarity calculation function, an
Figure BDA0003938761160000112
Figure BDA0003938761160000113
Is a node v i Transposition of feature vectors, | h i | is node v i The modulo length of the eigenvector.
The step B3 of adding the attention of the previous layer of the GAT graph neural network to the weight of the current layer of the network specifically includes the following steps:
the weighting coefficients are modified to add the attention of the previous layer of the GAT graph neural network to the weights of the current layer of the network using the following equation:
Figure BDA0003938761160000114
in the formula
Figure BDA0003938761160000115
Is the weight coefficient of the current layer; l is the l-th layer in the graph neural network; />
Figure BDA0003938761160000116
Is the modified function g obtained in step S2; />
Figure BDA0003938761160000117
The weighting coefficients of the previous layer.
The K nearest neighbor operation designed based on the K nearest neighbor algorithm in the step B4 specifically comprises the following steps:
the K nearest neighbor operation specifically comprises the following steps: based on a K neighbor algorithm, selecting K nodes with the maximum similarity with the central node through the calculation of the K neighbor nodes, and connecting the central node with the K nodes;
and during connection, directed edge connection is adopted, and the direction of the directed edge points to the central node.
In the training phase of the GAT graph neural network described in step B5, the K nearest neighbor operation designed in step B4 is added at random, specifically including the following steps:
and adopting the following formula as a characteristic updating formula after the K nearest neighbor operation designed in the step S4 is added:
Figure BDA0003938761160000118
in the formula
Figure BDA0003938761160000121
Is an updated feature; KNN (x) i ) An operation function of K-nearest neighbor operation designed for step S4, and KNN (x) i )=RANK(sim(x i V), K), RANK () is the computation and node v i Function of the most similar set of first K node indices, sim (x) i Is a computing node v i Characteristic x of i K represents the number of nodes with the maximum similarity to the central node; p is the probability that the model is randomly added with KNN to carry out node enhancement in the training stage; e is a n Is a threshold value for the probability.
Fig. 5 is a schematic flow chart of a pedestrian detection method of the present invention: the pedestrian detection method comprising the construction method disclosed by the invention specifically comprises the following steps:
a. determining an initial GAT map neural network model adopted by pedestrian detection in the automatic driving process;
b. b, constructing the GAT map neural network by adopting the initial GAT map neural network model determined in the step a and the construction method;
c. and c, adopting the GAT graph neural network obtained in the step b to detect the pedestrian in the automatic driving process.
The construction method of the step b specifically comprises the following steps:
b1. determining a target GAT graph neural network;
b2. defending the target GAT diagram neural network determined in the step b1 by adopting the GAT diagram neural network defense method;
b3. and after defense, obtaining the GAT graph neural network after final defense, and completing the construction of the GAT graph neural network.
The GAT map neural network defense method in the step b2 comprises the following steps:
b21. acquiring a network structure and network parameters of a GAT graph neural network;
b22. b21, calculating cosine similarity of characteristics among nodes in the GAT graph neural network according to the data information obtained in the step b21, converting the cosine similarity, and modifying a function g in the GAT graph neural network by adopting the converted similarity;
b23. based on the modification of step b22, adding the attention of the previous layer of the GAT graph neural network to the weight of the current layer of the network;
b24. after filtering out abnormal node information, designing K neighbor operation based on a K neighbor algorithm;
b25. in the training stage of the GAT graph neural network, randomly adding the K neighbor operation designed in the step b24 to realize the characteristic enhancement of the node;
b26. and completing the defense of the GAT graph neural network.
Calculating cosine similarity of characteristics among nodes in the GAT graph neural network and converting the cosine similarity, and modifying a function g in the GAT graph neural network by adopting the converted similarity, wherein the step b22 specifically comprises the following steps:
the following formula is adopted as the modified function
Figure BDA0003938761160000131
Figure BDA0003938761160000132
In the formula
Figure BDA0003938761160000133
For node v in neural network of graph i Features in the l-th layer and
Figure BDA0003938761160000134
σ () is the ReLU activation function, W 1 l Is a node v 1 Weight matrix at the l-th layer, <' >>
Figure BDA0003938761160000135
Is a weighting factor and->
Figure BDA0003938761160000136
Figure BDA0003938761160000137
Is a node v 2 Weight matrix at level l>
Figure BDA0003938761160000138
Is and node v i A set of connected nodes; s () is a sigmoid function for adjusting the scaling, and
Figure BDA0003938761160000139
beta is a learnable hyper-parameter; sim () is a similarity calculation function, an
Figure BDA00039387611600001310
Figure BDA00039387611600001311
Is a node v i Transposition of feature vectors, | h i | | is node v i The modulo length of the eigenvector.
The step b23 of adding the attention of the previous layer of the GAT graph neural network to the weight of the current layer of the network specifically includes the following steps:
the weighting coefficients are modified to add the attention of the previous layer of the GAT graph neural network to the weights of the current layer of the network using the following equation:
Figure BDA0003938761160000141
in the formula
Figure BDA0003938761160000142
Is the weight coefficient of the current layer; l is the l-th layer in the neural network of the graph; />
Figure BDA0003938761160000143
Is the modified function g obtained in step S2; />
Figure BDA0003938761160000144
Is the weight coefficient of the previous layer.
The K nearest neighbor operation designed based on the K nearest neighbor algorithm in the step b24 specifically comprises the following steps:
the K nearest neighbor operation specifically comprises the following steps: based on a K neighbor algorithm, selecting K nodes with the maximum similarity to the central node through the calculation of the K neighbor nodes, and connecting the central node with the K nodes;
and during connection, directed edge connection is adopted, and the direction of the directed edge points to the central node.
In the training phase of the GAT graph neural network described in step b25, the K nearest neighbor operation designed in step b24 is added at random, specifically including the following steps:
and adopting the following formula as a characteristic updating formula after the K nearest neighbor operation designed in the step S4 is added:
Figure BDA0003938761160000145
in the formula
Figure BDA0003938761160000146
Is an updated feature; KNN (x) i ) An operation function of K-nearest neighbor operation designed for step S4, and KNN (x) i )=RANK(sim(x i V), K), RANK () is the computation and node v i Function of the most similar set of first K node indices, sim (x) i Is a computing node v i Characteristic x of i K represents the number of nodes with the maximum similarity to the central node; p is model in training phaseProbability of randomly adding KNN to carry out node enhancement; e is the same as n Is a threshold value of probability.
The human detection method provided by the invention is particularly suitable for detecting pedestrians in the existing industry when vehicles are automatically driven; in the automatic driving process, people, vehicles, objects and other objects on roads need to be closely concerned, and therefore, a target identification technology based on vision or vehicle-mounted laser radars is needed. In order to improve the accuracy and interpretability of recognition, most of the current mainstream target recognition technologies adopt a graph neural network technology (i.e. a GAT graph neural network) which integrates an attention mechanism, because GAT can model the relationship among various parts of targets such as people, vehicles, objects and the like, for example, for the target recognition of pedestrians, a vehicle-mounted vision/laser radar sensor can acquire pictures of the targets of pedestrians, and the key points of the body skeletons of the pedestrians in the pictures and the relationship among the key points are modeled in the forms of nodes and edges in the GAT graph neural network. Because the safety requirement of automatic driving is high, if an attacker tries to attack by adding disturbance to the picture, the identification of the targets of the people, the vehicles and the objects facing the automatic driving is misjudged, and serious traffic accidents are caused. Therefore, the pedestrian detection method provided by the invention is adopted to detect pedestrians in real time, so that the detection precision and reliability can be ensured, the detection model has better defense performance, the graph neural network defense of robust message transmission can be realized, and the capability of identifying and defending attacks facing to the targets of people, vehicles and objects which are automatically driven is improved.

Claims (7)

1. A GAT map neural network defense method comprises the following steps:
s1, acquiring a network structure and network parameters of a GAT (generalized interference tomography) graph neural network;
s2, calculating and converting cosine similarity of characteristics among nodes in the GAT graph neural network according to the data information obtained in the step S1, and modifying a function g in the GAT graph neural network by adopting the converted similarity;
s3, based on the modification of the step S2, adding the attention of the previous layer of the GAT graph neural network into the weight of the current layer of the network;
s4, after filtering out abnormal node information, designing K neighbor operation based on a K neighbor algorithm;
s5, in the training stage of the GAT graph neural network, randomly adding the K neighbor operation designed in the step S4 to realize the characteristic enhancement of the node;
and S6, completing the defense of the GAT graph neural network.
2. The method for defending a GAT neural network according to claim 1, wherein the step S2 of calculating cosine similarity of features between nodes in the GAT neural network and converting the cosine similarity, and modifying a function g in the GAT neural network by using the converted similarity comprises the following steps:
the following formula is adopted as the modified function
Figure FDA0003938761150000011
Figure FDA0003938761150000012
In the formula
Figure FDA0003938761150000013
For node v in neural network of graph i Is characterized in the l-th layer
Figure FDA0003938761150000014
σ () is a ReLU activation function, <' > based on>
Figure FDA0003938761150000015
Is a node v 1 In the fifth or fifth place>
Figure FDA0003938761150000016
The weight matrix of the layer->
Figure FDA0003938761150000017
Is a weighting factor and->
Figure FDA0003938761150000018
Figure FDA0003938761150000019
Is a node v 2 At the fifth place>
Figure FDA00039387611500000110
The weight matrix of the layer->
Figure FDA00039387611500000111
Is and node v i A set of connected nodes; s () is a sigmoid function for adjusting the scaling, and
Figure FDA00039387611500000112
beta is a learnable hyper-parameter; sim () is a similarity calculation function, an
Figure FDA0003938761150000021
Figure FDA0003938761150000022
Is a node v i Transposition of feature vectors, | h i | is node v i The modulo length of the eigenvector.
3. The method for defending a GAT neural network as claimed in claim 2, wherein the step S3 of adding the attention of the previous layer of the GAT neural network to the weight of the current layer of the network specifically comprises the following steps:
the weighting coefficients are modified to add the attention of the previous layer of the GAT graph neural network to the weights of the current layer of the network using the following equation:
Figure FDA0003938761150000023
in the formula
Figure FDA0003938761150000024
The weight coefficient of the current layer; />
Figure FDA0003938761150000027
For the fifth/in the neural network of the map>
Figure FDA0003938761150000028
A layer; />
Figure FDA0003938761150000025
Is the modified function g obtained in step S2; />
Figure FDA0003938761150000026
Is the weight coefficient of the previous layer.
4. The GAT map neural network defense method of claim 3, wherein the K nearest neighbor operation is designed based on a K nearest neighbor algorithm in the step S4, and the method specifically comprises the following steps:
the K nearest neighbor operation specifically comprises the following steps: based on a K neighbor algorithm, selecting K nodes with the maximum similarity with the central node through the calculation of the K neighbor nodes, and connecting the central node with the K nodes;
and during connection, directed edge connection is adopted, and the direction of the directed edge points to the central node.
5. The method for defending a GAT neural network according to claim 4, wherein in the training phase of the GAT neural network in step S5, the K nearest neighbor operation designed in step S4 is added randomly, specifically comprising the following steps:
and adopting the following formula as a characteristic updating formula after the K nearest neighbor operation designed in the step S4 is added:
Figure FDA0003938761150000031
in the formula
Figure FDA0003938761150000032
Is an updated feature; KNN (x) i ) An operation function of the K nearest neighbor operation designed for step S4, and KNN (x) i )=RANK(sim(x i V), K), RANK () is the computation and node v i Function of the most similar first K node index sets, sim (x) i Is a computing node v i Characteristic x of i K represents the number of nodes with the maximum similarity to the central node; p is the probability that the model is randomly added with KNN to carry out node enhancement in the training stage; e is the same as n Is a threshold value for the probability.
6. A construction method comprising the GAT map neural network defense method of any one of claims 1 to 5, comprising the following steps:
A. determining a target GAT graph neural network;
B. defending the target GAT diagram neural network determined in the step A by adopting the GAT diagram neural network defense method of one of claims 1 to 5;
C. and after defense, obtaining the GAT graph neural network after final defense, and completing the construction of the GAT graph neural network.
7. A pedestrian detection method comprising the construction method of claim 6, specifically comprising the steps of:
a. determining an initial GAT (Gauss transform) map neural network model adopted by pedestrian detection in the automatic driving process;
b. b, constructing the GAT map neural network by adopting the initial GAT map neural network model determined in the step a and the construction method;
c. and c, adopting the GAT graph neural network obtained in the step b to detect the pedestrian in the automatic driving process.
CN202211412927.7A 2022-11-11 2022-11-11 Pedestrian detection method Active CN115906980B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211412927.7A CN115906980B (en) 2022-11-11 2022-11-11 Pedestrian detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211412927.7A CN115906980B (en) 2022-11-11 2022-11-11 Pedestrian detection method

Publications (2)

Publication Number Publication Date
CN115906980A true CN115906980A (en) 2023-04-04
CN115906980B CN115906980B (en) 2023-06-30

Family

ID=86472155

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211412927.7A Active CN115906980B (en) 2022-11-11 2022-11-11 Pedestrian detection method

Country Status (1)

Country Link
CN (1) CN115906980B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113806546A (en) * 2021-09-30 2021-12-17 中国人民解放军国防科技大学 Cooperative training-based method and system for defending confrontation of graph neural network
CN114708479A (en) * 2022-03-31 2022-07-05 杭州电子科技大学 Self-adaptive defense method based on graph structure and characteristics

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113806546A (en) * 2021-09-30 2021-12-17 中国人民解放军国防科技大学 Cooperative training-based method and system for defending confrontation of graph neural network
CN114708479A (en) * 2022-03-31 2022-07-05 杭州电子科技大学 Self-adaptive defense method based on graph structure and characteristics

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YA ZHANG等: "Urban Traffic Flow Forecast Based on FastGCRNN", JOURNAL OF ADVANCED TRANSPORTATION, vol. 2020, pages 1 - 9 *

Also Published As

Publication number Publication date
CN115906980B (en) 2023-06-30

Similar Documents

Publication Publication Date Title
CN106295637B (en) A kind of vehicle identification method based on deep learning and intensified learning
CN110941794A (en) Anti-attack defense method based on universal inverse disturbance defense matrix
CN111680292A (en) Confrontation sample generation method based on high-concealment universal disturbance
CN111310582A (en) Turbulence degradation image semantic segmentation method based on boundary perception and counterstudy
CN112200243B (en) Black box countermeasure sample generation method based on low query image data
CN114693983B (en) Training method and cross-domain target detection method based on image-instance alignment network
CN111967006A (en) Adaptive black box anti-attack method based on neural network model
CN113657491A (en) Neural network design method for signal modulation type recognition
CN114708479B (en) Self-adaptive defense method based on graph structure and characteristics
CN113822328A (en) Image classification method for defending against sample attack, terminal device and storage medium
CN113254927B (en) Model processing method and device based on network defense and storage medium
CN110706270A (en) Self-adaptive scene binocular stereo matching method based on convolutional neural network
CN111178504B (en) Information processing method and system of robust compression model based on deep neural network
CN113962281A (en) Unmanned aerial vehicle target tracking method based on Siamese-RFB
CN112580728A (en) Dynamic link prediction model robustness enhancing method based on reinforcement learning
CN113807214B (en) Small target face recognition method based on deit affiliated network knowledge distillation
CN117115547A (en) Cross-domain long-tail image classification method based on self-supervision learning and self-training mechanism
CN115906980A (en) GAT (goal-oriented programming) graph neural network defense method, construction method and pedestrian detection method
CN109919235B (en) Deep learning image classification model training method based on manual intervention sample set weight
CN109063940B (en) Intelligent vehicle threat estimation system and method based on variable structure Bayesian network
CN113487870B (en) Anti-disturbance generation method for intelligent single intersection based on CW (continuous wave) attack
CN115294398A (en) SAR image target recognition method based on multi-attitude angle joint learning
CN115510986A (en) Countermeasure sample generation method based on AdvGAN
CN113920159A (en) Infrared aerial small target tracking method based on full convolution twin network
Yang et al. UP-Net: unique keyPoint description and detection net

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant