CN115659387A - Neural-channel-based user privacy protection method, electronic device and medium

Neural-channel-based user privacy protection method, electronic device and medium

Info

Publication number: CN115659387A
Authority: CN (China)
Prior art keywords: graph, neural, backbone, network model, matrix
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Application number: CN202211184386.7A
Other languages: Chinese (zh)
Inventors: 陈晋音 (Chen Jinyin), 黄国瀚 (Huang Guohan), 宣琦 (Xuan Qi)
Current Assignee: Zhejiang University of Technology (ZJUT)
Original Assignee: Zhejiang University of Technology (ZJUT)
Filing date: 2022-09-27
Publication date: 2023-01-31
Application filed by Zhejiang University of Technology (ZJUT)
Priority to CN202211184386.7A
Publication of CN115659387A
Landscapes

  • Storage Device Security (AREA)

Abstract

The invention discloses a neural-pathway-based user privacy protection method, which comprises the following steps: (1) perform neuron testing on the training data with a pre-trained graph convolution network model to obtain key neural pathways; (2) acquire the significant graph structure related to main-task performance and extract a backbone graph from it; (3) generate a non-backbone graph positively correlated with the main task based on the acquired key neural pathways; (4) extract node embeddings from the backbone graph and the non-backbone graph respectively with the pre-trained graph convolution network model; (5) weight and combine the node embeddings corresponding to the backbone graph and the non-backbone graph to obtain privacy-robust node embeddings. The method can effectively reduce the privacy disclosure risk of a graph neural network model, improve its robustness to privacy inference attacks, and enhance its ability to protect private data.

Description

Neural-channel-based user privacy protection method, electronic device and medium
Technical Field
The invention belongs to the technical field of network security, and particularly relates to a neural-pathway-based user privacy protection method, an electronic device, and a medium.
Background
With the development of the internet, massive amounts of data are generated at all times. These data often take the form of graph-structured data, such as social networks, traffic networks, and financial transaction networks. Taking a social network as an example, the nodes are usually users or organizations on social software, and the edges may be friend relationships between user nodes. With little manual labeling available, a key problem is how to efficiently and reasonably analyze and make inferences over massive graph-structured data and put it into actual production. Compared with image data, graph-structured data tends to be irregular, so convolutional networks and other architectures common in the image field have difficulty processing it effectively; graph neural networks, with their strong ability to extract graph-structural relationships, have attracted attention and achieved success in graph representation learning. A graph neural network maps graph-structured data to a low-dimensional space for downstream applications (such as node classification, link prediction, and community discovery) and finally applies it in actual systems (such as recommendation systems).
As research on graph neural networks develops, their privacy and security problems have drawn increasing attention from researchers. Research shows that existing graph neural networks carry a risk of privacy disclosure: an attacker can infer sensitive information in the graph data from the low-dimensional features extracted by the graph neural network and thereby acquire private data. In real life, the consequences of such privacy disclosure are serious. For example, a fraudster can infer a user's key private information (such as the user's income and transaction relationships) from the prediction results of a financial model, steal the user's personal information, and then carry out illegal acts such as targeted telecommunication fraud, endangering social security. The risk of privacy disclosure therefore brings hidden dangers to people's daily lives, and improving a model's ability to protect private data has become a research hotspot.
To address these problems, researchers have proposed different defense strategies. One is to introduce a differential privacy mechanism, misleading attackers by adding differential noise to the data so that they cannot accurately infer the private data, though this may adversely affect model performance. Another introduces adversarial noise during training to balance main-task performance against privacy disclosure risk, which likewise affects model performance to a certain extent. Therefore, effectively defending graph neural networks against privacy-stealing attacks while preserving model performance as far as possible has important practical significance for improving the privacy security, data protection capability, and credibility of graph neural networks in actual systems.
Disclosure of Invention
In view of the privacy disclosure risk of graph neural network models, and in order to improve their robustness to privacy-stealing attacks, their credibility, and their ability to protect private data, the invention provides a user privacy protection method based on neural pathways. The method reasonably transforms the input graph data and hides the original graph structure information in the node embeddings extracted by the model, thereby improving the robustness of the graph neural network to privacy-stealing attacks and ensuring the safety of the user's key private information while maintaining the performance of the graph neural network model.
To realize the invention, the technical scheme provided by the invention is as follows. A first aspect of an embodiment of the present invention provides a neural-pathway-based user privacy protection method, which specifically comprises the following steps:
step 1, pre-training a graph neural network model, performing neuron testing and statistics on the training data, and obtaining the key neural pathways related to the node labels;
step 2, fixing the parameters of the graph convolution network model pre-trained in the step 1, adding a mask matrix M on the graph adjacency matrix, and training the mask matrix M by using a gradient descent algorithm to obtain a final mask matrix M final And according to a preset threshold value delta, from the final mask matrix M final Selecting Q connecting edges as a backbone diagram A b
step 3, obtaining the L-th layer hidden representation $P_L$ of the nodes based on the graph convolution network model pre-trained in step 1, decoding the hidden representation into an approximate adjacency matrix A', constructing a loss function based on the key neural pathways obtained in step 1, calculating a connecting-edge gradient matrix from the approximate adjacency matrix A' and the loss function, and selecting the Q connecting edges with the largest gradient values from the gradient matrix as a non-backbone graph $A_n$;
Step 4, extracting node embedding from the backbone graph obtained in the step 2 and the non-backbone graph obtained in the step 3 by using the graph neural network model pre-trained in the step 1;
step 5, adjusting the weight, and performing weighted combination of the node embeddings corresponding to the backbone graph and the non-backbone graph obtained in step 4 to obtain privacy-robust node embeddings.
A second aspect of embodiments of the present invention provides an electronic device, comprising a memory and a processor, the memory being coupled to the processor; wherein the memory is configured to store program data, and the processor is configured to execute the program data to implement the neural pathway based user privacy protection method described above.
A third aspect of embodiments of the present invention provides a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the neural-pathway-based user privacy protection method described above.
Compared with the prior art, the beneficial effects of the invention are at least the following:
the invention provides a user privacy protection method based on a neural pathway, which comprises the following steps of firstly, carrying out neuron test through a pre-trained graph convolution network model to obtain a key neural pathway related to a node label; secondly, constructing a graph interpretation model, acquiring a significant graph structure related to the performance of the main task, and extracting a backbone graph; thirdly, constructing a graph generator model based on the key neural pathway to generate a non-backbone graph; then, embedding and extracting the backbone diagram and the given backbone diagram respectively by using a pre-training model; and finally, carrying out weighted combination on the obtained embedding corresponding to the backbone graph and the non-backbone graph to obtain the privacy robust node embedding. The robustness of the graph neural network model to privacy stealing attack is improved, the credibility of the graph neural network model and the protection capability of the graph neural network model to privacy data are improved,
drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is an overall framework schematic diagram of a neural pathway-based user privacy protection method.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings.
Referring to fig. 1, an embodiment of the present invention provides a neural-pathway-based user privacy protection method, including the following steps:
step 1: and pre-training the neural network model of the graph, performing neuron test and statistics on the training data, and obtaining a key neural path related to the node label.
As shown in fig. 1, in this embodiment the graph neural network model is a graph convolution network model. First, the pre-trained graph convolution model is used to perform neuron testing on a test node to obtain its key neural pathway. Taking an L-layer graph convolution network model as an example, the last layer of key neurons associated with node t can be defined as:
$$k_L = \operatorname{top}_{K_L}\big(Z_L^t\big) \quad (1)$$

where $Z_L^t$ is the neuron output of node t at the L-th layer of the graph convolution network model, and $\operatorname{top}_{K_L}(\cdot)$ denotes taking the $K_L$ neurons of $Z_L^t$ with the largest outputs.
For the l-th layer ($l \le L$), the neuron output $Z_l$ can be expressed as:

$$Z_l = \tilde{D}^{-\frac{1}{2}}\big(A + I_N\big)\tilde{D}^{-\frac{1}{2}} P_{l-1} W_{l-1} \quad (2)$$

where A is the adjacency matrix of the input graph, $I_N$ is the N×N identity matrix, N represents the number of nodes in the graph, $\tilde{D}$ is the degree matrix of $A + I_N$, $P_l$ is the hidden representation (node embedding) of the l-th layer ($P_0 = X$, where X is the input node feature matrix), and $W_{l-1}$ is the model weight matrix of layer l-1.
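As an illustration of the propagation rule in equation (2), the following is a minimal NumPy sketch of a single graph-convolution layer. The function and variable names (`gcn_layer`, `A`, `P_prev`, `W`) are ours, not the patent's, and the activation used to obtain $P_l$ from $Z_l$ is assumed to be ReLU as in a standard GCN:

```python
import numpy as np

def gcn_layer(A, P_prev, W):
    """One layer of equation (2): Z_l = D~^(-1/2) (A + I_N) D~^(-1/2) P_{l-1} W_{l-1}."""
    N = A.shape[0]
    A_hat = A + np.eye(N)                      # A + I_N: add self-loops
    deg = A_hat.sum(axis=1)                    # degrees of A + I_N
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))   # D~^(-1/2)
    Z = D_inv_sqrt @ A_hat @ D_inv_sqrt @ P_prev @ W
    P = np.maximum(Z, 0.0)                     # assumed ReLU: P_l from Z_l
    return Z, P                                # neuron outputs and hidden representation
```

Stacking L such layers with $P_0 = X$ yields the per-layer neuron outputs $Z_1, \ldots, Z_L$ used below.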
The key neurons of the l-th layer can likewise be defined as:

$$k_l = \operatorname{top}_{K_l}\big(Z_l^t\big) \quad (3)$$
obtaining the neural pathway P by the formula (1) and the formula (3) t =[k L ,k l-1 ,...,k 1 ]. Carrying out neuron test on training data according to the labels, counting the number of nodes with the same neural pathways activated in the same label node set, and selecting the neural pathway with the most nodes as a key neural pathway related to the labels
Figure BDA0003866772740000044
Where C is the number of classes of nodes in the graph.
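The pathway statistics of step 1 can be sketched as follows. This is an illustrative Python fragment under the definitions above; the names (`neural_pathway`, `key_pathways`) and the representation of a pathway as a tuple of top-neuron index tuples are our assumptions:

```python
import numpy as np
from collections import Counter

def neural_pathway(layer_outputs, k_per_layer):
    """P_t = [k_L, ..., k_1]: per equations (1) and (3), the indices of the
    top-K_l neurons at each layer for one node, ordered from layer L down to 1."""
    pathway = []
    for Z_t, k in zip(reversed(layer_outputs), reversed(k_per_layer)):
        top_k = tuple(np.argsort(Z_t)[-k:][::-1])   # top_{K_l}(Z_l^t)
        pathway.append(top_k)
    return tuple(pathway)

def key_pathways(per_node_layer_outputs, labels, k_per_layer):
    """For each label c, count how many same-label nodes activate each pathway
    and keep the most frequent one as the key pathway P_c."""
    counts = {}
    for outputs, y in zip(per_node_layer_outputs, labels):
        p = neural_pathway(outputs, k_per_layer)
        counts.setdefault(y, Counter())[p] += 1
    return {c: counter.most_common(1)[0][0] for c, counter in counts.items()}
```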
Step 2, fixing the parameters of the graph convolution network model pre-trained in step 1, adding a mask matrix M on the graph adjacency matrix, training the mask matrix M with a gradient descent algorithm to obtain a final mask matrix $M_{final}$, and selecting Q connecting edges from $M_{final}$ according to a preset threshold $\delta$ as the backbone graph $A_b$.
As shown in fig. 1, the parameters of the pre-trained graph convolution network model are first fixed, and a trainable mask matrix M is added on the graph adjacency matrix. The training target of the mask matrix M is:
$$\max_M \; \mathrm{MI}\big(Y, (A_S \odot M, X_S)\big) - \sum_{l=1}^{L} \mathrm{mse}\big(Z_l, Z_l^M\big), \qquad \mathrm{MI} = H(Y) - H\big(Y \mid f_{\theta^*}(A_S \odot M, X_S)\big) \quad (4)$$

where MI(·) represents mutual information, $f_{\theta^*}$ is the pre-trained graph convolution network model with parameters $\theta^*$ fixed, $A_S$ and $X_S$ are respectively the adjacency matrix of the r-order subgraph of node t and its corresponding feature matrix, Y is the prediction result of the model, ⊙ denotes element-wise multiplication, and H(·) is the information entropy (H(Y) is a constant). mse(·) represents the mean square error function; $Z_l$ and $Z_l^M$ are the l-th layer neuron outputs of the graph convolution network model when the input adjacency matrix is $A_S$ and $A_S \odot M$, respectively.
The training target of the mask matrix M in equation (4) has two parts: (1) maximize the mutual information between the model outputs when the input adjacency matrix is $A_S$ and $A_S \odot M$ respectively, which ensures that the masked adjacency matrix preserves main-task performance as much as possible; (2) minimize the difference in each layer's neuron outputs, so that the input $A_S \odot M$ activates a neural pathway similar to that activated by the input $A_S$.
The mask matrix M is trained with a gradient descent algorithm and processed as follows:

$$M_{final} = \mathrm{sigmoid}(M) \quad (5)$$

The final mask matrix $M_{final}$ is obtained, and according to a preset threshold $\delta$, Q connecting edges are selected from $M_{final}$ as the backbone graph $A_b$:

$$A_b[i,j] = \begin{cases} 1, & M_{final}[i,j] \ge \delta \\ 0, & \text{otherwise} \end{cases} \quad (6)$$
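A condensed PyTorch sketch of step 2 is given below. It assumes a dense adjacency matrix and a model object `f` exposing `f(A, X)` for logits and `f.layer_outputs(A, X)` for the per-layer neuron outputs $Z_l$; these hooks, the use of cross-entropy against the fixed model's own predictions as a stand-in for the mutual-information term, and all names are our assumptions, not the patent's:

```python
import torch
import torch.nn.functional as F

def train_backbone_mask(f, A_S, X_S, steps=200, lr=0.01, delta=0.5):
    """Step 2 sketch: learn an edge mask M (eq. 4), squash it (eq. 5),
    and threshold it into the backbone graph A_b (eq. 6)."""
    for p in f.parameters():
        p.requires_grad_(False)                        # theta* stays fixed
    M = torch.zeros_like(A_S, requires_grad=True)      # trainable mask logits
    with torch.no_grad():
        Y = f(A_S, X_S).argmax(dim=-1)                 # fixed model's prediction
        Z_ref = f.layer_outputs(A_S, X_S)              # Z_l on the unmasked input
    opt = torch.optim.Adam([M], lr=lr)
    for _ in range(steps):
        A_masked = A_S * torch.sigmoid(M)              # A_S ⊙ M, element-wise
        loss = F.cross_entropy(f(A_masked, X_S), Y)    # keep main-task behaviour
        Z_m = f.layer_outputs(A_masked, X_S)
        loss = loss + sum(F.mse_loss(zm, zr)           # align per-layer neurons
                          for zm, zr in zip(Z_m, Z_ref))
        opt.zero_grad()
        loss.backward()
        opt.step()
    M_final = torch.sigmoid(M).detach()                # eq. (5)
    return (M_final >= delta).float() * A_S            # eq. (6): backbone graph A_b
```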
Step 3, obtaining the L-th layer hidden representation $P_L$ of the nodes based on the graph convolution network model pre-trained in step 1, decoding the hidden representation into an approximate adjacency matrix A', constructing a loss function based on the key neural pathways obtained in step 1, calculating a connecting-edge gradient matrix from the approximate adjacency matrix A' and the loss function, and selecting the Q connecting edges with the largest gradient values from the gradient matrix as the non-backbone graph $A_n$.
As shown in fig. 1, the original graph (A, X) is input into the pre-trained graph convolution network model to obtain the L-th layer hidden representation $P_L$ of the nodes. It is first decoded into an approximate adjacency matrix A':

$$A' = \mathrm{sigmoid}\big(P_L \cdot (P_L)^T\big) \quad (7)$$

where $(\cdot)^T$ is the transpose operation.
Then, based on the obtained key neural pathway $P_c$ and the number of model layers, neuron label vectors (denoted here $T_l \in \{0,1\}^{|F_l|}$, $l \in [1, \ldots, L]$) are constructed, where the $k_l$-th dimension of $T_l$ is 1 and the rest are 0. The loss function is further constructed as follows:
$$\mathcal{L} = -\sum_{n}\sum_{c=1}^{C} Y_{nc} \log Y'_{nc} + \sum_{l=1}^{L} \frac{1}{|F_l|}\,\mathrm{mse}\big(Z_l, T_l\big) \quad (8)$$

where C is the number of node classes in the graph, $Y_{nc}$ represents the real label of the n-th node in class c, $Y'_{nc}$ represents the predicted output value of the graph neural network model for the n-th node in class c, and $|F_l|$, $l \in [1, \ldots, L]$, is the number of l-th layer neurons of the model.
The connecting-edge gradient matrix is then calculated, with the formula:

$$g = \frac{\partial \mathcal{L}}{\partial A'} \quad (9)$$
then selecting the front Q connecting edges with the maximum gradient value as a non-backbone graph A positively correlated with the main task n Wherein Q is the number of connected edges in the backbone graph obtained in step 2).
Step 4: the pre-trained model extracts node embeddings from the backbone graph and the non-backbone graph.
as shown in fig. 1, the backbone map a obtained according to step 2) and step 3) b And non-backbone diagram A n Separately aligning the backbone graph A by using a pre-trained graph convolution network model b And non-backbone diagram A n Extracting the embedding P of the node t b And P n The formula is as follows:
Figure BDA0003866772740000062
wherein, the first and the second end of the pipe are connected with each other,
Figure BDA0003866772740000063
is a parameter theta * A fixed pre-trained graph convolution network model, with X being an input node feature.
Step 5: privacy-robust node embeddings are obtained by weighted combination of the node embeddings.
as shown in fig. 1, embedding P for the node obtained in step 4) b And P n Carrying out weighted combination to obtain node embedding of privacy robustness:
P robust =(1-η)P b +ηP n (11)
where η is a weighting factor. Adjusting the value of η controls the presence of a backbone map and a non-backbone map at P robust The specific gravity of (1) to adjust the intensity of privacy protection, and when eta is larger, the privacy protection capability is stronger.
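Steps 4 and 5 reduce to two forward passes and a convex combination. A minimal sketch, assuming the fixed pre-trained model `f` returns node embeddings for an (adjacency, features) pair:

```python
import torch

def privacy_robust_embedding(f, A_b, A_n, X, eta=0.5):
    """Steps 4-5: extract P_b and P_n with the fixed model (eq. 10)
    and weight-combine them into P_robust (eq. 11)."""
    with torch.no_grad():
        P_b = f(A_b, X)                   # embedding from the backbone graph
        P_n = f(A_n, X)                   # embedding from the non-backbone graph
    return (1.0 - eta) * P_b + eta * P_n  # larger eta, stronger privacy protection
```

In use, η would be tuned on a validation split to trade main-task accuracy against resistance to privacy inference attacks.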
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above embodiments are only used for illustrating the design idea and features of the present invention, and the purpose of the present invention is to enable those skilled in the art to understand the content of the present invention and implement the present invention accordingly, and the protection scope of the present invention is not limited to the above embodiments. Therefore, all equivalent changes and modifications made in accordance with the principles and concepts disclosed herein are intended to be included within the scope of the present invention.

Claims (10)

1. A user privacy protection method based on a neural pathway is characterized by specifically comprising the following steps:
step 1, pre-training a graph neural network model, performing neuron testing and statistics on the training data, and obtaining the key neural pathways related to the node labels;
step 2, fixing the parameters of the graph convolution network model pre-trained in the step 1, adding a mask matrix M on the graph adjacency matrix, and training the mask matrix M by using a gradient descent algorithm to obtain a final mask matrix M final And according to a preset threshold value delta, from the final mask matrix M final Selecting Q connecting edges as a backbone graph A b
step 3, obtaining the L-th layer hidden representation $P_L$ of the nodes based on the graph convolution network model pre-trained in step 1, decoding the hidden representation into an approximate adjacency matrix A', constructing a loss function based on the key neural pathways obtained in step 1, calculating a connecting-edge gradient matrix from the approximate adjacency matrix A' and the loss function, and selecting the Q connecting edges with the largest gradient values from the gradient matrix as a non-backbone graph $A_n$;
Step 4, extracting node embedding from the backbone graph obtained in the step 2 and the non-backbone graph obtained in the step 3 by using the graph neural network model pre-trained in the step 1;
step 5, adjusting the weight, and performing weighted combination of the node embeddings corresponding to the backbone graph and the non-backbone graph obtained in step 4 to obtain privacy-robust node embeddings.
2. The neural-pathway-based user privacy protection method according to claim 1, wherein the graph neural network model is a graph convolution network model.
3. The method for protecting user privacy based on neural pathways according to claim 2, wherein the step 1 specifically comprises:
the last layer of key neurons associated with node t is defined as:
Figure FDA0003866772730000011
wherein the content of the first and second substances,
Figure FDA0003866772730000012
outputting the neurons of the node t in the L-th layer graph convolution network model;
Figure FDA0003866772730000013
express get
Figure FDA0003866772730000014
K of (1) L A neuron outputting Z for the neuron of the l-th layer l Expressed as:
Figure FDA0003866772730000021
where A is the adjacency matrix of the input graph, I N For a diagonal matrix, N represents the number of nodes in the graph;
Figure FDA0003866772730000022
is A + I N Degree matrix of (P) l For hidden representation/node embedding at layer I, W l-1 Model weight matrix of l-1 layer;
layer I neurons can be defined as:
Figure FDA0003866772730000023
obtaining the neural pathway P by the formula (1) and the formula (3) t =[k L ,k l-1 ,...,k 1 ](ii) a Carrying out neuron test on training data according to the label, counting the number of nodes with the same activation neural pathway in the same label node set, and selecting the neural pathway with the most nodes as a key neural pathway related to the label
Figure FDA0003866772730000024
Where C is the number of classes of nodes in the graph.
4. The neural-pathway-based user privacy protection method according to claim 2, wherein the step 2 is specifically:
firstly, fixing parameters of the graph neural network model pre-trained in the step 1, and adding a trainable mask matrix M on a graph adjacency matrix, wherein the training target of the mask matrix M is as follows:
$$\max_M \; \mathrm{MI}\big(Y, (A_S \odot M, X_S)\big) - \sum_{l=1}^{L} \mathrm{mse}\big(Z_l, Z_l^M\big), \qquad \mathrm{MI} = H(Y) - H\big(Y \mid f_{\theta^*}(A_S \odot M, X_S)\big) \quad (4)$$

wherein MI(·) represents mutual information, $f_{\theta^*}$ is the pre-trained graph convolution network model with the parameters $\theta^*$ fixed, $A_S$ and $X_S$ are respectively the adjacency matrix of the r-order subgraph of node t and its corresponding feature matrix, Y is the prediction result of the model, ⊙ denotes element-wise multiplication, and H(·) is the information entropy; mse(·) represents the mean square error function, and $Z_l$ and $Z_l^M$ are the l-th layer neuron outputs of the graph convolution network model when the input adjacency matrix is $A_S$ and $A_S \odot M$, respectively;

the mask matrix M is trained with a gradient descent algorithm, the final mask matrix $M_{final} = \mathrm{sigmoid}(M)$ (5) is obtained after training, and Q connecting edges are selected from $M_{final}$ as the backbone graph $A_b$ according to a preset threshold $\delta$, with the formula:

$$A_b[i,j] = \begin{cases} 1, & M_{final}[i,j] \ge \delta \\ 0, & \text{otherwise} \end{cases} \quad (6)$$
5. the neural-pathway-based user privacy protection method according to claim 2, wherein the step 3 is specifically:
inputting the original graph (A, X) into the pre-trained graph convolution network model to obtain the L-th layer hidden representation $P_L$ of the nodes, wherein A is the adjacency matrix of the input graph and X is the input node feature; $P_L$ is first decoded into the approximate adjacency matrix A':

$$A' = \mathrm{sigmoid}\big(P_L \cdot (P_L)^T\big) \quad (7)$$

wherein $(\cdot)^T$ is the transpose operation;

then, based on the key neural pathway $P_c$ obtained in step 1 and the number of model layers, neuron label vectors (denoted here $T_l \in \{0,1\}^{|F_l|}$, $l \in [1, \ldots, L]$) are constructed, wherein the $k_l$-th dimension is 1 and the rest are 0; the loss function is further constructed as follows:

$$\mathcal{L} = -\sum_{n}\sum_{c=1}^{C} Y_{nc} \log Y'_{nc} + \sum_{l=1}^{L} \frac{1}{|F_l|}\,\mathrm{mse}\big(Z_l, T_l\big) \quad (8)$$

wherein C is the number of node classes in the graph, $Y_{nc}$ represents the real label of the n-th node in class c, $Y'_{nc}$ represents the predicted output value of the graph neural network model for the n-th node in class c, and $|F_l|$, $l \in [1, \ldots, L]$, is the number of l-th layer neurons of the model;

the connecting-edge gradient matrix is calculated with the formula:

$$g = \frac{\partial \mathcal{L}}{\partial A'} \quad (9)$$

then the Q connecting edges with the largest gradient values are selected as the non-backbone graph $A_n$, wherein Q is the number of connecting edges in the backbone graph obtained in step 2.
6. The neural-pathway-based user privacy protection method according to claim 2, wherein the step 4 specifically comprises:
using the graph neural network model pre-trained in step 1, node embeddings are extracted from the backbone graph $A_b$ obtained in step 2 and the non-backbone graph $A_n$ obtained in step 3 respectively, with the formula:

$$P_b = f_{\theta^*}\big(A_b, X\big), \qquad P_n = f_{\theta^*}\big(A_n, X\big) \quad (10)$$

wherein $f_{\theta^*}$ is the pre-trained graph convolution network model with the parameters $\theta^*$ fixed, and X is the input node feature.
7. The method according to claim 2, wherein in step 5, the node embeddings $P_b$ and $P_n$ corresponding to the backbone graph and the non-backbone graph obtained in step 4 are weighted and combined to obtain the privacy-robust node embedding $P_{robust}$, with the calculation formula:

$$P_{robust} = (1-\eta)P_b + \eta P_n \quad (11)$$

where η is a weighting factor.
8. The method as claimed in claim 7, wherein the larger the weighting factor η, the stronger the privacy protection capability of the privacy-robust node embedding.
9. An electronic device comprising a memory and a processor, wherein the memory is coupled to the processor; wherein the memory is configured to store program data and the processor is configured to execute the program data to implement the neural pathway based user privacy preserving method of any one of claims 1-8.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a neural pathway-based user privacy protection method as set forth in any one of claims 1-8.
CN202211184386.7A 2022-09-27 2022-09-27 Neural-channel-based user privacy protection method, electronic device and medium Pending CN115659387A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202211184386.7A | 2022-09-27 | 2022-09-27 | Neural-channel-based user privacy protection method, electronic device and medium

Publications (1)

Publication Number | Publication Date
CN115659387A (en) | 2023-01-31

Family ID: 84984919

Country Status (1)

CN: CN115659387A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116484430A (en) * 2023-06-21 2023-07-25 济南道图信息科技有限公司 Encryption protection method for user privacy data of intelligent psychological platform
CN116484430B (en) * 2023-06-21 2023-08-29 济南道图信息科技有限公司 Encryption protection method for user privacy data of intelligent psychological platform


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination