CN114445639A - Dual self-attention-based dynamic graph anomaly detection method - Google Patents


Info

Publication number
CN114445639A
CN114445639A (application CN202210014102.3A)
Authority
CN
China
Prior art keywords: attention, dusag, vertex, self, graph
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210014102.3A
Other languages
Chinese (zh)
Inventor
包先雨
方凯彬
李俊杰
程立勋
林伟钦
王歆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Shenzhen Academy of Inspection and Quarantine
Shenzhen Customs Information Center
Original Assignee
Shenzhen University
Shenzhen Academy of Inspection and Quarantine
Shenzhen Customs Information Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University, Shenzhen Academy of Inspection and Quarantine, Shenzhen Customs Information Center filed Critical Shenzhen University
Priority to CN202210014102.3A priority Critical patent/CN114445639A/en
Publication of CN114445639A publication Critical patent/CN114445639A/en
Priority to PCT/CN2022/112655 priority patent/WO2023130727A1/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/22 — Matching criteria, e.g. proximity measures
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/04 — Architecture, e.g. interconnection topology
    • G06N3/045 — Combinations of networks
    • G06N3/08 — Learning methods


Abstract

The invention discloses a dynamic graph anomaly detection method based on dual self-attention, comprising the following steps: S1: DuSAG extracts the structural features and temporal features of the dynamic graph to detect abnormal edges; S2: DuSAG applies structural self-attention to the vertex sequences obtained by random walk sampling of the graph, so that DuSAG can focus on the more important vertices in a vertex sequence, enhancing the structural feature learning of the graph; S3: DuSAG applies temporal self-attention to the vertex embeddings of different timestamps, so that DuSAG can capture the evolution pattern of vertices and learn the temporal features of the graph. The method introduces a structural self-attention mechanism that focuses on the more important vertices, enhancing the extraction of structural features compared with NetWalk. DuSAG also introduces temporal self-attention, learning the evolution pattern of vertices and extracting temporal features. DuSAG detects anomalous data more effectively, demonstrating the effectiveness of dual self-attention for anomaly detection.

Description

Dynamic graph anomaly detection method based on dual self-attention
Technical Field
The invention belongs to the technical field of dynamic graph anomaly detection, and particularly relates to a dynamic graph anomaly detection method based on dual self-attention.
Background
Dynamic graph anomaly detection is an important research direction in the graph field. Anomalies in a dynamic graph include vertex anomalies, edge anomalies, and subgraph anomalies. Many applications of dynamic graphs use edges to represent complex topologies and temporal characteristics; the detection of abnormal edges is therefore a key part of dynamic graph anomaly detection. Abnormal-edge detection in dynamic graphs has wide application, such as intrusion detection systems and social network fraud detection. By mining the anomalies in a dynamic graph, some security incidents can be prevented and economic losses avoided or reduced.
Graph embedding models map vertices, edges, or subgraphs of a graph to a new vector space. In the new vector space, the embeddings can express different attributes depending on the graph embedding method, and the embeddings are learned without manual intervention. On large, complex graphs, graph embedding models outperform traditional heuristic methods. Because of this excellent performance, many studies extract features of a graph with a graph embedding method and then perform anomaly detection using the extracted features.
NetWalk is a classic and commonly used graph-embedding-based dynamic graph anomaly detection algorithm. NetWalk dynamically updates the network representation as the dynamic graph is updated and uses the updated representation for anomaly detection. NetWalk first encodes the vertices of the dynamic graph as vectors with an autoencoder using clique embedding, minimizing the distance between vertex embeddings within a random walk and using the autoencoder reconstruction error as a global regularization term. With a reservoir algorithm, the vector representation can be computed with constant space requirements. After learning the vertex embeddings, a clustering-based technique is employed to detect network anomalies incrementally and dynamically.
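For reference, NetWalk's constant-space maintenance relies on reservoir sampling. A minimal Python sketch of the classic reservoir algorithm follows (illustrative only; not NetWalk's exact variant, and all names are made up for the example):

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Keep a uniform random sample of k items from a stream of unknown
    length using O(k) space (classic Algorithm R)."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)          # fill the reservoir first
        else:
            j = rng.randrange(i + 1)        # item survives with prob k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

sample = reservoir_sample(range(1000), k=5)
print(len(sample))  # 5
```

The same idea lets a walk set be refreshed edge-by-edge without re-sampling the whole graph.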
The prior art has the following defects. NetWalk is an important study in the direction of dynamic graph anomaly detection, but it has two problems: (I) when the vertex sequences obtained by random walk are long, NetWalk treats every vertex in a walk equally, and its performance degrades because the more important vertices receive no extra attention; (II) new vertex sequences are sampled for training with a random walk algorithm, which makes it difficult to capture the evolution patterns of vertices. These two problems degrade anomaly detection performance to some extent. We therefore propose a dynamic graph anomaly detection method based on dual self-attention to solve the problems mentioned above.
Disclosure of Invention
The present invention is directed to a dynamic graph anomaly detection method based on dual self-attention, so as to solve the problems mentioned in the background art.
In order to achieve this purpose, the invention provides the following technical scheme: a dynamic graph anomaly detection method based on dual self-attention, comprising the following steps:
S1: DuSAG extracts the structural features and temporal features of the dynamic graph to detect abnormal edges;
S2: DuSAG applies structural self-attention to the vertex sequences obtained by random walk sampling of the graph, so that DuSAG can focus on the more important vertices in a vertex sequence, enhancing the structural feature learning of the graph;
S3: DuSAG applies temporal self-attention to the vertex embeddings of different timestamps, so that DuSAG can capture the evolution pattern of vertices and learn the temporal features of the graph.
Structural self-attention is used to extract the structural features of the graph; vertex sequences are sampled on the graph with a random walk algorithm, each sequence can be viewed as a "sentence", and vertex embeddings are obtained using natural language processing techniques;
DuSAG first generates vertex sequences with the random walk algorithm of Node2Vec and learns structural features by applying structural self-attention to the set of random walks; through structural self-attention, vertex embeddings capture the features of local regions of the graph;
formally, at time step t, random walks are sampled on snapshot G_t; DuSAG samples ψ walks for each vertex and sets the random-walk length to L; the set of random walks on G_t is denoted Ω_t; for each walk, an initialized vertex embedding H_t converts the walk into an embedding Q_t, which is passed to SelfAttention, as follows:
O_t = SelfAttention(Q_t) (1);
O_t is the structural vertex embedding, which contains the structural features of the dynamic graph; DuSAG uses scaled dot-product attention, in which the queries, keys and values are linear transformations of the input embedding, where W_q, W_k and W_v are linear projection matrices; for each walk, SelfAttention is defined as follows:
α_t = (Q_t W_q)(Q_t W_k)^T / √d (2);
β_t = softmax(α_t) (3);
Z_t = β_t (Q_t W_v) (4);
where d is the projection dimension and β_t is the attention weight matrix; DuSAG uses multi-head self-attention to attend to different subspaces of the input and capture more complex patterns; O_t is obtained by concatenating the η head outputs over the ψ random walks:
O_t = Concat(Z_t^(1), …, Z_t^(η)) (5);
The goal of temporal self-attention is to capture the evolution pattern of vertices and extract temporal features; DuSAG applies self-attention to the embeddings O_t of different snapshots;
formally, for each walk, the embedding O_t is computed; vertex v is taken from the walk and O_v is constructed from w snapshots, where w is the window size; for each vertex v, position embedding is added to obtain O′_v = O_v + P, where P is the position embedding matrix, indexed by the absolute time step; vertex embedding U_v is then obtained with the self-attention algorithm SelfAttention′:
U_v = SelfAttention′(O′_v) (6);
unlike SelfAttention, SelfAttention′ attends only to the current and earlier timestamps, i.e. the future is masked; SelfAttention′ differs from SelfAttention only in the following equation:
β = softmax(α + M) (7);
the mask matrix M is defined as:
M_ij = 0 if i ≥ j, and M_ij = −∞ otherwise (8);
when M_ij = −∞, the softmax activation function yields zero weight, which allows DuSAG to mask future timestamps; after temporal self-attention, the vertex embedding U_v containing both structural and temporal features is obtained;
The loss function of DuSAG is set as follows: DuSAG minimizes the distance between vertices in the same random walk, increases the distance between vertices not in the same random walk, and uses negative sampling to reduce computational complexity;
formally, the loss function J_t is:
J_t = Σ_{ω∈Ω_t} Σ_{u,v∈ω} [ ‖U_u − U_v‖² + Σ_{v′∈N_v^t} max(0, b − ‖U_u − U_{v′}‖²) ] (9);
where N_v^t is the set of negatively sampled vertices of vertex v at time step t, and b is the margin;
get vertex embedding UtThen, edge anomaly detection is performed, and DuSAG converts vertex embedding and edge set into edge embedding K by using an edge encoder phitA one-class neural network of a 3-layer fully-connected network, namely OCNN is used as an abnormality detection framework, and a forward propagation formula is as follows:
Figure BDA0003459165450000045
Figure BDA0003459165450000046
wherein
Figure BDA0003459165450000047
And
Figure BDA0003459165450000048
the weight matrix and the bias vector of the OCNN l-th layer and the abnormal score vector are respectively
Figure BDA0003459165450000049
The abnormal edge can be distinguished from the normal by a threshold value, and the loss function of the OCNN is:
Figure BDA00034591654500000410
the over parameters of the OCNN are v and r, and the number of data points passing through the hyperplane and the deviation of the hyperplane are respectively controlled.
Compared with the prior art, the invention has the following beneficial effects: (1) NetWalk treats all vertices equally, so its anomaly detection performance degrades when the vertex sequence is long; DuSAG introduces a structural self-attention mechanism that focuses on the more important vertices, enhancing the extraction of structural features compared with NetWalk. (2) NetWalk has difficulty learning the evolution pattern of vertices; DuSAG introduces temporal self-attention, learns the evolution pattern of vertices, and extracts temporal features. DuSAG detects anomalous data more effectively, demonstrating the effectiveness of dual self-attention for anomaly detection.
Drawings
Fig. 1 is a schematic flow chart of the dual self-attention-based dynamic graph anomaly detection method according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The purpose of the invention is as follows: aiming at the two problems of NetWalk, the invention provides DuSAG, a dynamic graph anomaly detection algorithm based on dual self-attention. (1) The invention introduces a structural self-attention mechanism that focuses on the more important vertices, enhancing the learning of structural features compared with NetWalk. (2) The invention introduces temporal self-attention, learning the evolution pattern of vertices and enhancing the learning of temporal features compared with NetWalk.
The technical problems to be solved by the invention are as follows: anomaly detection is the identification of events or observations in data that do not match the expected pattern. A dynamic graph is a sequence of graphs that change over time; dynamic graph anomaly detection is the identification of anomalous data in a dynamic graph. The invention provides a dynamic graph anomaly detection method based on dual self-attention. The technical problem addressed is extracting structural and temporal features from a dynamic graph for anomalous data mining; the ultimate goal is, given a dynamic graph, to mine the anomalous data in it.
The invention provides a dynamic graph anomaly detection method based on dual self-attention, as shown in FIG. 1, comprising the following steps:
S1: DuSAG extracts the structural features and temporal features of the dynamic graph to detect abnormal edges;
S2: DuSAG applies structural self-attention to the vertex sequences obtained by random walk sampling of the graph, so that DuSAG can focus on the more important vertices in a vertex sequence and support longer random walk lengths, enhancing the structural feature learning of the graph;
S3: DuSAG applies temporal self-attention to the vertex embeddings of different timestamps, so that DuSAG can capture the evolution pattern of vertices and learn the temporal features.
FIG. 1 is the flow diagram of DuSAG: from top to bottom, the original dynamic graph, structural self-attention, temporal self-attention, anomaly detection, and the abnormal-edge detection result; the black arrows in Random Walk indicate the direction of the random walk.
FIG. 1 shows the flow of DuSAG, which mainly comprises three parts: structural self-attention, temporal self-attention, and anomaly detection. The implementation of the three parts is described in detail below.
(I) Structural self-attention: this part extracts the structural features of the graph. Random walk methods typically use a random walk algorithm to sample vertex sequences on the graph. Each sequence can be viewed as a "sentence", and vertex embeddings can be obtained using natural language processing techniques. DuSAG first generates vertex sequences with the random walk algorithm of Node2Vec and learns structural features by applying structural self-attention to the set of random walks. Through structural self-attention, vertex embeddings can capture the features of local regions of the graph. Some vertices in a walk tend to be more important than others; by using self-attention, the more important vertices can be focused on, enhancing the learning of structural features.
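As an illustration of the sampling step, a simplified Python sketch follows (uniform transitions only; Node2Vec additionally biases transitions with its return and in-out parameters p and q, which are omitted here, and all names are illustrative):

```python
import random

def sample_walks(adj, walk_len, walks_per_vertex, seed=0):
    """Sample vertex sequences by uniform random walk.

    `adj` maps each vertex to its neighbor list. Simplified sketch:
    Node2Vec would bias each transition by p and q instead of
    choosing a neighbor uniformly.
    """
    rng = random.Random(seed)
    walks = []
    for v in adj:
        for _ in range(walks_per_vertex):
            walk = [v]
            while len(walk) < walk_len:
                nbrs = adj[walk[-1]]
                if not nbrs:          # dead end: stop this walk early
                    break
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
walks = sample_walks(adj, walk_len=5, walks_per_vertex=2)
print(len(walks))  # 8 walks: 2 per vertex
```

Each walk then plays the role of one "sentence" passed to the attention stage.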
Formally, at time step t, random walks are sampled on snapshot G_t. DuSAG samples ψ walks for each vertex and sets the random-walk length to L. The set of random walks on G_t is denoted Ω_t. For each walk, an initialized vertex embedding H_t converts the walk into an embedding Q_t, which is passed to SelfAttention, as follows:
O_t = SelfAttention(Q_t) (1);
O_t is the structural vertex embedding, which contains the structural features of the dynamic graph. DuSAG uses scaled dot-product attention. The queries, keys and values are linear transformations of the input embedding, where W_q, W_k and W_v are linear projection matrices. For each walk, SelfAttention is defined as follows:
α_t = (Q_t W_q)(Q_t W_k)^T / √d (2);
β_t = softmax(α_t) (3);
Z_t = β_t (Q_t W_v) (4);
where d is the projection dimension and β_t is the attention weight matrix. DuSAG uses multi-head self-attention to attend to different subspaces of the input and capture more complex patterns; this further strengthens DuSAG's ability to learn large-scale graphs. O_t is obtained by concatenating the η head outputs over the ψ random walks:
O_t = Concat(Z_t^(1), …, Z_t^(η)) (5);
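The attention computation of equations (2)-(5) can be sketched in Python with NumPy as follows (a hedged illustration: the walk length, embedding dimension, head count η = 2 and the random projection matrices are made up for the example):

```python
import numpy as np

def self_attention(Q, Wq, Wk, Wv):
    """Scaled dot-product self-attention over one walk, eqs. (2)-(4):
    alpha = (Q Wq)(Q Wk)^T / sqrt(d), beta = softmax(alpha),
    Z = beta (Q Wv)."""
    d = Wq.shape[1]
    alpha = (Q @ Wq) @ (Q @ Wk).T / np.sqrt(d)
    alpha -= alpha.max(axis=-1, keepdims=True)  # numerical stability
    beta = np.exp(alpha) / np.exp(alpha).sum(axis=-1, keepdims=True)
    return beta @ (Q @ Wv)

rng = np.random.default_rng(0)
L, d, heads = 6, 8, 2                 # walk length, embedding dim, eta heads
Q = rng.normal(size=(L, d))           # one walk's embedding Q_t
# One (Wq, Wk, Wv) triple per head; concatenating the eta head outputs
# gives the per-walk contribution to O_t, as in eq. (5).
O = np.concatenate(
    [self_attention(Q, *[rng.normal(size=(d, d // heads)) for _ in range(3)])
     for _ in range(heads)], axis=-1)
print(O.shape)  # (6, 8)
```

Each of the ψ walks would be processed this way and the results stacked.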
(II) Temporal self-attention: the goal of this part is to capture the evolution pattern of vertices and extract temporal features. As shown in FIG. 1, DuSAG applies self-attention to the embeddings O_t of different snapshots.
Formally, for each walk, the embedding O_t is computed. Vertex v is taken from the walk and O_v is constructed from w snapshots, where w is the window size.
For each vertex v, position embedding is added to obtain O′_v = O_v + P, where P is the position embedding matrix, indexed by the absolute time step. Vertex embedding U_v is then obtained with the self-attention algorithm SelfAttention′:
U_v = SelfAttention′(O′_v) (6);
Unlike SelfAttention, SelfAttention′ attends only to the current and earlier timestamps, i.e. it masks the future. SelfAttention′ differs from SelfAttention only in the following equation:
β = softmax(α + M) (7);
The mask matrix M is defined as:
M_ij = 0 if i ≥ j, and M_ij = −∞ otherwise (8);
When M_ij = −∞, the softmax activation function yields zero weight, which allows DuSAG to mask future timestamps. After temporal self-attention, the vertex embedding U_v containing both structural and temporal features is obtained.
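The future-masking of equations (7)-(8) can be illustrated as follows (a minimal sketch; the window size w = 4 and the uniform attention scores are illustrative):

```python
import numpy as np

def causal_mask(w):
    """Mask matrix M of eq. (8): M[i, j] = 0 when j <= i (current and
    earlier timestamps), -inf when j > i (future timestamps)."""
    M = np.zeros((w, w))
    M[np.triu_indices(w, k=1)] = -np.inf
    return M

def masked_softmax(alpha, M):
    """Eq. (7): beta = softmax(alpha + M); -inf entries get zero weight."""
    a = alpha + M
    a -= a.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(a)
    return e / e.sum(axis=-1, keepdims=True)

w = 4
beta = masked_softmax(np.ones((w, w)), causal_mask(w))
# Row i spreads its weight uniformly over timestamps 0..i; the entries
# for future timestamps are exactly zero.
print(np.round(beta, 2))
```

This is why timestamp i can only attend to timestamps at or before i.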
(III) Anomaly detection: the loss function of DuSAG is set as follows: DuSAG minimizes the distance between vertices in the same random walk and increases the distance between vertices not in the same random walk. DuSAG uses negative sampling to reduce computational complexity, since with large data sets the number of vertices makes the loss computation expensive. Formally, the loss function J_t is:
J_t = Σ_{ω∈Ω_t} Σ_{u,v∈ω} [ ‖U_u − U_v‖² + Σ_{v′∈N_v^t} max(0, b − ‖U_u − U_{v′}‖²) ] (9);
where N_v^t is the set of negatively sampled vertices of vertex v at time step t, and b is the margin.
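Since the original formula image for equation (9) is not reproduced here, the following sketch encodes one plausible reading of the stated loss: attract vertices of the same walk and push each negatively sampled vertex at least the margin b away. All names and shapes are illustrative:

```python
import numpy as np

def walk_margin_loss(U, walk, negatives, b=1.0):
    """Hedged sketch of a loss in the spirit of eq. (9): squared distance
    pulls co-walk vertices together; a hinge with margin b pushes each
    negatively sampled vertex away. `U` is the vertex-embedding matrix;
    `negatives[v]` plays the role of N_v^t."""
    loss = 0.0
    for i, u in enumerate(walk):
        for v in walk[i + 1:]:
            loss += np.sum((U[u] - U[v]) ** 2)            # attract co-walk pairs
            for v_neg in negatives[v]:                    # repel negatives
                loss += max(0.0, b - np.sum((U[u] - U[v_neg]) ** 2))
    return loss

rng = np.random.default_rng(1)
U = rng.normal(size=(5, 4))
print(walk_margin_loss(U, walk=[0, 1, 2], negatives={1: [3], 2: [4]}) >= 0.0)  # True
```

Negative sampling keeps the inner sum small instead of ranging over all vertices.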
After the vertex embedding U_t is obtained, edge anomaly detection is performed. DuSAG converts the vertex embeddings and the edge set into edge embeddings K_t using an edge encoder φ. A one-class neural network (OCNN) consisting of a 3-layer fully connected network is adopted as the anomaly detection framework, with the forward propagation formula:
h^(l) = σ(W^(l) h^(l−1) + b^(l)) (10);
ŷ = h^(3) (11);
where W^(l) and b^(l) are the weight matrix and bias vector of the l-th OCNN layer, h^(0) = K_t, and ŷ is the anomaly score vector. Abnormal edges can be distinguished from normal ones by a threshold. The loss function of the OCNN is:
min (1/2) Σ_l ‖W^(l)‖²_F + (1/(ν n)) Σ_{i=1}^{n} max(0, r − ŷ_i) − r (12);
The hyperparameters of the OCNN are ν and r, which control, respectively, the fraction of data points allowed past the hyperplane and the bias of the hyperplane. The framework of DuSAG is shown in Algorithm 1.
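A hedged sketch of the 3-layer one-class network follows: the forward pass in the style of eq. (10) and a loss in the style of eq. (12), assuming the OC-NN form of Chalapathy et al. (the original formula images are not reproduced in this text); layer sizes and weights are illustrative:

```python
import numpy as np

def ocnn_score(K, weights, biases):
    """Forward pass of a small fully connected one-class network,
    eq. (10): h^(l) = sigma(W^(l) h^(l-1) + b^(l)); the last layer is
    linear and its output is the anomaly-score vector y_hat."""
    h = K
    for W, c in zip(weights[:-1], biases[:-1]):
        h = np.maximum(0.0, h @ W + c)                  # hidden layers: ReLU
    return (h @ weights[-1] + biases[-1]).ravel()       # linear score layer

def ocnn_loss(scores, weights, r, nu):
    """One-class objective in the style of eq. (12): weight decay plus a
    hinge on r - y_hat, minus r; nu trades off how many points may fall
    past the hyperplane."""
    reg = 0.5 * sum(np.sum(W ** 2) for W in weights)
    hinge = np.maximum(0.0, r - scores).mean() / nu
    return reg + hinge - r

rng = np.random.default_rng(2)
dims = [8, 16, 8, 1]                       # 3-layer fully connected net
Ws = [rng.normal(scale=0.1, size=(a, b)) for a, b in zip(dims, dims[1:])]
bs = [np.zeros(b) for b in dims[1:]]
K = rng.normal(size=(10, 8))               # 10 edge embeddings, playing K_t
scores = ocnn_score(K, Ws, bs)
print(scores.shape)  # (10,)
```

Edges whose score falls below the threshold r would be flagged as anomalous.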
[Algorithm 1: the overall DuSAG procedure (pseudocode figure not reproduced in this text)]
This invention provides DuSAG, a dynamic graph anomaly detection algorithm based on dual self-attention. DuSAG extracts the structural and temporal features of the dynamic graph to detect abnormal edges. DuSAG applies structural self-attention to the random walks of the graph, enabling it to focus on the more important vertices in a walk and better learn the structural features of the graph. DuSAG applies temporal self-attention to the vertex embeddings of different timestamps, enabling it to capture the evolution pattern of vertices and learn the temporal features of the graph.
The method of the invention was tested repeatedly on three real-world dynamic graph data sets, covering a social network graph, a paper co-authorship graph, and a computer network graph. Quantitative analysis with the AUC metric shows that, on the three data sets, DuSAG improves on NetWalk by 16% on average, indicating that DuSAG detects anomalous data well and demonstrating the effectiveness of dual self-attention for anomaly detection.
In summary, compared with the prior art (NetWalk), DuSAG has two advantages: (1) NetWalk treats all vertices equally, so its anomaly detection performance degrades when the vertex sequence is long; DuSAG introduces a structural self-attention mechanism that focuses on the more important vertices, enhancing the extraction of structural features. (2) NetWalk has difficulty learning the evolution pattern of vertices; DuSAG introduces temporal self-attention, learns the evolution pattern of vertices, and extracts temporal features. DuSAG detects anomalous data more effectively, demonstrating the effectiveness of dual self-attention for anomaly detection.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that modifications may be made to the embodiments or portions thereof without departing from the spirit and scope of the invention.

Claims (4)

1. A dynamic graph anomaly detection method based on dual self-attention, characterized by comprising the following steps:
S1: DuSAG extracts the structural features and temporal features of the dynamic graph to detect abnormal edges;
S2: DuSAG applies structural self-attention to the vertex sequences obtained by random walk sampling of the graph, so that DuSAG can focus on the more important vertices in a vertex sequence, enhancing the structural feature learning of the graph;
S3: DuSAG applies temporal self-attention to the vertex embeddings of different timestamps, so that DuSAG can capture the evolution pattern of vertices and learn the temporal features of the graph.
2. The method according to claim 1, characterized in that: structural self-attention is used to extract the structural features of the graph; vertex sequences are sampled on the graph with a random walk algorithm, each sequence can be viewed as a "sentence", and vertex embeddings are obtained using natural language processing techniques;
DuSAG first generates vertex sequences with the random walk algorithm of Node2Vec and learns structural features by applying structural self-attention to the set of random walks; through structural self-attention, vertex embeddings capture the features of local regions of the graph;
formally, at time step t, random walks are sampled on snapshot G_t; DuSAG samples ψ walks for each vertex and sets the random-walk length to L; the set of random walks on G_t is denoted Ω_t; for each walk, an initialized vertex embedding H_t converts the walk into an embedding Q_t, which is passed to SelfAttention, as follows:
O_t = SelfAttention(Q_t) (1);
O_t is the structural vertex embedding, which contains the structural features of the dynamic graph; DuSAG uses scaled dot-product attention, in which the queries, keys and values are linear transformations of the input embedding, where W_q, W_k and W_v are linear projection matrices; for each walk, SelfAttention is defined as follows:
α_t = (Q_t W_q)(Q_t W_k)^T / √d (2);
β_t = softmax(α_t) (3);
Z_t = β_t (Q_t W_v) (4);
where d is the projection dimension and β_t is the attention weight matrix; DuSAG uses multi-head self-attention to attend to different subspaces of the input and capture more complex patterns; O_t is obtained by concatenating the η head outputs over the ψ random walks:
O_t = Concat(Z_t^(1), …, Z_t^(η)) (5);
3. The method according to claim 1, characterized in that: the goal of temporal self-attention is to capture the evolution pattern of vertices and extract temporal features; DuSAG applies self-attention to the embeddings O_t of different snapshots;
formally, for each walk, the embedding O_t is computed; vertex v is taken from the walk and O_v is constructed from w snapshots, where w is the window size; for each vertex v, position embedding is added to obtain O′_v = O_v + P, where P is the position embedding matrix, indexed by the absolute time step; vertex embedding U_v is then obtained with the self-attention algorithm SelfAttention′:
U_v = SelfAttention′(O′_v) (6);
unlike SelfAttention, SelfAttention′ attends only to the current and earlier timestamps, i.e. the future is masked; SelfAttention′ differs from SelfAttention only in the following equation:
β = softmax(α + M) (7);
the mask matrix M is defined as:
M_ij = 0 if i ≥ j, and M_ij = −∞ otherwise (8);
when M_ij = −∞, the softmax activation function yields zero weight, which allows DuSAG to mask future timestamps; after temporal self-attention, the vertex embedding U_v containing both structural and temporal features is obtained.
4. The method according to claim 1, characterized in that: the loss function of DuSAG is set as follows: DuSAG minimizes the distance between vertices in the same random walk, increases the distance between vertices not in the same random walk, and uses negative sampling to reduce computational complexity;
formally, the loss function J_t is:
J_t = Σ_{ω∈Ω_t} Σ_{u,v∈ω} [ ‖U_u − U_v‖² + Σ_{v′∈N_v^t} max(0, b − ‖U_u − U_{v′}‖²) ] (9);
where N_v^t is the set of negatively sampled vertices of vertex v at time step t, and b is the margin;
after the vertex embedding U_t is obtained, edge anomaly detection is performed; DuSAG converts the vertex embeddings and the edge set into edge embeddings K_t using an edge encoder φ; a one-class neural network (OCNN) consisting of a 3-layer fully connected network is used as the anomaly detection framework, with the forward propagation formula:
h^(l) = σ(W^(l) h^(l−1) + b^(l)) (10);
ŷ = h^(3) (11);
where W^(l) and b^(l) are the weight matrix and bias vector of the l-th OCNN layer, h^(0) = K_t, and ŷ is the anomaly score vector; abnormal edges can be distinguished from normal ones by a threshold, and the loss function of the OCNN is:
min (1/2) Σ_l ‖W^(l)‖²_F + (1/(ν n)) Σ_{i=1}^{n} max(0, r − ŷ_i) − r (12);
the hyperparameters of the OCNN are ν and r, which control, respectively, the fraction of data points allowed past the hyperplane and the bias of the hyperplane.
CN202210014102.3A 2022-01-06 2022-01-06 Dual self-attention-based dynamic graph anomaly detection method Pending CN114445639A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210014102.3A CN114445639A (en) 2022-01-06 2022-01-06 Dual self-attention-based dynamic graph anomaly detection method
PCT/CN2022/112655 WO2023130727A1 (en) 2022-01-06 2022-08-16 Dynamic graph anomaly detection method based on dual self-attention

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210014102.3A CN114445639A (en) 2022-01-06 2022-01-06 Dual self-attention-based dynamic graph anomaly detection method

Publications (1)

Publication Number Publication Date
CN114445639A true CN114445639A (en) 2022-05-06

Family

ID=81368666

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210014102.3A Pending CN114445639A (en) 2022-01-06 2022-01-06 Dual self-attention-based dynamic graph anomaly detection method

Country Status (2)

Country Link
CN (1) CN114445639A (en)
WO (1) WO2023130727A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023130727A1 (en) * 2022-01-06 2023-07-13 深圳市检验检疫科学研究院 Dynamic graph anomaly detection method based on dual self-attention

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117407697B (en) * 2023-12-14 2024-04-02 南昌科晨电力试验研究有限公司 Graph anomaly detection method and system based on automatic encoder and attention mechanism

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109844749B (en) * 2018-08-29 2023-06-20 区链通网络有限公司 Node abnormality detection method and device based on graph algorithm and storage device
CN113868474A (en) * 2021-09-02 2021-12-31 子亥科技(成都)有限公司 Information cascade prediction method based on self-attention mechanism and dynamic graph
CN114445639A (en) * 2022-01-06 2022-05-06 深圳市检验检疫科学研究院 Dual self-attention-based dynamic graph anomaly detection method


Also Published As

Publication number Publication date
WO2023130727A1 (en) 2023-07-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination