CN115564044A - Graph neural network convolution pooling method, device, system and storage medium

Graph neural network convolution pooling method, device, system and storage medium

Info

Publication number
CN115564044A
CN115564044A
Authority
CN
China
Prior art keywords
node
neural network
graph
similarity
pooling
Prior art date
Legal status
Pending
Application number
CN202211278442.3A
Other languages
Chinese (zh)
Inventor
许浩
楼柯楠
周书悦
赵洪森
张蕊华
叶振
刘佳
朱鑫淼
Current Assignee
Lishui University
Original Assignee
Lishui University
Priority date
Filing date
Publication date
Application filed by Lishui University
Priority to CN202211278442.3A
Publication of CN115564044A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G06N3/04 Architecture, e.g. interconnection topology


Abstract

The invention relates to a graph neural network convolution pooling method, device and system, and a storage medium. The graph neural network convolution pooling method calculates the similarity between each node feature and the other node features in a pre-constructed graph neural network, wherein the similarity is the accumulation of the absolute values of a similarity metric between that node feature and each of the other node features. The similarities of all node features of the graph neural network are obtained to form a similarity vector, the elements of the similarity vector are sorted, and the k node features with the minimum similarity are obtained according to the sorting, wherein k is a hyper-parameter representing the number of node features retained after pooling; the k node features with the minimum similarity are the node features retained after pooling. A pooling layer formed by this method is applied to a graph neural network with the tanh activation function: selecting node features by similarity sorting avoids discarding negative-valued outputs that contribute greatly to the model, and optimizes the performance of the graph neural network.

Description

Graph neural network convolution pooling method, device, system and storage medium
Technical Field
The invention relates to the technical field of graph neural network pooling, in particular to a graph neural network convolution pooling method, device, system and storage medium.
Background
Graph Neural Networks (GNNs) have been successful in many areas, such as biomedicine, semantic segmentation, and recommendation systems. A graph neural network can extract good representations from graph-structured data in non-Euclidean space. Similar to Convolutional Neural Networks (CNNs), a typical graph neural network contains convolution filters, activation functions, and pooling operators in order to extract features with nonlinearity and invariance. Pooling is a key distinction between GNNs and CNNs, because neither the max pooling nor the average pooling typical of CNNs is appropriate for non-Euclidean data. Typical pooling techniques require the processed data to be structured, i.e., to satisfy the assumption that data within a neighborhood are strongly correlated. Graph-structured data do not satisfy this assumption, yet pooling introduces key invariances and enlarges the receptive field over the entire graph. Proper pooling techniques therefore play an important role in graph neural networks.
However, activation functions that can output negative values (e.g., tanh) are used in graph neural networks, and the decision criteria of existing global pooling methods operate on the outputs of such activation functions. During pooling with existing global pooling methods, for example sort-based pooling (a common example is topk pooling), sorting keeps large-valued nodes and discards small-valued and negative-valued nodes, while the tanh activation function produces many useful negative-valued outputs, i.e., negative outputs with large absolute values. To explore how many of the discarded nodes contribute much and how many contribute little, we analyzed their distribution in the graph neural network. The analysis shows that, among all discarded nodes, the proportion of "useless" nodes is small and the proportion of "useful" nodes is larger. Therefore, the discarded-information problem cannot be solved by a simple absolute-value ranking in pooling. And unfortunately, tanh cannot be replaced with a non-negative activation function (e.g., ReLU6), because this would degrade performance.
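To make this failure mode concrete, the following minimal sketch (ours, not part of the patent; all values are hypothetical) shows value-sorted top-k selection over tanh outputs discarding large-magnitude negative responses:

```python
import numpy as np

# Hypothetical per-node tanh outputs: a large |value| means a strong
# (informative) response, regardless of sign.
scores = np.array([0.92, -0.95, 0.10, -0.88, 0.45, 0.05])
k = 3

kept = np.sort(scores)[-k:]   # value-sorted top-k, as in sort-based (topk) pooling
print(kept)                   # [0.1  0.45 0.92]
# The strong negative responses -0.95 and -0.88 are discarded, while the weak
# +0.10 response is kept: the "useful node discarded" problem described above.
```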
Disclosure of Invention
To solve the above technical problems, or to at least partially solve the above technical problems, the present invention provides a graph neural network convolution pooling method, apparatus, system, and storage medium.
In a first aspect, the present invention provides a graph neural network convolution pooling method, comprising:
calculating the similarity between each node feature and the other node features in the pre-constructed graph neural network, wherein the similarity is the accumulation of the absolute values of a similarity metric between that node feature and each of the other node features;

and obtaining the similarities of all node features of the graph neural network to form a similarity vector, sorting the elements in the similarity vector, and obtaining the k node features with the minimum similarity according to the sorting, wherein k is a hyper-parameter representing the number of node features retained after pooling of the current node features; the k node features with the minimum similarity are the node features retained after pooling.
Still further, the similarity metric includes: euclidean distance, inner product, cosine similarity and Mahalanobis distance.
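As a sketch of how these four metrics can serve as the similarity operator (the function names and the externally supplied Mahalanobis inverse covariance are our assumptions; the patent only names the metrics):

```python
import numpy as np

def euclidean(h_i, h_j):
    # Euclidean distance between two node feature vectors
    return float(np.linalg.norm(h_i - h_j))

def inner_product(h_i, h_j):
    # Plain inner product
    return float(h_i @ h_j)

def cosine(h_i, h_j, eps=1e-12):
    # Cosine similarity, guarded against zero-norm vectors
    return float(h_i @ h_j) / (np.linalg.norm(h_i) * np.linalg.norm(h_j) + eps)

def mahalanobis(h_i, h_j, cov_inv):
    # Mahalanobis distance; cov_inv is the inverse covariance of the node features
    d = h_i - h_j
    return float(np.sqrt(d @ cov_inv @ d))
```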
Further, the pre-constructed graph neural network includes: a graph convolution layer, which extracts local substructure features of nodes and defines a consistent node ordering; a pooling layer, which unifies the number of nodes using the graph neural network convolution pooling method and imposes invariance on the network; an activation layer, which applies a nonlinearity to the output of the preceding graph convolution layer; and a linear layer, which infers the class of the input graph neural network data.
Further, the graph convolution layer implementation process includes:

applying a linear transformation $H^{(l)}W^{(l)}$ to the graph neural network node information via the kernel weight parameters, mapping from $d_l$ channels to $d_{l+1}$ channels; propagating the linearly transformed node information to the neighboring nodes and to the nodes themselves:

$\tilde{A}H^{(l)}W^{(l)}$

normalizing the $i$-th row of the node information by multiplying by a diagonal matrix, so as to keep a fixed feature scale after graph convolution; and performing a nonlinear transformation with a nonlinear activation function, namely:

$H^{(l+1)} = \sigma\left(\tilde{D}^{-1}\tilde{A}H^{(l)}W^{(l)}\right)$

wherein $\tilde{A} = A + I$, $A$ denotes the adjacency matrix of the graph, $\tilde{A}$ is the adjacency matrix of the graph with self-loops added, $\tilde{D}$ is the diagonal degree matrix with $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$, $W^{(l)} \in \mathbb{R}^{d_l \times d_{l+1}}$ is the trainable graph convolution kernel weight parameter of the $l$-th layer, $\sigma$ is a nonlinear activation function, and $H^{(l)}$ is the graph neural network node information; when $l = 0$, $H^{(0)} = X$, where $X = [x_1, \ldots, x_n]^{\top} \in \mathbb{R}^{n \times d}$ represents the set of node inputs of the graph neural network, consisting of $n$ nodes, each a $d$-dimensional vector.
Further, the overall concatenated feature before pooling is obtained based on the feature concatenation of the pre-constructed graph neural network; the overall concatenated feature is the concatenation of the graph convolution layer output features, which differ in output channels, and is represented as $H^{0:L} \in \mathbb{R}^{n \times d'}$, where $n$ is the number of nodes of the graph, $d' = \sum_{l=0}^{L} d_l$, $L$ is the number of layers of the entire graph convolution network, and $H^{0:L}$ consists of the node features $h_i \in \mathbb{R}^{d'}$.
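A short sketch of this concatenation (our illustration; per the $H^{0:L}$ notation we assume the per-layer outputs, including the layer-0 features, are concatenated along the channel dimension):

```python
import torch

# Hypothetical outputs of successive graph convolution layers for n = 5 nodes,
# with channel widths d_0 = 3, d_1 = 4, d_2 = 4
layer_outputs = [torch.randn(5, 3), torch.randn(5, 4), torch.randn(5, 4)]

H_0L = torch.cat(layer_outputs, dim=1)   # H^{0:L}, shape (n, d') with d' = 3 + 4 + 4
print(H_0L.shape)                        # torch.Size([5, 11])
```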
In a second aspect, the present invention provides a graph neural network convolution pooling system, comprising: a node feature acquisition module, which acquires the node features after graph convolution;

a similarity calculation module, which calculates the similarity between each node feature and the remaining node features;

and a sorting module, which forms a similarity vector from the similarities of all node features of the graph neural network, sorts the elements of the similarity vector, and obtains the k node features with the minimum similarity according to the sorting, wherein k is a hyper-parameter representing the number of node features retained after pooling; the k node features with the minimum similarity are the node features retained after pooling.
Furthermore, the graph neural network convolution pooling system further comprises a configuration module, and the parameters configured by the configuration module include the hyper-parameter k.
In a third aspect, the present invention provides a graph neural network convolution pooling device, comprising: at least one processing unit, a storage unit, and a bus unit connecting the processing unit and the storage unit; the storage unit stores at least one instruction, and the processing unit executes the instruction to implement the graph neural network convolution pooling method.
In a fourth aspect, the present invention provides a storage medium for implementing the graph neural network convolution pooling method, the storage medium storing a computer program which, when executed by a processor, implements the graph neural network convolution pooling method.
Compared with the prior art, the technical scheme provided by the embodiment of the invention has the following advantages:
the pooling mode of the invention is to calculate the similarity between each node feature and other node features in a pre-constructed graph neural network, wherein the similarity is the accumulation of absolute values of similarity measurement between the node features and other node features, the similarity of all node features of the graph neural network is obtained to form a similarity vector, elements in the similarity vector are sorted, k node features with the closest similarity are obtained according to the sorting, wherein k is a hyper-parameter and represents the number of node features maintained after pooling of the current node features, and the k node features with the smallest similarity are the node features maintained after pooling. The node feature selection is realized by taking the nonnegative similarity as the ranking index, the problem of useful information discarding in the pooling process in the neural network structure is avoided, and the performance of the optimized neural network can be effectively improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a flowchart of a convolutional pooling method of a graph neural network according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a graph neural network convolution pooling system according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a convolutional pooling device of a graph neural network according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
Graph pooling plays an important role in the graph node classification task. Activation functions that can output negative values (e.g., tanh) are used in graph neural networks, and the decision criteria of existing global pooling methods operate on the outputs of such activation functions. During pooling with existing global pooling methods, much information that contributes greatly to the result of the graph neural network is discarded, resulting in poor performance of the graph neural network. For example, in sort-based pooling (a common example is topk pooling), sorting keeps large-valued nodes and discards small-valued and negative-valued nodes, while the tanh activation function produces many useful negative-valued outputs, i.e., negative outputs with large absolute values. To explore how many of the discarded nodes contribute much and how many contribute little, we analyzed their distribution in the graph neural network. The analysis shows that, among all discarded nodes, the proportion of "useless" nodes is small and the proportion of "useful" nodes is larger. Therefore, the discarded-information problem cannot be solved by a simple absolute-value ranking in pooling. And unfortunately, tanh cannot be replaced with a non-negative activation function (e.g., ReLU6), because this would degrade performance.
The pooling techniques of graph neural networks divide into global pooling, topology-based pooling, and hierarchical pooling. Global pooling is a simple and efficient technique; unlike topology-based pooling, it is a feature-based pooling technique. Global pooling methods use a summation or a neural network to pool all node representations of each layer. Although global pooling does not explicitly consider the topology of the input graph, the preceding graph convolution filters have already taken the topology into account and implicitly influence the subsequent pooling. The present application therefore focuses on global pooling, which combines computational efficiency, training performance, and versatility.
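For instance, the summation readout mentioned above reduces all node representations of a graph to a single fixed-size vector (a sketch of the generic technique, not of the patent's method):

```python
import torch

H = torch.randn(5, 11)   # hypothetical node representations of one graph
g_sum = H.sum(dim=0)     # summation readout: one fixed-size vector per graph
print(g_sum.shape)       # torch.Size([11])
```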
In order to retain more useful information and avoid the degradation of graph neural network training performance caused by discarding useful information, a graph neural network convolution pooling method is provided.
Example 1
An embodiment of the present invention provides a graph neural network convolution pooling method, which comprises the following steps:

pre-constructing a DGCNN graph neural network, wherein the pre-constructed DGCNN graph neural network comprises:
a graph convolution layer, which extracts local substructure features of nodes and defines a consistent node ordering. The graph convolution layer implementation process includes: applying a linear transformation $H^{(l)}W^{(l)}$ to the graph neural network node information via the kernel weight parameters, mapping from $d_l$ channels to $d_{l+1}$ channels; propagating the linearly transformed node information to the neighboring nodes and to the nodes themselves:

$\tilde{A}H^{(l)}W^{(l)}$

normalizing the $i$-th row of the node information by multiplying by a diagonal matrix, so as to keep a fixed feature scale after graph convolution; and performing the nonlinear transformation with a nonlinear activation function, namely:

$H^{(l+1)} = \sigma\left(\tilde{D}^{-1}\tilde{A}H^{(l)}W^{(l)}\right)$

wherein $\tilde{A} = A + I$, $A$ denotes the adjacency matrix of the graph (the adjacency matrix is chosen to strengthen the topological links between nodes and drop unimportant links), $\tilde{A}$ is the adjacency matrix of the graph with self-loops added, $\tilde{D}$ is the diagonal degree matrix with $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$, $W^{(l)} \in \mathbb{R}^{d_l \times d_{l+1}}$ is the trainable graph convolution kernel weight parameter of the $l$-th layer, $\sigma$ is a nonlinear activation function, and $H^{(l)}$ is the graph neural network node information; when $l = 0$, $H^{(0)} = X$, where $X = [x_1, \ldots, x_n]^{\top} \in \mathbb{R}^{n \times d}$ represents the set of node inputs of the graph neural network, consisting of $n$ nodes, each a $d$-dimensional vector.
And a pooling layer, which unifies the number of nodes using the graph neural network convolution pooling method and imposes invariance on the network. The pooling layer established here abandons the original pooling layer of the DGCNN network and is instead constructed with the graph neural network convolution pooling method provided by this application.
Referring to fig. 1, the process of implementing the pooling layer includes: calculating the similarity between each node feature and the other node features in the pre-constructed graph neural network, wherein the similarity is the accumulation of the absolute values of a similarity metric between that node feature and each of the other node features.
In the specific implementation process, the overall concatenated feature before pooling is obtained by the feature concatenation of the DGCNN graph neural network (namely the concatenation of the graph convolution layer output features, which differ in output channels), represented as $H^{0:L} \in \mathbb{R}^{n \times d'}$, where $n$ is the number of nodes of the graph, $d' = \sum_{l=0}^{L} d_l$, $L$ is the number of layers of the entire graph convolution network, and $H^{0:L}$ consists of the node features $h_i \in \mathbb{R}^{d'}$. The similarity between each node feature and the other node features is calculated as

$s_i = \sum_{j \neq i} \left| h_i \odot h_j \right|$

wherein $\odot$ and $|\cdot|$ denote the similarity metric operator and the absolute value, respectively; the similarity metric is computed against every other node feature and its absolute values are accumulated to obtain the similarity.
The similarities of all node features of the graph neural network are obtained to form a similarity vector

$s = [s_1, s_2, \ldots, s_n] \in \mathbb{R}^n,$

the elements in the similarity vector are sorted, and the k node features with the minimum similarity are obtained according to the sorting:

$\mathrm{Idx} = \mathrm{topk}(s)$

wherein topk(·) is the operator that selects the k node features with the minimum similarity in the similarity vector, the indices of the selected node features are stored in Idx, k is a hyper-parameter representing the number of node features retained after pooling, and the k node features with the minimum similarity are the node features retained after pooling.
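The selection rule above can be sketched as follows (our code, not a reference implementation from the patent; the inner product is used here as one of the admissible similarity metrics):

```python
import torch

def similarity_pooling(H, k):
    """Keep the k nodes whose accumulated absolute similarity to all other
    nodes is smallest, per the rule Idx = topk(s) described above.

    H: (n, d') concatenated node features H^{0:L}.  Returns the (k, d')
    pooled features and the retained node indices Idx.
    """
    S = H @ H.T                          # pairwise inner-product similarities, (n, n)
    S.fill_diagonal_(0.0)                # exclude each node's similarity to itself
    s = S.abs().sum(dim=1)               # s_i = sum_{j != i} |h_i . h_j|
    idx = torch.topk(s, k, largest=False).indices   # k smallest similarities
    return H[idx], idx

H = torch.tanh(torch.randn(6, 11))       # hypothetical post-tanh node features
H_pool, idx = similarity_pooling(H, k=3)
print(idx.tolist(), H_pool.shape)        # retained indices, torch.Size([3, 11])
```

Because the ranking index $s_i$ is non-negative, nodes with strong negative tanh responses are ranked by how distinctive their features are rather than by their sign, which is what avoids the discarded-information problem described in the background.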
And an activation layer, which applies a nonlinearity to the output of the preceding graph convolution layer. To preserve the performance of the graph neural network, the activation layer uses the tanh (hyperbolic tangent) activation function, which can produce negative-valued outputs.

And a linear layer, which infers the class of the input graph neural network data.
The DGCNN graph neural network applying this graph neural network convolution pooling method is trained on the TUDataset benchmark to obtain a DGCNN graph neural network model, and the obtained model is then evaluated on TUDataset; compared with the original DGCNN graph neural network, the model trained in this application improves performance by 2.57% to 6.14%.
It should be noted that the DGCNN graph neural network is used only as an example; in practical applications, a pooling layer formed with the graph neural network convolution pooling method of this application may replace the pooling layer of other graph neural networks.

It should be noted that training and testing the constructed DGCNN graph neural network on TUDataset is likewise only an example; in practical applications, other existing data sets or user-defined data sets may be used.
It should be noted that the similarity measure includes, but is not limited to: euclidean distance, inner product, cosine similarity and Mahalanobis distance.
Example 2
Referring to fig. 2, an embodiment of the present invention provides a convolutional pooling system of a neural network, including:
the configuration module is used for configuring parameters of the graph neural network, the configured parameters comprise a hyper-parameter k, and the hyper-parameter k is used for linearity.
A node feature acquisition module, which acquires the node features after graph convolution;

a similarity calculation module, which calculates the similarity between each node feature and the remaining node features;

and a sorting module, which forms a similarity vector from the similarities of all node features of the graph neural network, sorts the elements of the similarity vector, and obtains the k node features with the minimum similarity according to the sorting, wherein k is a hyper-parameter representing the number of node features retained after pooling; the k node features with the minimum similarity are the node features retained after pooling.
Example 3
Referring to fig. 3, an embodiment of the present invention provides a graph neural network convolution pooling device, comprising: at least one processing unit, a storage unit, and a bus unit connecting the processing unit and the storage unit; the storage unit stores at least one instruction, and the processing unit executes the instruction to implement the graph neural network convolution pooling method.
Example 4
An embodiment of the invention provides a storage medium for implementing the graph neural network convolution pooling method, the storage medium storing a computer program which, when executed by a processor, implements the graph neural network convolution pooling method.
The pooling approach of the invention is to calculate the similarity between each node feature and the other node features in a pre-constructed graph neural network, wherein the similarity is the accumulation of the absolute values of a similarity metric between that node feature and each of the other node features; the similarities of all node features of the graph neural network are obtained to form a similarity vector, the elements of the similarity vector are sorted, and the k node features with the minimum similarity are obtained according to the sorting, wherein k is a hyper-parameter representing the number of node features retained after pooling; the k node features with the minimum similarity are the node features retained after pooling. Node feature selection with this non-negative similarity as the ranking index avoids discarding useful information during pooling in the graph neural network structure, and can effectively improve the performance of the optimized graph neural network.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A graph neural network convolution pooling method, comprising:
calculating the similarity between each node feature and the other node features in a pre-constructed graph neural network, wherein the similarity is the accumulation of the absolute values of a similarity metric between the node feature and the other node features;

and obtaining the similarities of all node features of the graph neural network to form a similarity vector, sorting the elements in the similarity vector, and obtaining k node features with the minimum similarity according to the sorting, wherein k is a hyper-parameter representing the number of node features retained after pooling of the current node features, and the k node features with the minimum similarity are the node features retained after pooling.
2. The graph neural network convolution pooling method of claim 1, wherein said similarity metric comprises: Euclidean distance, inner product, cosine similarity, and Mahalanobis distance.
3. The graph neural network convolution pooling method of claim 1, wherein the pre-constructed graph neural network comprises: a graph convolution layer, which extracts local substructure features of nodes and defines a consistent node ordering; a pooling layer, which unifies the number of nodes using the graph neural network convolution pooling method and imposes invariance on the network; an activation layer, which applies a nonlinearity to the output of the preceding graph convolution layer; and a linear layer, which infers the class of the input graph neural network data.
4. The graph neural network convolution pooling method of claim 3, wherein the graph convolution layer implementation procedure comprises:

applying a linear transformation $H^{(l)}W^{(l)}$ to the graph neural network node information via the kernel weight parameters, mapping from $d_l$ channels to $d_{l+1}$ channels; propagating the linearly transformed node information to the neighboring nodes and to the nodes themselves:

$\tilde{A}H^{(l)}W^{(l)}$

normalizing the $i$-th row of the node information by multiplying by a diagonal matrix to keep a fixed feature scale after graph convolution; and performing the nonlinear transformation with a nonlinear activation function, namely:

$H^{(l+1)} = \sigma\left(\tilde{D}^{-1}\tilde{A}H^{(l)}W^{(l)}\right)$

wherein $\tilde{A} = A + I$, $A$ denotes the adjacency matrix of the graph, $\tilde{A}$ is the adjacency matrix of the graph with self-loops added, $\tilde{D}$ is the diagonal degree matrix with $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$, $W^{(l)} \in \mathbb{R}^{d_l \times d_{l+1}}$ is the $l$-th layer trainable graph convolution kernel weight parameter, $\sigma$ is a nonlinear activation function, and $H^{(l)}$ is the graph neural network node information; when $l = 0$, $H^{(0)} = X$, where $X = [x_1, \ldots, x_n]^{\top} \in \mathbb{R}^{n \times d}$ denotes the set of node inputs of the graph neural network, consisting of $n$ nodes, each a $d$-dimensional vector.
5. The graph neural network convolution pooling method of claim 3, wherein the overall concatenated feature before pooling is obtained based on the feature concatenation of the pre-constructed graph neural network, the overall concatenated feature being the concatenation of the graph convolution layer output features, which differ in output channels, represented as $H^{0:L} \in \mathbb{R}^{n \times d'}$, where $n$ is the number of nodes of the graph, $d' = \sum_{l=0}^{L} d_l$, $L$ is the number of layers of the entire graph convolution network, and $H^{0:L}$ consists of the node features $h_i \in \mathbb{R}^{d'}$.
6. A graph neural network convolution pooling system, comprising: a node feature acquisition module, which acquires the node features after graph convolution; a similarity calculation module, which calculates the similarity between each node feature and the remaining node features; and a sorting module, which forms a similarity vector from the similarities of all node features of the graph neural network, sorts the elements in the similarity vector, and obtains k node features with the minimum similarity according to the sorting, wherein k is a hyper-parameter representing the number of node features retained after pooling, and the k node features with the minimum similarity are the node features retained after pooling.
7. The graph neural network convolution pooling system of claim 6, further comprising a configuration module, the parameters configured by which comprise the hyper-parameter k.
8. A graph neural network convolution pooling device, comprising: at least one processing unit, a storage unit and a bus unit, wherein the bus unit is connected with the processing unit and the storage unit, the storage unit stores at least one instruction, and the processing unit executes the instruction to realize the graph neural network convolution pooling method according to any one of claims 1 to 5.
9. A storage medium for implementing a graph neural network convolution pooling method, the storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the graph neural network convolution pooling method according to any one of claims 1 to 5.
CN202211278442.3A 2022-10-19 Graph neural network convolution pooling method, device, system and storage medium Pending CN115564044A

Priority Applications (1)

CN202211278442.3A (priority date and filing date 2022-10-19): Graph neural network convolution pooling method, device, system and storage medium

Publications (1)

CN115564044A, published 2023-01-03

Family

ID=84747082

Country Status (1)

CN: CN115564044A

Cited By (1)

CN115995024A (Chengdu University of Technology, 成都理工大学): Image classification method based on class diagram neural network; priority date 2023-03-22, publication date 2023-04-21. * Cited by examiner.



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination