CN112487187A - News text classification method based on graph network pooling - Google Patents


Info

Publication number
CN112487187A
Authority
CN
China
Prior art keywords
node
graph
score
cluster
pooling
Prior art date
Legal status
Granted
Application number
CN202011386651.0A
Other languages
Chinese (zh)
Other versions
CN112487187B (en)
Inventor
朱小草
郭春生
陈华华
应娜
Current Assignee
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date
Filing date
Publication date
Application filed by Hangzhou Dianzi University
Priority to CN202011386651.0A
Publication of CN112487187A
Application granted
Publication of CN112487187B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/35 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/10 Text processing
    • G06F 40/194 Calculation of difference between files
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks


Abstract

The invention discloses a news text classification method based on graph network pooling, which comprises the following steps: S1, combining structural information and feature information in an attention mechanism, and computing the similarity scores between nodes within a first-order neighborhood of the graph neural network to obtain similarity-aware attention values between nodes; S2, sparsifying the obtained attention values with the sparsemax sparse probability activation function algorithm to obtain the cluster corresponding to each node; S3, computing the score of each cluster by local aggregation convolution, and judging the information content of a cluster by its score; S4, selecting the top \lceil kN \rceil clusters with the highest scores using TopK, and performing edge reconnection on the selected clusters to obtain the final pooled graph.

Description

News text classification method based on graph network pooling
Technical Field
The invention relates to the technical field of news text classification, in particular to a news text classification method based on graph network pooling.
Background
With the rapid development of the big data era, text data on the Internet has grown explosively, and mining effective information from massive data is of great significance. Because news text has no fixed format, comes in many types, and is updated rapidly, traditional manual classification is inefficient and strongly subjective. Graph neural networks have therefore been introduced into news text classification: a news text is regarded as a graph whose nodes are words. News text classification focuses mainly on the overall characteristics of the text, i.e. the object of study is the entire graph itself. Graph neural networks generally consist of convolutional and pooling layers. Research on graph convolution, whose main purpose is to extract features of the graph, is very rich, but it is difficult for a model to learn the information critical to graph representation and classification by stacking graph convolutional layers alone. Pooling, on the one hand, reduces the number of learnable parameters and, on the other hand, reflects the structure of the graph at different scales.
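For concreteness, the sketch below shows one way to build such a word graph for a single article. The patent states only that nodes are words, so the sliding-window co-occurrence edges, the window size, and the helper name text_to_graph are illustrative assumptions rather than the disclosed construction.

    import itertools
    import numpy as np

    def text_to_graph(tokens, window=3):
        """Build a word graph for one article: nodes are distinct words,
        edges link words that co-occur in a sliding window (assumed scheme)."""
        vocab = {w: i for i, w in enumerate(dict.fromkeys(tokens))}  # first-seen order
        adj = np.zeros((len(vocab), len(vocab)))
        for start in range(len(tokens)):
            for a, b in itertools.combinations(tokens[start:start + window], 2):
                if a != b:  # connect distinct co-occurring words symmetrically
                    adj[vocab[a], vocab[b]] = adj[vocab[b], vocab[a]] = 1.0
        return vocab, adj

For example, text_to_graph("the court ruled on the new tax law".split()) yields a 7-node graph whose adjacency matrix can then feed the pooling layers described below.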
Existing graph pooling methods include TopK, DiffPool, SAGPool, and ASAP. TopK accomplishes pooling by adaptively selecting a subset of nodes: it projects all node features into one dimension using a single learnable vector and then selects the k nodes with the largest scalar projection values. However, because it ignores the structure of the graph and evaluates node importance from features alone, the method is too simple. DiffPool uses two graph neural networks to cluster and pool nodes respectively, but its soft assignment matrix is dense, so it is unsuitable for large graphs. SAGPool learns a scalar for each node from structure and attribute information through an attention mechanism, uses that scalar to represent the importance of the node to the whole graph, and sorts and pools accordingly; it neither aggregates node information nor computes soft edge weights, so node and edge information cannot be preserved efficiently. ASAP improves on these methods, but when aggregating node information the node features easily become over-smoothed and much information is lost, which degrades news text classification. At present, no pooling method preserves the node and edge information in the graph while avoiding over-smoothing of node features.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a news text classification method based on graph network pooling.
In order to achieve the purpose, the invention adopts the following technical scheme:
a news text classification method based on graph network pooling comprises the following steps:
s1, combining structural information and characteristic information in an attention mechanism, and calculating a similarity score between nodes in a first-order neighborhood in a graph neural network to obtain the attention mechanism with similarity nodes;
s2, thinning the obtained attention mechanism by adopting a sparse probability activation function sparsemax algorithm to obtain a cluster corresponding to the node;
s3, calculating the score of each cluster by adopting local aggregation convolution, and judging the information content of the clusters according to the score;
s4, selecting the front with the highest score by adopting topk
Figure BDA0002811171790000028
And repeating the selected clusters to obtain the final pooled neural network.
Further, step S1 obtains the similarity-aware attention between nodes, which is expressed as:

e_{i,j} = \sigma(w_e [x_i \| x_j]^T) + \lambda \cdot a_{i,j}

where \sigma denotes an activation function; w_e \in \mathbb{R}^{2d} denotes a learnable weight vector; x_i \in \mathbb{R}^d and x_j \in \mathbb{R}^d denote the feature vectors of node i and node j respectively; \| denotes the concatenation operation; A \in \mathbb{R}^{N \times N} denotes the adjacency matrix of the current graph; and a_{i,j} denotes the value in row i, column j of A.
Further, after the obtained attention values are sparsified in step S2 with the sparsemax sparse probability activation function algorithm, a sparse probability distribution is obtained, expressed as:

s_i = \mathrm{sparsemax}(e_i) = \arg\min_{s \in \Delta^{N-1}} \| s - e_i \|^2

where s_i is the normalized vector of e_i; e_i denotes the attention values of node i, e_i = [e_{i,1}, e_{i,2}, \ldots, e_{i,N}]; and \Delta^{N-1} = \{ s \in \mathbb{R}^N \mid \mathbf{1}^T s = 1, s \ge 0 \} denotes the (N-1)-dimensional probability simplex.
Further, obtaining the sparse probability distribution further includes:

defining the Lagrange function:

L(e_i; \mu_i, \tau_i) = \tfrac{1}{2} \| s_i - e_i \|^2 - \mu_i^T s_i + \tau_i (\mathbf{1}^T s_i - 1)

The optimal solution (s_i^*, \mu_i^*, \tau_i^*) satisfies the following KKT conditions:

s_{i,j}^* - e_{i,j} - \mu_{i,j}^* + \tau_i^* = 0, \quad \forall j
s_i^* \ge 0, \quad \mu_i^* \ge 0, \quad \mathbf{1}^T s_i^* = 1
\mu_{i,j}^* \, s_{i,j}^* = 0, \quad \forall j

If for j \in \{1, \ldots, N\} we have s_{i,j}^* > 0, then \mu_{i,j}^* = 0 and s_{i,j}^* = e_{i,j} - \tau_i^*.

Let c(e_i) = \{ z \in \{1, \ldots, N\} \mid s_{i,z}^* > 0 \}; then \sum_{z \in c(e_i)} (e_{i,z} - \tau_i^*) = 1, and therefore

\tau(e_i) = \frac{\sum_{z \in c(e_i)} e_{i,z} - 1}{|c(e_i)|}

The dual form of the sparse probability distribution is expressed as:

s_{i,j} = \mathrm{sparsemax}(e_{i,j}) = [\, e_{i,j} - \tau(e_i) \,]_+

where [x]_+ = \max\{0, x\} and \tau(\cdot) denotes the threshold function above.
Further, the local aggregation convolution used in step S3 to compute the score of each cluster is expressed as:

\phi_i = \sigma\Big( W_1 x_i + \sum_{j \in N(i)} W_2 (x_i - x_j) \Big)

where the activation function \sigma is sigmoid; N(i) and N(j) denote the neighborhoods of node i and node j respectively; W_1 and W_2 denote learnable parameters, which take the global and local importance of the clusters into account so that the score of each cluster is obtained comprehensively; the first term W_1 x_i denotes the feature transformation of the i-th node; and the second term \sum_{j \in N(i)} W_2 (x_i - x_j) denotes the difference between the current node and its first-order neighborhood.
Further, step S4 specifically includes:

multiplying the fitness vector \Phi = [\phi_1, \phi_2, \ldots, \phi_N]^T with the cluster representation matrix SX so that the fitness function f_\phi is learnable:

\hat{X}^c = \Phi \odot (S X)

where \odot denotes the Hadamard product; S = [s_1, s_2, \ldots, s_N] denotes the cluster assignment matrix; and X = [x_1, x_2, \ldots, x_N]^T denotes the feature matrix.

The function \mathrm{TOP}_k sorts the fitness scores and screens what is retained by the ratio k, yielding the index \hat{i} of the top \lceil kN \rceil selected clusters in G_c, expressed as:

\hat{i} = \mathrm{TOP}_k(\Phi, \lceil kN \rceil)

The selected top \lceil kN \rceil clusters form the pooled graph G_p; the corresponding assignment matrix \hat{S} \in \mathbb{R}^{N \times \lceil kN \rceil} and node feature matrix \hat{X}_p \in \mathbb{R}^{\lceil kN \rceil \times d} are expressed as:

\hat{S} = S(:, \hat{i}), \quad \hat{X}_p = \hat{X}^c(\hat{i}, :)

After sampling the clusters, \hat{S} and \hat{A} are used in the pooled graph G_p to obtain the new adjacency matrix A_p:

A_p = \hat{S}^T \hat{A} \hat{S}

where \hat{A} = A + I and I denotes the identity matrix.
Compared with the prior art, the invention has the beneficial effects that:
1. An attention mechanism is used that combines the structural information of the graph with the feature information of the nodes, so the similarity between nodes can be computed more accurately.
2. The sparsemax algorithm is used to sparsify the attention values within the first-order neighborhood, so that nodes with high similarity form a cluster, providing a new method for cluster formation.
3. The local aggregation convolution is combined with the TopK algorithm, which effectively alleviates the over-smoothing of node features, realizes an adaptive pooling operation, and achieves higher accuracy than traditional graph pooling methods.
Drawings
Fig. 1 is a flowchart of a news text classification method based on graph network pooling according to an embodiment.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
Aiming at the defects of the prior art, the invention provides a news text classification method based on graph network pooling.
Example one
The embodiment provides a news text classification method based on graph network pooling, as shown in fig. 1, including the steps of:
s1, combining structural information and characteristic information in an attention mechanism, and calculating a similarity score between nodes in a first-order neighborhood in a graph neural network to obtain the attention mechanism with similarity nodes;
s2, thinning the obtained attention mechanism by adopting a sparse probability activation function sparsemax algorithm to obtain a cluster corresponding to the node;
s3, calculating the score of each cluster by adopting local aggregation convolution, and judging the information content of the clusters according to the score;
s4, selecting the front with the highest score by adopting topk
Figure BDA0002811171790000051
And repeating the selected clusters to obtain the final pooled neural network.
This embodiment proceeds as follows. First, sparse attention is used to adaptively select nodes with high similarity in the graph to form clusters: a structural-information weight is added to the attention mechanism so that it is combined with the node features, which facilitates structure learning; the similarity scores between nodes within each first-order neighborhood are computed; the attention values are sparsified with sparsemax to obtain the assignment matrix; and the node composition of each cluster is thereby obtained. Then the nodes are aggregated with a local aggregation convolution function to obtain the cluster representations, the information content of each cluster is computed, the clusters with the highest scores are selected with TopK, and the adjacency matrix of the graph is recomputed from the assignment matrix to obtain the feature matrix and the adjacency matrix of the final pooled graph.
In step S1, structural information and feature information are combined in the attention mechanism, and the similarity scores between nodes within a first-order neighborhood of the graph neural network are computed to obtain similarity-aware attention values between nodes.

Similarity scores between nodes within a first-order neighborhood of the graph are computed using an attention mechanism that combines structural information with feature information.

To keep the similarity between the nodes within a cluster high, computation is restricted to the first-order range. The similarity between the nodes in each first-order neighborhood is computed through an attention mechanism so as to find which node information should be attended to in the current neighborhood. In addition, in order to preserve the structure of the graph, the graph structure is also taken into account, so the similarity attention between node i and node j is:
e_{i,j} = \sigma(w_e [x_i \| x_j]^T) + \lambda \cdot a_{i,j}

where \sigma is the activation function, w_e \in \mathbb{R}^{2d} is a learnable weight vector, x_i \in \mathbb{R}^d and x_j \in \mathbb{R}^d are the feature vectors of node i and node j respectively, and \| is the concatenation operation. A \in \mathbb{R}^{N \times N} is the adjacency matrix of the current graph, and a_{i,j} is the value in row i, column j of A: when node i and node j are directly connected, a_{i,j} \ne 0; when the two nodes are not directly connected, a_{i,j} = 0.
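For illustration, a minimal PyTorch sketch of this score computation follows. The patent does not fix the activation \sigma, the value of \lambda, or how non-neighbors are excluded, so the LeakyReLU activation and the large negative mask (which the subsequent sparsemax turns into exact zeros) are assumptions.

    import torch
    import torch.nn.functional as F

    def attention_scores(x, adj, w_e, lam=1.0):
        """e_{i,j} = sigma(w_e [x_i || x_j]^T) + lambda * a_{i,j}, first-order only.
        x: (N, d) node features; adj: (N, N) adjacency A; w_e: (2d,) weight vector."""
        n = x.size(0)
        xi = x.unsqueeze(1).expand(n, n, -1)       # x_i broadcast along rows
        xj = x.unsqueeze(0).expand(n, n, -1)       # x_j broadcast along columns
        pairs = torch.cat([xi, xj], dim=-1)        # all concatenations [x_i || x_j]
        e = F.leaky_relu(pairs @ w_e) + lam * adj  # sigma assumed to be LeakyReLU
        # Keep the node itself and its first-order neighbors; push everything else
        # far below the sparsemax threshold so it receives exactly zero probability.
        neighborhood = (adj > 0) | torch.eye(n, dtype=torch.bool, device=x.device)
        return e.masked_fill(~neighborhood, -1e9)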
In step S2, the obtained attention values are sparsified with the sparsemax sparse probability activation function algorithm to obtain the cluster corresponding to each node.

The obtained attention values are sparsified with the sparsemax (sparse probability activation function) algorithm, i.e. small similarities between node features are directly assigned 0.

Let s_i be the normalized vector of e_i, where e_i = [e_{i,1}, e_{i,2}, \ldots, e_{i,N}] is the vector of attention values of node i, and let \Delta^{N-1} = \{ s \in \mathbb{R}^N \mid \mathbf{1}^T s = 1, s \ge 0 \} denote the (N-1)-dimensional probability simplex. Sparsemax is used here because it produces a sparse probability distribution:

s_i = \mathrm{sparsemax}(e_i) = \arg\min_{s \in \Delta^{N-1}} \| s - e_i \|^2
Sparsemax directly projects e_i onto the simplex, which sparsifies the output. Without knowing the true distribution, this form cannot be solved directly, so the Lagrange function is first defined:

L(e_i; \mu_i, \tau_i) = \tfrac{1}{2} \| s_i - e_i \|^2 - \mu_i^T s_i + \tau_i (\mathbf{1}^T s_i - 1)

The optimal solution (s_i^*, \mu_i^*, \tau_i^*) satisfies the following KKT conditions:

s_{i,j}^* - e_{i,j} - \mu_{i,j}^* + \tau_i^* = 0, \quad \forall j
s_i^* \ge 0, \quad \mu_i^* \ge 0, \quad \mathbf{1}^T s_i^* = 1
\mu_{i,j}^* \, s_{i,j}^* = 0, \quad \forall j

If for j \in \{1, \ldots, N\} we have s_{i,j}^* > 0, then \mu_{i,j}^* = 0 and s_{i,j}^* = e_{i,j} - \tau_i^*.

Let c(e_i) = \{ z \in \{1, \ldots, N\} \mid s_{i,z}^* > 0 \}; then \sum_{z \in c(e_i)} (e_{i,z} - \tau_i^*) = 1, and therefore

\tau(e_i) = \frac{\sum_{z \in c(e_i)} e_{i,z} - 1}{|c(e_i)|}
Combining the above, the dual form is

s_{i,j} = \mathrm{sparsemax}(e_{i,j}) = [\, e_{i,j} - \tau(e_i) \,]_+

where [x]_+ = \max\{0, x\} and \tau(\cdot) is the threshold function: sparsemax(\cdot) retains the values above the threshold and sets the values below it to zero.
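The closed form above admits the standard sorting-based implementation of sparsemax; a minimal row-wise sketch, applied directly to the score matrix from step S1:

    import torch

    def sparsemax(e):
        """Row-wise s_{i,j} = [e_{i,j} - tau(e_i)]_+, with tau from the sorted scores."""
        z, _ = torch.sort(e, dim=-1, descending=True)  # z_(1) >= z_(2) >= ...
        cssv = z.cumsum(dim=-1) - 1.0                  # running sum minus 1
        rng = torch.arange(1, e.size(-1) + 1, device=e.device, dtype=e.dtype)
        support = z * rng > cssv                       # prefix that stays above the threshold
        k = support.sum(dim=-1, keepdim=True)          # support size |c(e_i)|
        tau = cssv.gather(-1, k - 1) / k.to(e.dtype)   # tau(e_i)
        return torch.clamp(e - tau, min=0.0)           # [e_i - tau(e_i)]_+

Each row s_i is then a sparse probability distribution, and the nodes with s_{i,j} > 0 form the cluster corresponding to node i.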
In step S3, the score of each cluster is computed using local aggregation convolution, and the information content of a cluster is judged by its score.

The score of each cluster is computed by local aggregation convolution, and the information content within a cluster is judged by the score.

A fitness function f_\phi samples the clusters according to their fitness scores \phi_i. To compute the amount of information contained in a cluster, the cluster representations are aggregated and their local information is computed, i.e. the local aggregation convolution:
\phi_i = \sigma\Big( W_1 x_i + \sum_{j \in N(i)} W_2 (x_i - x_j) \Big)

where the activation function \sigma is taken to be sigmoid, N(i) and N(j) denote the neighborhoods of node i and node j respectively, and W_1, W_2 are learnable parameters. The global and local importance of the clusters are considered at the same time, so the score of each cluster is obtained comprehensively. The first term, W_1 x_i, is the feature transformation of the i-th node; the second term measures how much the current node representation differs from the node representations in its first-order neighborhood. If the neighboring nodes can represent a node well, the score of the second term is low, i.e. the node can be discarded with little influence on the whole. Because the preceding attention sparsification does not retain the information of every first-order neighbor node, the difference between the current node and its neighbors is enlarged, and thus the nodes with higher information content are screened out.
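A sketch of this scoring step is given below. Since the text states only that the weights are learnable, the two-matrix parameterisation and the adjacency-weighted neighborhood differences are an assumed reading of the formula above.

    import torch

    def cluster_scores(xc, adj, w1, w2):
        """phi_i = sigmoid(W1 x_i + sum_{j in N(i)} a_ij (W2 x_i - W2 x_j)).
        xc: (N, d) cluster representations SX; w1, w2: (d, 1) learnable weights."""
        transform = xc @ w1                    # feature transformation of node i
        h = xc @ w2
        deg = adj.sum(dim=-1, keepdim=True)    # total edge weight around each node
        diff = deg * h - adj @ h               # difference to the first-order neighborhood
        return torch.sigmoid(transform + diff).squeeze(-1)  # (N,) fitness scores phi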
In step S4, the top \lceil kN \rceil clusters with the highest scores are selected using TopK, and edge reconnection is performed on the selected clusters to obtain the final pooled graph.

The top \lceil kN \rceil clusters by score are selected with TopK and edge reconnection is performed on them to obtain the finally pooled graph:
The fitness vector \Phi = [\phi_1, \phi_2, \ldots, \phi_N]^T is multiplied with the cluster representation matrix SX so that the fitness function f_\phi is learnable:

\hat{X}^c = \Phi \odot (S X)

where \odot is the Hadamard product, S = [s_1, s_2, \ldots, s_N] is the cluster assignment matrix, and X = [x_1, x_2, \ldots, x_N]^T is the feature matrix. The function \mathrm{TOP}_k sorts the fitness scores and screens what is retained by the ratio k, yielding the index \hat{i} of the top \lceil kN \rceil selected clusters in G_c, as follows:

\hat{i} = \mathrm{TOP}_k(\Phi, \lceil kN \rceil)
The selected top \lceil kN \rceil clusters form the pooled graph G_p. The corresponding assignment matrix \hat{S} \in \mathbb{R}^{N \times \lceil kN \rceil} and node feature matrix \hat{X}_p \in \mathbb{R}^{\lceil kN \rceil \times d} are given by:

\hat{S} = S(:, \hat{i}), \quad \hat{X}_p = \hat{X}^c(\hat{i}, :)
After sampling the clusters, \hat{S} and \hat{A} are used in the pooled graph G_p to obtain the new adjacency matrix A_p:

A_p = \hat{S}^T \hat{A} \hat{S}

where \hat{A} = A + I and I is the identity matrix. This formula ensures that if two clusters have common nodes, or if nodes in the two clusters are connected in the original graph G, then clusters i and j are connected in the pooled graph G_p, which strengthens the connectivity of the graph and reduces isolated nodes.
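The selection and rewiring step can be sketched as follows; the orientation of S (rows taken as the sparsemax outputs s_i) and the default pooling ratio k = 0.5 are assumptions.

    import math
    import torch

    def topk_pool(phi, s, x, adj, ratio=0.5):
        """Keep the top ceil(k*N) clusters and rewire them into the pooled graph.
        phi: (N,) fitness scores; s: (N, N) assignment matrix; x: (N, d); adj: (N, N)."""
        n = phi.size(0)
        keep = max(1, math.ceil(ratio * n))
        x_hat = phi.unsqueeze(-1) * (s @ x)            # X_hat = Phi ⊙ (SX)
        idx = torch.topk(phi, keep).indices            # TOP_k: indices of kept clusters
        s_hat = s[idx].t()                             # S_hat in R^{N x ceil(kN)}
        x_p = x_hat[idx]                               # pooled node feature matrix
        a_hat = adj + torch.eye(n, device=adj.device)  # A_hat = A + I
        a_p = s_hat.t() @ a_hat @ s_hat                # A_p = S_hat^T A_hat S_hat
        return x_p, a_p

Chaining the four sketches gives one pooling layer: e = attention_scores(x, adj, w_e); s = sparsemax(e); phi = cluster_scores(s @ x, adj, w1, w2); x_p, a_p = topk_pool(phi, s, x, adj).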
Compared with the existing news text classification method, the method has the beneficial effects that:
1. An attention mechanism is used that combines the structural information of the graph with the feature information of the nodes, so the similarity between nodes can be computed more accurately.
2. The sparsemax algorithm is used to sparsify the attention values within the first-order neighborhood, so that nodes with high similarity form a cluster, providing a new method for cluster formation.
3. The local aggregation convolution is combined with the TopK algorithm, which effectively alleviates the over-smoothing of node features, realizes an adaptive pooling operation, and achieves higher accuracy than traditional graph pooling methods.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (6)

1. A news text classification method based on graph network pooling, characterized by comprising the following steps:
S1, combining structural information and feature information in an attention mechanism, and computing the similarity scores between nodes within a first-order neighborhood of the graph neural network to obtain similarity-aware attention values between nodes;
S2, sparsifying the obtained attention values with the sparsemax sparse probability activation function algorithm to obtain the cluster corresponding to each node;
S3, computing the score of each cluster by local aggregation convolution, and judging the information content of a cluster by its score;
S4, selecting the top \lceil kN \rceil clusters with the highest scores using TopK, and performing edge reconnection on the selected clusters to obtain the final pooled graph.
2. The news text classification method based on graph network pooling according to claim 1, characterized in that the similarity-aware attention between nodes obtained in step S1 is expressed as:

e_{i,j} = \sigma(w_e [x_i \| x_j]^T) + \lambda \cdot a_{i,j}

where \sigma denotes an activation function; w_e \in \mathbb{R}^{2d} denotes a learnable weight vector; x_i \in \mathbb{R}^d and x_j \in \mathbb{R}^d denote the feature vectors of node i and node j respectively; \| denotes the concatenation operation; A \in \mathbb{R}^{N \times N} denotes the adjacency matrix of the current graph; and a_{i,j} denotes the value in row i, column j of A.
3. The news text classification method based on graph network pooling according to claim 2, characterized in that after the obtained attention values are sparsified in step S2 with the sparsemax sparse probability activation function algorithm, a sparse probability distribution is obtained, expressed as:

s_i = \mathrm{sparsemax}(e_i) = \arg\min_{s \in \Delta^{N-1}} \| s - e_i \|^2

where s_i is the normalized vector of e_i; e_i denotes the attention values of node i, e_i = [e_{i,1}, e_{i,2}, \ldots, e_{i,N}]; and \Delta^{N-1} = \{ s \in \mathbb{R}^N \mid \mathbf{1}^T s = 1, s \ge 0 \} denotes the (N-1)-dimensional probability simplex.
4. The method of claim 3, characterized in that obtaining the sparse probability distribution further comprises:

defining the Lagrange function:

L(e_i; \mu_i, \tau_i) = \tfrac{1}{2} \| s_i - e_i \|^2 - \mu_i^T s_i + \tau_i (\mathbf{1}^T s_i - 1)

the optimal solution (s_i^*, \mu_i^*, \tau_i^*) satisfying the following KKT conditions:

s_{i,j}^* - e_{i,j} - \mu_{i,j}^* + \tau_i^* = 0, \quad \forall j
s_i^* \ge 0, \quad \mu_i^* \ge 0, \quad \mathbf{1}^T s_i^* = 1
\mu_{i,j}^* \, s_{i,j}^* = 0, \quad \forall j

if for j \in \{1, \ldots, N\} there is s_{i,j}^* > 0, then \mu_{i,j}^* = 0 and s_{i,j}^* = e_{i,j} - \tau_i^*;

letting c(e_i) = \{ z \in \{1, \ldots, N\} \mid s_{i,z}^* > 0 \}, there is \sum_{z \in c(e_i)} (e_{i,z} - \tau_i^*) = 1, and therefore

\tau(e_i) = \frac{\sum_{z \in c(e_i)} e_{i,z} - 1}{|c(e_i)|}

the dual form of the sparse probability distribution being expressed as:

s_{i,j} = \mathrm{sparsemax}(e_{i,j}) = [\, e_{i,j} - \tau(e_i) \,]_+

where [x]_+ = \max\{0, x\} and \tau(\cdot) denotes the threshold function.
5. The news text classification method based on graph network pooling according to claim 4, characterized in that the local aggregation convolution used in step S3 to compute the score of each cluster is expressed as:

\phi_i = \sigma\Big( W_1 x_i + \sum_{j \in N(i)} W_2 (x_i - x_j) \Big)

where the activation function \sigma is sigmoid; N(i) and N(j) denote the neighborhoods of node i and node j respectively; W_1 and W_2 denote learnable parameters, which take the global and local importance of the clusters into account so that the score of each cluster is obtained comprehensively; the first term W_1 x_i denotes the feature transformation of the i-th node; and the second term \sum_{j \in N(i)} W_2 (x_i - x_j) denotes the difference between the current node and its first-order neighborhood.
6. The news text classification method based on graph network pooling according to claim 5, characterized in that step S4 specifically comprises:

multiplying the fitness vector \Phi = [\phi_1, \phi_2, \ldots, \phi_N]^T with the cluster representation matrix SX so that the fitness function f_\phi is learnable:

\hat{X}^c = \Phi \odot (S X)

where \odot denotes the Hadamard product, S = [s_1, s_2, \ldots, s_N] denotes the cluster assignment matrix, and X = [x_1, x_2, \ldots, x_N]^T denotes the feature matrix;

the function \mathrm{TOP}_k sorting the fitness scores and screening what is retained by the ratio k, yielding the index \hat{i} of the top \lceil kN \rceil selected clusters in G_c, expressed as:

\hat{i} = \mathrm{TOP}_k(\Phi, \lceil kN \rceil)

the selected top \lceil kN \rceil clusters forming the pooled graph G_p, the corresponding assignment matrix \hat{S} \in \mathbb{R}^{N \times \lceil kN \rceil} and node feature matrix \hat{X}_p \in \mathbb{R}^{\lceil kN \rceil \times d} being expressed as:

\hat{S} = S(:, \hat{i}), \quad \hat{X}_p = \hat{X}^c(\hat{i}, :)

and, after sampling the clusters, \hat{S} and \hat{A} being used in the pooled graph G_p to obtain the new adjacency matrix A_p:

A_p = \hat{S}^T \hat{A} \hat{S}

where \hat{A} = A + I and I denotes the identity matrix.
CN202011386651.0A (priority date 2020-12-02; filing date 2020-12-02): News text classification method based on graph network pooling. Active. Granted as CN112487187B.

Priority Applications (1)

Application Number: CN202011386651.0A (granted as CN112487187B); Priority Date: 2020-12-02; Filing Date: 2020-12-02; Title: News text classification method based on graph network pooling

Applications Claiming Priority (1)

Application Number: CN202011386651.0A (granted as CN112487187B); Priority Date: 2020-12-02; Filing Date: 2020-12-02; Title: News text classification method based on graph network pooling

Publications (2)

Publication Number: CN112487187A; Publication Date: 2021-03-12
Publication Number: CN112487187B; Publication Date: 2022-06-10

Family

ID=74938664

Family Applications (1)

Application Number: CN202011386651.0A (Active; granted as CN112487187B); Title: News text classification method based on graph network pooling

Country Status (1)

Country: CN; Publication: CN112487187B


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170083747A1 (en) * 2015-09-21 2017-03-23 The Climate Corporation Ponding water detection on satellite imagery
CN109389055A (en) * 2018-09-21 2019-02-26 西安电子科技大学 Video classification methods based on mixing convolution sum attention mechanism
US20200250139A1 (en) * 2018-12-31 2020-08-06 Dathena Science Pte Ltd Methods, personal data analysis system for sensitive personal information detection, linking and purposes of personal data usage prediction
CN111563164A (en) * 2020-05-07 2020-08-21 成都信息工程大学 Specific target emotion classification method based on graph neural network
CN111709518A (en) * 2020-06-16 2020-09-25 重庆大学 Method for enhancing network representation learning based on community perception and relationship attention
CN111985369A (en) * 2020-08-07 2020-11-24 西北工业大学 Course field multi-modal document classification method based on cross-modal attention convolution neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG, Zhen et al.: "Hierarchical graph pooling with structure learning", arXiv:1911.05954 *
YIN, Yijun: "Research on high-throughput remote-sensing visual object detection and recognition based on lightweight networks" (in Chinese), China Master's Theses Full-text Database, Engineering Science and Technology II *

Also Published As

Publication Number: CN112487187B; Publication Date: 2022-06-10

Similar Documents

Publication Publication Date Title
CN109271522B (en) Comment emotion classification method and system based on deep hybrid model transfer learning
CN108319987B (en) Filtering-packaging type combined flow characteristic selection method based on support vector machine
CN110598061A (en) Multi-element graph fused heterogeneous information network embedding method
CN113868366B (en) Streaming data-oriented online cross-modal retrieval method and system
CN112100514B (en) Friend recommendation method based on global attention mechanism representation learning
CN113378913A (en) Semi-supervised node classification method based on self-supervised learning
CN111738303A (en) Long-tail distribution image identification method based on hierarchical learning
CN110674865A (en) Rule learning classifier integration method oriented to software defect class distribution unbalance
CN110598848A (en) Migration learning acceleration method based on channel pruning
CN114299362A (en) Small sample image classification method based on k-means clustering
CN114117945B (en) Deep learning cloud service QoS prediction method based on user-service interaction graph
CN113283473A (en) Rapid underwater target identification method based on CNN feature mapping pruning
CN108614932B (en) Edge graph-based linear flow overlapping community discovery method, system and storage medium
CN113887698A (en) Overall knowledge distillation method and system based on graph neural network
CN111161282B (en) Target scale selection method for image multi-level segmentation based on depth seeds
CN112487187B (en) News text classification method based on graph network pooling
CN113255892A (en) Method and device for searching decoupled network structure and readable storage medium
Zhan et al. Field programmable gate array‐based all‐layer accelerator with quantization neural networks for sustainable cyber‐physical systems
CN111639751A (en) Non-zero padding training method for binary convolutional neural network
CN111291193A (en) Application method of knowledge graph in zero-time learning
CN110674333A (en) Large-scale image high-speed retrieval method based on multi-view enhanced depth hashing
CN113592013B (en) Three-dimensional point cloud classification method based on graph attention network
CN112836511B (en) Knowledge graph context embedding method based on cooperative relationship
CN112990336B (en) Deep three-dimensional point cloud classification network construction method based on competitive attention fusion
CN108898227A (en) Learning rate calculation method and device, disaggregated model calculation method and device

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant