CN112613559B - Mutual learning-based graph convolution neural network node classification method, storage medium and terminal - Google Patents


Info

Publication number
CN112613559B
CN112613559B
Authority
CN
China
Prior art keywords: network, student, node, mutual learning, loss function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011540958.1A
Other languages
Chinese (zh)
Other versions
CN112613559A (en)
Inventor
匡平
高宇
李凡
彭江艳
黄泓毓
段其鹏
刘晨阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202011540958.1A
Publication of CN112613559A
Application granted
Publication of CN112613559B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415: Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a graph convolution neural network node classification method based on mutual learning, a storage medium and a terminal, wherein the method comprises the following steps: inputting the data to be classified into a trained mutual learning network to obtain a node classification result; training the mutual learning network comprises the following substep: training a mutual learning network comprising three or more student networks using a loss function $L_k$, where $L_k$ includes a self cross-entropy loss function and a KL loss function.

Description

Mutual learning-based graph convolution neural network node classification method, storage medium and terminal
Technical Field
The invention relates to the field of neural networks, in particular to a mutual learning-based graph convolution neural network node classification method, a storage medium and a terminal.
Background
Graph data has an irregular and variable structure and is used to represent objects and the relationships between them; it appears in many fields, such as social networks, traffic networks and biochemical molecular networks. Graph data representation has therefore become an increasingly popular research area. Graph neural networks are the most popular deep-learning-based graph representation methods; in particular, graph convolution network methods generalize the convolution operation from traditional image data to graph data and have shown remarkable learning ability.
Unlike knowledge distillation, mutual learning does not require a strong "teacher" network trained in advance: a group of student networks trained together is sufficient, and the result is stronger than a network guided by a teacher. However, existing mutual learning schemes usually use only two student networks, and the classification results obtained with this setup are not accurate enough.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a graph convolution neural network node classification method based on mutual learning, a storage medium and a terminal.
The purpose of the invention is realized by the following technical scheme:
The invention provides a graph convolution neural network node classification method based on mutual learning, which comprises the following steps:
inputting data to be classified into a trained mutual learning network to obtain a node classification result; training the mutual learning network comprises the following substep:
training a mutual learning network comprising three or more student networks using a loss function $L_k$; the loss function $L_k$ includes a self cross-entropy loss function $L_{CE}^k$ and a KL loss function:

$$L_k = L_{CE}^k + \frac{\kappa}{N-1}\sum_{l=1,\,l\neq k}^{N} D_{KL}(p_l\,\|\,p_k)$$

where N represents the number of student networks, k denotes the kth student network, l denotes a student network other than the kth, $D_{KL}(p_l\|p_k)$ denotes the KL loss of the kth student network relative to the lth student network, $p_l$ denotes the predicted output of the lth student network, and $p_k$ denotes the predicted output of the kth student network; κ controls the weight of the KL loss, with κ < 1.
Further, the KL loss function is specifically:

$$D_{KL}(p_l\,\|\,p_k) = \sum_{i=1}^{Q}\sum_{m=1}^{M} p_l^m(x_i)\,\log\frac{p_l^m(x_i)}{p_k^m(x_i)}$$

where Q is the number of nodes, M represents the number of categories in the labels of the student network input, and $p_l^m(x_i)$ denotes the probability predicted by student network l that node $x_i$ belongs to class m, i.e. the output of the network.
Further, the self cross-entropy loss function $L_{CE}^k$ is specifically:

$$L_{CE}^k = -\sum_{i=1}^{Q}\sum_{m=1}^{M} I(y_i, m)\,\log p_k^m(x_i)$$

where M represents the number of categories in the labels of the student network input, $p_k^m(x_i)$ denotes the probability predicted by student network k that node $x_i$ belongs to class m, $y_i$ denotes the class label of the ith node, and $I(y_i, m)$ is an indicator function:

$$I(y_i, m) = \begin{cases} 1, & y_i = m \\ 0, & y_i \neq m \end{cases}$$
Further, the input of the student network includes P, X and label; X represents the node features, and label represents the category to which each node belongs;
wherein a matrix P of dimension Q × Q is constructed, where the value $m_{ij}$ in row i, column j of P indicates the connection state of nodes i and j in the graph: $m_{ij}$ is 1 if the two nodes are connected by an edge and 0 if they are not, where i = 1, 2, …, Q and j = 1, 2, …, Q;
the student network comprises s sequentially connected GCN layers, where the output of layer r + 1 is

$$H^{(r+1)} = \sigma\left(\hat{P}\, H^{(r)}\, W^{(r)}\right)$$

where σ is the activation function, $\hat{P} = D^{-\frac{1}{2}} P\, D^{-\frac{1}{2}}$ is a matrix computed from P, D is the degree matrix of P, and $W^{(r)}$ is the weight matrix of layer r; $H^{(0)}$ is the initial input X;
the output of the student network is $H^{(s)}$.
Further, inputting the data to be classified into the trained mutual learning network to obtain the node classification result uses either a voting mechanism or a random mechanism:
the voting mechanism: each student network in the mutual learning network is taken as an independent classifier, each piece of data to be classified is input into every student network to obtain multiple prediction outputs, and the final node classification result is output by majority vote (the minority obeys the majority);
the random mechanism: one student network is randomly chosen as the classifier, and the data to be classified is input into that student network to obtain the node classification result.
Further, when the size of the model or the number of parameters is limited, the random mechanism is adopted; when model accuracy is the priority and there is no limit on model size or parameter count, the voting mechanism is adopted.
Further, the three or more student networks are different in structure.
Further, the structure difference is that the number of hidden layer neurons is different.
In a second aspect of the present invention, a storage medium is provided, on which computer instructions are stored; when executed, the computer instructions perform the steps of the mutual learning-based graph convolution neural network node classification method.
In a third aspect of the present invention, a terminal is provided, which includes a memory and a processor, the memory storing computer instructions executable on the processor; when executing the computer instructions, the processor performs the steps of the mutual learning-based graph convolution neural network node classification method.
The invention has the beneficial effects that:
(1) In an exemplary embodiment of the invention, more than two student networks are used, which reduces overfitting and improves model accuracy; compared with the two-student-network case, the model is more robust. Specifically, the mutual learning method is equivalent to adding a regularization term to the loss function, and multiple student networks help the optimization escape local optima and approach the global optimum, so increasing the number of sub-networks within a certain range aids network convergence. Meanwhile, when the number of student networks is large, the accumulated KL loss against the other student networks may become too large, giving it an excessive influence coefficient and weakening the influence of the student network's own cross-entropy loss; to avoid this, the weight coefficient κ of the KL loss function is set to be less than 1.
(2) In a further exemplary embodiment of the present invention, a voting mechanism or a random mechanism is employed to produce the classification result. The voting mechanism: each student network in the mutual learning network is taken as an independent classifier, each piece of data to be classified is input into every student network to obtain multiple prediction outputs, and the final node classification result is output by majority vote. The random mechanism: one student network is randomly chosen as the classifier, and the data to be classified is input into that student network to obtain the node classification result.
(3) In yet another exemplary embodiment of the present invention, using networks with different structures as the sub-networks is more effective and accurate, especially structures that differ in the number of hidden-layer neurons.
Drawings
FIG. 1 is a flow chart of a method provided by an exemplary embodiment of the present invention;
FIG. 2 is a schematic diagram of a mutual learning network structure according to an exemplary embodiment of the present invention;
fig. 3 is a schematic diagram of an internal structure of a student network according to an exemplary embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if," as used herein, may be interpreted as "when" or "upon" or "in response to a determination," depending on the context.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Referring to fig. 1, fig. 1 is a flowchart illustrating a mutual learning-based graph convolution neural network node classification method according to an exemplary embodiment of the present invention, including the following steps:
inputting the data to be classified into the trained mutual learning network to obtain a node classification result; training the mutual learning network comprises the following substep:
training a mutual learning network comprising three or more student networks using a loss function $L_k$, as shown in fig. 2; the loss function $L_k$ includes a self cross-entropy loss function $L_{CE}^k$ and a KL loss function:

$$L_k = L_{CE}^k + \frac{\kappa}{N-1}\sum_{l=1,\,l\neq k}^{N} D_{KL}(p_l\,\|\,p_k)$$

where N represents the number of student networks, k denotes the kth student network, l denotes a student network other than the kth, $D_{KL}(p_l\|p_k)$ denotes the KL loss of the kth student network relative to the lth student network, $p_l$ denotes the predicted output of the lth student network, and $p_k$ denotes the predicted output of the kth student network; κ controls the weight of the KL loss, with κ < 1.
In this exemplary embodiment, using more than two student networks reduces overfitting and improves model accuracy; compared with the two-student-network case, the model is more robust. Specifically, the mutual learning method is equivalent to adding a regularization term to the loss function (i.e. the whole KL loss term is regarded as a regularization term), and multiple student networks help the optimization escape local optima and approach the global optimum, so increasing the number of sub-networks within a certain range aids network convergence.
In addition, regarding the weight κ of the KL loss: when the number of student networks is large, the accumulated KL loss against the other student networks may become too large, giving it an excessive influence coefficient and weakening the influence of the student network's own cross-entropy loss. Therefore, in this exemplary embodiment, κ < 1; for example, κ may be set to 0.5 when the number of student networks reaches 7 or more.
More preferably, in an exemplary embodiment, the KL loss function is specifically:

$$D_{KL}(p_l\,\|\,p_k) = \sum_{i=1}^{Q}\sum_{m=1}^{M} p_l^m(x_i)\,\log\frac{p_l^m(x_i)}{p_k^m(x_i)}$$

where Q is the number of nodes, M represents the number of categories in the labels of the student network input, and $p_l^m(x_i)$ denotes the probability predicted by student network l that node $x_i$ belongs to class m, i.e. the output of the network.
Here a category means the category to which a node belongs, i.e. the category information contained in the data set. For example, in a citation network each node represents a paper, and the category of the node is the field to which the paper belongs, e.g. machine learning or network optimization.
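For illustration only (this sketch is not part of the patent text; the function name, tensor shapes and the eps smoothing term are assumptions), the KL loss above can be computed in PyTorch as follows:

```python
import torch

def kl_loss(p_l, p_k, eps=1e-10):
    """D_KL(p_l || p_k): KL loss of student network k relative to student l.

    p_l, p_k: (Q, M) tensors of class probabilities (softmax outputs) for
    Q nodes and M classes. p_l is the target distribution; in practice it
    is usually detached so that gradients flow only into student k.
    """
    # sum over nodes i and classes m of p_l * log(p_l / p_k)
    return (p_l * (torch.log(p_l + eps) - torch.log(p_k + eps))).sum()
```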
Preferably, in an exemplary embodiment, the self cross-entropy loss function $L_{CE}^k$ is specifically:

$$L_{CE}^k = -\sum_{i=1}^{Q}\sum_{m=1}^{M} I(y_i, m)\,\log p_k^m(x_i)$$

where M represents the number of categories in the labels of the student network input, $p_k^m(x_i)$ denotes the probability predicted by student network k that node $x_i$ belongs to class m, $y_i$ denotes the class label of the ith node, and $I(y_i, m)$ is an indicator function:

$$I(y_i, m) = \begin{cases} 1, & y_i = m \\ 0, & y_i \neq m \end{cases}$$
More preferably, in an exemplary embodiment, the input of the student network includes P, X and label; X represents the node features, and label represents the category to which each node belongs.
A matrix P of dimension Q × Q is constructed, where the value $m_{ij}$ in row i, column j of P indicates the connection state of nodes i and j in the graph: $m_{ij}$ is 1 if the two nodes are connected by an edge and 0 if they are not, where i = 1, 2, …, Q and j = 1, 2, …, Q.
The student network comprises s sequentially connected GCN layers, as shown in fig. 3, where the output of layer r + 1 is

$$H^{(r+1)} = \sigma\left(\hat{P}\, H^{(r)}\, W^{(r)}\right)$$

where σ is the activation function, $\hat{P} = D^{-\frac{1}{2}} P\, D^{-\frac{1}{2}}$ is a matrix computed from P, D is the degree matrix of P, and $W^{(r)}$ is the weight matrix of layer r; $H^{(0)}$ is the initial input X.
The output of the student network is $H^{(s)}$.
Specifically, taking a citation network as an example, each paper in the citation network is regarded as a node in the graph, and a citation relationship between papers is regarded as an edge: if paper i cites paper j or paper j cites paper i, an edge exists between node i and node j. P is therefore defined as the adjacency matrix of the graph: $m_{ij}$ is 1 if there is an edge between node i and node j, and 0 if the two nodes are not connected. Each paper has its own word vector, which is taken as the node's feature and denoted X. Each paper also has its own category, denoted label. The inputs to the entire network are thus P, X and label.
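As a minimal sketch of preparing these inputs (dense tensors for clarity; a real implementation would use sparse operations, and the helper name is hypothetical), P can be built from the citation pairs and normalized into $\hat{P} = D^{-\frac{1}{2}} P D^{-\frac{1}{2}}$:

```python
import torch

def build_graph_inputs(edges, features):
    """Build the adjacency matrix P and the normalized matrix P_hat.

    edges: iterable of (i, j) pairs, meaning paper i cites paper j
    features: (Q, F) node feature matrix X, e.g. per-paper word vectors
    """
    q = features.shape[0]
    p = torch.zeros(q, q)
    for i, j in edges:
        p[i, j] = 1.0  # a citation in either direction creates an edge
        p[j, i] = 1.0
    deg = p.sum(dim=1)                    # diagonal of the degree matrix D
    d_inv_sqrt = torch.diag(deg.clamp(min=1.0).pow(-0.5))
    p_hat = d_inv_sqrt @ p @ d_inv_sqrt   # D^{-1/2} P D^{-1/2}
    return p_hat, features
```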
The student network may consist of one, two or three GCN layers. Taking a two-layer GCN student network as an example, the first-layer state is

$$H^{(1)} = \sigma\left(\hat{P}\, X\, W^{(0)}\right)$$

where X is the node feature matrix, and the second-layer state is

$$H^{(2)} = \sigma\left(\hat{P}\, H^{(1)}\, W^{(1)}\right)$$

where $H^{(2)}$ is the output of the second layer, i.e. the output of the entire GCN student network. The activation function σ may be the ReLU function.
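The two-layer student above can be sketched as a small PyTorch module; returning raw logits (with the final softmax applied inside the loss) and the absence of bias terms are illustrative assumptions:

```python
import torch
import torch.nn as nn

class GCNStudent(nn.Module):
    """Two-layer GCN student: H^(r+1) = sigma(P_hat @ H^(r) @ W^(r))."""

    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.w0 = nn.Linear(in_dim, hidden_dim, bias=False)       # W^(0)
        self.w1 = nn.Linear(hidden_dim, num_classes, bias=False)  # W^(1)

    def forward(self, p_hat, x):
        h1 = torch.relu(p_hat @ self.w0(x))  # H^(1), sigma = ReLU
        return p_hat @ self.w1(h1)           # H^(2): per-node class logits
```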
Preferably, in an exemplary embodiment, inputting the data to be classified into the trained mutual learning network to obtain the node classification result uses either a voting mechanism or a random mechanism:
the voting mechanism: each student network in the mutual learning network is taken as an independent classifier, each piece of data to be classified is input into every student network to obtain multiple prediction outputs, and the final node classification result is output by majority vote;
the random mechanism: one student network is randomly chosen as the classifier, and the data to be classified is input into that student network to obtain the node classification result.
Specifically, in this exemplary embodiment, once the multiple networks are trained, each network can be regarded as a separate classifier (the random mechanism), or a voting mechanism can be used (for example, when the networks give different predictions for an unknown node, the minority obeys the majority).
Preferably, in an exemplary embodiment, when the size of the model or the number of parameters is limited, the random mechanism is adopted; when model accuracy is the priority and there is no limit on model size or parameter count, the voting mechanism is adopted.
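Both mechanisms reduce to a few lines given a list of trained students (the helper names are hypothetical; `GCNStudent` is the sketch above):

```python
import random
import torch

def classify_voting(students, p_hat, x):
    """Voting mechanism: every student predicts and the majority wins per node."""
    with torch.no_grad():
        preds = torch.stack([s(p_hat, x).argmax(dim=1) for s in students])
    return preds.mode(dim=0).values  # most frequent class per node

def classify_random(students, p_hat, x):
    """Random mechanism: a single randomly chosen student classifies."""
    with torch.no_grad():
        return random.choice(students)(p_hat, x).argmax(dim=1)
```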
Preferably, in an exemplary embodiment, the three or more student networks are structurally different. In particular, it is more effective to use networks of different structures as the sub-networks.
Preferably, in an exemplary embodiment, the structural differences are a difference in the number of hidden layer neurons.
For example, with three student networks using GCNs whose hidden layers have 18, 36 and 54 neurons respectively, the accuracy after mutual training is higher than that of the 18-, 36- and 54-neuron networks trained independently.
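Putting the sketches together, mutual training of three structurally different students (hidden sizes 18, 36 and 54 as in the example; the epoch count and learning rate are illustrative assumptions) might look like this:

```python
import torch

def train_mutual(p_hat, x, labels, epochs=200, kappa=0.5):
    """Jointly train three GCN students with the mutual learning loss."""
    num_classes = int(labels.max()) + 1
    students = [GCNStudent(x.shape[1], h, num_classes) for h in (18, 36, 54)]
    opts = [torch.optim.Adam(s.parameters(), lr=0.01) for s in students]
    for _ in range(epochs):
        logits_all = [s(p_hat, x) for s in students]
        losses = [mutual_learning_loss(logits_all, labels, k, kappa)
                  for k in range(len(students))]
        for opt in opts:
            opt.zero_grad()
        # KL targets are detached, so each L_k backpropagates into
        # student k only and the losses can be summed safely
        sum(losses).backward()
        for opt in opts:
            opt.step()
    return students
```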
Based on any one of the above exemplary embodiments, a further exemplary embodiment of the present invention provides a storage medium having stored thereon computer instructions which, when executed, perform the steps of the mutual learning based graph convolution neural network node classification method.
Based on any one of the above exemplary embodiments, a further exemplary embodiment of the present invention provides a terminal, which includes a memory and a processor, where the memory stores computer instructions executable on the processor, and the processor executes the computer instructions to perform the steps of the mutual learning based graph convolution neural network node classification method.
Based on such understanding, the technical solutions of the present embodiments may be essentially implemented or make a contribution to the prior art, or may be implemented in the form of a software product stored in a storage medium and including several instructions for causing an apparatus to execute all or part of the steps of the methods according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It is to be understood that the above-described embodiments are illustrative only and not restrictive of the invention; in light of this disclosure, various modifications and changes will be apparent to those skilled in the art. It is neither necessary nor possible to exhaust all embodiments here, and obvious variations or modifications derived from them remain within the protection scope of the invention.

Claims (8)

1. A mutual learning-based graph convolution neural network node classification method, characterized by comprising the following steps:
inputting data to be classified into a trained mutual learning network to obtain a node classification result; training the mutual learning network comprises the following substep:
training a mutual learning network comprising three or more student networks using a loss function $L_k$; the loss function $L_k$ includes a self cross-entropy loss function $L_{CE}^k$ and a KL loss function:

$$L_k = L_{CE}^k + \frac{\kappa}{N-1}\sum_{l=1,\,l\neq k}^{N} D_{KL}(p_l\,\|\,p_k)$$

where N represents the number of student networks, k denotes the kth student network, l denotes a student network other than the kth, $D_{KL}(p_l\|p_k)$ denotes the KL loss of the kth student network relative to the lth student network, $p_l$ denotes the predicted output of the lth student network, and $p_k$ denotes the predicted output of the kth student network; κ controls the weight of the KL loss, with κ < 1;
the KL loss function is specifically:

$$D_{KL}(p_l\,\|\,p_k) = \sum_{i=1}^{Q}\sum_{m=1}^{M} p_l^m(x_i)\,\log\frac{p_l^m(x_i)}{p_k^m(x_i)}$$

where Q is the number of nodes, M represents the number of categories in the labels of the student network input, $p_l^m(x_i)$ denotes the probability predicted by student network l that node $x_i$ belongs to class m, i.e. the output of the network, and $p_k^m(x_i)$ denotes the probability predicted by student network k that node $x_i$ belongs to class m;
the category is category information contained in the data set, and for the citation network, each node represents a paper, and the category of the node is the category to which the paper belongs;
the input of the student network comprises P, X and label; x represents the characteristics of the node, and label represents the category to which the node belongs;
wherein a matrix P of dimension Q × Q is constructed, where the value $m_{ij}$ in row i, column j of P indicates the connection state of nodes i and j in the graph: $m_{ij}$ is 1 if the two nodes are connected by an edge and 0 if they are not, where i = 1, 2, …, Q and j = 1, 2, …, Q;
the student network comprises s sequentially connected GCN layers, where the output of layer r + 1 is

$$H^{(r+1)} = \sigma\left(\hat{P}\, H^{(r)}\, W^{(r)}\right)$$

where σ is the activation function, $\hat{P} = D^{-\frac{1}{2}} P\, D^{-\frac{1}{2}}$ is a matrix computed from P, D is the degree matrix of P, and $W^{(r)}$ is the weight matrix of layer r; $H^{(0)}$ is the initial input X;
the output of the student network is $H^{(s)}$;
Each paper in the citation network is taken as a node in the graph, and a citation relationship between papers is regarded as an edge, i.e. if paper i cites paper j or paper j cites paper i, an edge exists between node i and node j; each paper has its own word vector, which is taken as the node's feature and denoted X; each paper also has its own category, denoted label; the inputs to the entire network are denoted P, X and label.
2. The mutual learning-based graph convolution neural network node classification method of claim 1, characterized in that the self cross-entropy loss function $L_{CE}^k$ is specifically:

$$L_{CE}^k = -\sum_{i=1}^{Q}\sum_{m=1}^{M} I(y_i, m)\,\log p_k^m(x_i)$$

where M represents the number of categories in the labels of the student network input, $y_i$ denotes the class label of the ith node, and $I(y_i, m)$ is an indicator function:

$$I(y_i, m) = \begin{cases} 1, & y_i = m \\ 0, & y_i \neq m \end{cases}$$
3. The mutual learning-based graph convolution neural network node classification method of claim 1, characterized in that inputting the data to be classified into the trained mutual learning network to obtain the node classification result uses either a voting mechanism or a random mechanism:
the voting mechanism: each student network in the mutual learning network is taken as an independent classifier, each piece of data to be classified is input into every student network to obtain multiple prediction outputs, and the final node classification result is output by majority vote;
the random mechanism: one student network is randomly chosen as the classifier, and the data to be classified is input into that student network to obtain the node classification result.
4. The mutual learning-based graph convolution neural network node classification method of claim 3, characterized in that when the size of the model or the number of parameters is limited, the random mechanism is adopted; when model accuracy is the priority and there is no limit on model size or parameter count, the voting mechanism is adopted.
5. The mutual learning based graph convolution neural network node classification method of claim 1, characterized in that: the three or more student networks have different structures.
6. The mutual learning-based graph convolution neural network node classification method of claim 5, characterized in that the structural difference is a difference in the number of hidden-layer neurons.
7. A storage medium having computer instructions stored thereon, characterized in that the computer instructions, when executed, perform the steps of the mutual learning-based graph convolution neural network node classification method of any one of claims 1-6.
8. A terminal comprising a memory and a processor, the memory having stored thereon computer instructions executable on the processor, characterized in that the processor, when executing the computer instructions, performs the steps of the mutual learning-based graph convolution neural network node classification method of any one of claims 1-6.
CN202011540958.1A 2020-12-23 2020-12-23 Mutual learning-based graph convolution neural network node classification method, storage medium and terminal Active CN112613559B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011540958.1A CN112613559B (en) 2020-12-23 2020-12-23 Mutual learning-based graph convolution neural network node classification method, storage medium and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011540958.1A CN112613559B (en) 2020-12-23 2020-12-23 Mutual learning-based graph convolution neural network node classification method, storage medium and terminal

Publications (2)

Publication Number Publication Date
CN112613559A CN112613559A (en) 2021-04-06
CN112613559B true CN112613559B (en) 2023-04-07

Family

ID=75244523

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011540958.1A Active CN112613559B (en) 2020-12-23 2020-12-23 Mutual learning-based graph convolution neural network node classification method, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN112613559B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114461629A (en) * 2022-02-10 2022-05-10 电子科技大学 Temperature calibration method and device for aircraft engine and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109977223A (en) * 2019-03-06 2019-07-05 中南大学 A method of the figure convolutional network of fusion capsule mechanism classifies to paper
CN110688474A (en) * 2019-09-03 2020-01-14 西北工业大学 Embedded representation obtaining and citation recommending method based on deep learning and link prediction
CN110781933A (en) * 2019-10-14 2020-02-11 杭州电子科技大学 Visual analysis method for understanding graph convolution neural network
CN111079781A (en) * 2019-11-07 2020-04-28 华南理工大学 Lightweight convolutional neural network image identification method based on low rank and sparse decomposition
CN111464327A (en) * 2020-02-25 2020-07-28 电子科技大学 Spatial information network survivability evaluation method based on graph convolution network
CN111476368A (en) * 2020-04-10 2020-07-31 电子科技大学 Impulse neural network weight imaging comparison prediction and network anti-interference method
CN111553470A (en) * 2020-07-10 2020-08-18 成都数联铭品科技有限公司 Information interaction system and method suitable for federal learning
CN111767711A (en) * 2020-09-02 2020-10-13 之江实验室 Compression method and platform of pre-training language model based on knowledge distillation
WO2020249961A1 (en) * 2019-06-14 2020-12-17 Vision Semantics Limited Optimised machine learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190251480A1 (en) * 2018-02-09 2019-08-15 NEC Laboratories Europe GmbH Method and system for learning of classifier-independent node representations which carry class label information

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109977223A (en) * 2019-03-06 2019-07-05 中南大学 A method of the figure convolutional network of fusion capsule mechanism classifies to paper
WO2020249961A1 (en) * 2019-06-14 2020-12-17 Vision Semantics Limited Optimised machine learning
CN110688474A (en) * 2019-09-03 2020-01-14 西北工业大学 Embedded representation obtaining and citation recommending method based on deep learning and link prediction
CN110781933A (en) * 2019-10-14 2020-02-11 杭州电子科技大学 Visual analysis method for understanding graph convolution neural network
CN111079781A (en) * 2019-11-07 2020-04-28 华南理工大学 Lightweight convolutional neural network image identification method based on low rank and sparse decomposition
CN111464327A (en) * 2020-02-25 2020-07-28 电子科技大学 Spatial information network survivability evaluation method based on graph convolution network
CN111476368A (en) * 2020-04-10 2020-07-31 电子科技大学 Impulse neural network weight imaging comparison prediction and network anti-interference method
CN111553470A (en) * 2020-07-10 2020-08-18 成都数联铭品科技有限公司 Information interaction system and method suitable for federal learning
CN111767711A (en) * 2020-09-02 2020-10-13 之江实验室 Compression method and platform of pre-training language model based on knowledge distillation

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Kuang Ping et al. Research on central issues of crowd density estimation. 2013 10th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), 2014: 143-145. *
Thoudam Doren Singh. A Hybrid Classification Approach using Topic Modeling and Graph Convolution Networks. 2020 International Conference on Computational Performance Evaluation (ComPE), 2020: 285-289. *
Zhang Xingyuan et al. Sketch recognition based on fusion of sketch texture and shape features. Acta Automatica Sinica, 2020, 48(48): 2223-2232. *

Also Published As

Publication number Publication date
CN112613559A (en) 2021-04-06

Similar Documents

Publication Publication Date Title
Gu et al. Stack-captioning: Coarse-to-fine learning for image captioning
US11514305B1 (en) Intelligent control with hierarchical stacked neural networks
CN108027899B (en) Method for improving performance of trained machine learning model
CN112861936B (en) Graph node classification method and device based on graph neural network knowledge distillation
CN112925977A (en) Recommendation method based on self-supervision graph representation learning
CN111931505A (en) Cross-language entity alignment method based on subgraph embedding
CN112784929B (en) Small sample image classification method and device based on double-element group expansion
JP2022507255A (en) Computer architecture for artificial image generation with automatic encoders
JP2022547460A (en) Performing XNOR equivalent operations by adjusting column thresholds of compute-in-memory arrays
CN111291556A (en) Chinese entity relation extraction method based on character and word feature fusion of entity meaning item
CN113869404B (en) Self-adaptive graph roll accumulation method for paper network data
CN110363230A (en) Stacking integrated sewage handling failure diagnostic method based on weighting base classifier
WO2022036921A1 (en) Acquisition of target model
CN112613559B (en) Mutual learning-based graph convolution neural network node classification method, storage medium and terminal
Blot et al. Shade: Information-based regularization for deep learning
CN112749737A (en) Image classification method and device, electronic equipment and storage medium
CN115577283A (en) Entity classification method and device, electronic equipment and storage medium
CN116311880A (en) Traffic flow prediction method and equipment based on local-global space-time feature fusion
CN110808036B (en) Incremental voice command word recognition method
CN116992942A (en) Natural language model optimization method, device, natural language model, equipment and medium
Jammoussi et al. A hybrid method based on extreme learning machine and self organizing map for pattern classification
CN116779061A (en) Interactive drug molecule design method, device, electronic equipment and medium
Monner et al. Recurrent neural collective classification
CN115438658A (en) Entity recognition method, recognition model training method and related device
Azeez Joodi et al. A New Proposed Hybrid Learning Approach with Features for Extraction of Image Classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant