CN111275201A - Sub-graph division based distributed implementation method for semi-supervised learning of graph - Google Patents

Info

Publication number: CN111275201A (application CN202010068356.4A)
Authority: CN (China)
Prior art keywords: graph; node; formula; solution; subgraph
Prior art date: 2020-01-21
Legal status: Pending
Application number: CN202010068356.4A
Other languages: Chinese (zh)
Inventors: 蒋俊正, 黄炟鑫, 冯海荣, 卢军志, 池源
Current Assignee: Guilin University of Electronic Technology
Original Assignee: Guilin University of Electronic Technology
Priority date: 2020-01-21
Filing date: 2020-01-21
Publication date: 2020-06-12
Application filed by Guilin University of Electronic Technology; priority to CN202010068356.4A; published as CN111275201A. Current legal status: Pending.

Classifications

    • G06N 20/00: Machine learning
    • G06F 18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/241: Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches


Abstract

The invention discloses a distributed implementation method for graph semi-supervised learning based on subgraph partitioning, characterized by comprising the following steps: 1) constructing a graph; 2) modeling the optimization problem; 3) dividing subgraphs and decomposing the optimization problem; 4) solving the subproblems and splicing the solutions; 5) solving iteratively; 6) solving in a distributed manner. The method requires little computation time, the data it needs are quick to obtain, and the results it computes on large-scale data are consistent with those of the centralized method.

Description

Sub-graph division based distributed implementation method for semi-supervised learning of graph
Technical Field
The invention relates to the technical field of machine learning and graph signal processing, and in particular to a distributed implementation method for graph semi-supervised learning based on subgraph partitioning.
Background
Ours is the era of big data: acquiring and storing data is simpler than ever, and the key challenge has become extracting valuable information from such enormous volumes of data. Machine learning algorithms are now widely used to process big data, and algorithms such as neural networks have achieved notable results in practice; however, their training times are long, and the labeled data needed for training are very difficult to obtain.
Graph semi-supervised learning is an important branch of machine learning and has unique advantages over other machine learning algorithms. It is a transductive algorithm, so it computes the result directly without training a model; and as a semi-supervised algorithm its demands on the data are low, since only a small portion of labeled data is needed to label the rest. For these reasons it is worthwhile to improve the current solution methods for the graph semi-supervised learning problem.
Disclosure of Invention
The invention aims to provide, in view of the shortcomings of the prior art, a distributed implementation method for graph semi-supervised learning based on subgraph partitioning. The method requires little computation time, the data it needs are quick to obtain, and the results it computes on large-scale data are consistent with those of the centralized method.
The technical scheme for realizing the purpose of the invention is as follows:
a distributed implementation method of semi-supervised learning of a graph based on sub-graph partitioning comprises the following steps:
1) constructing a graph: let the data set for semi-supervised learning be $\mathcal{X} = \{x_1, x_2, \ldots, x_N\}$, containing $N$ samples in total, with $x_n$ denoting the $n$-th sample. The labels in the data set all come from a set of $c$ classes. The label information of $\{x_1, x_2, \ldots, x_l\}$ is known, with corresponding labels $\{y_1, y_2, \ldots, y_l\}$, while the label information of $\{x_{l+1}, \ldots, x_N\}$ is unknown. According to the similarity of the samples in $\mathcal{X}$, a graph $\mathcal{G} = (\mathcal{V}, E)$ is constructed, where $\mathcal{V}$ and $E$ are the node set and edge set respectively: each node in $\mathcal{V}$ corresponds to one sample in the data set, and $E$ contains the connection information between the nodes;
2) modeling an optimization problem: the label information of the data set to be processed is represented as a graph signal $f = [f_1, \ldots, f_N]^T$, whose value at each node is the label of the corresponding sample. The optimization problem of graph semi-supervised learning is defined as:

$$F^{*} = \arg\min_{F} \sum_{j=1}^{c} \left( \tau \left\| F_{:,j} - Y_{:,j} \right\|_2^2 + S(F_{:,j}) \right) \qquad (1)$$

$$\hat{y}_n = \arg\max_{1 \le j \le c} F_{n,j} \qquad (2)$$

Formula (1) propagates the information of each class of label to the samples whose label information is unknown, and formula (2) then extracts the final classification result. In formula (1), $F$ is the classification matrix and $Y$ is the known-label-information matrix, both of size $N \times c$; $Y$ is generated from the data set to be processed as follows:

$$Y_{n,j} = \begin{cases} 1, & \text{if the label of } x_n \text{ is known to be } j \\ 0, & \text{otherwise} \end{cases} \qquad (3)$$

In formula (1), $\tau \| F_{:,j} - Y_{:,j} \|_2^2$ is the matching term with weighting factor $\tau$, and the penalty term is set as

$$S(F_{:,j}) = F_{:,j}^{T} L_{norm} F_{:,j}$$

where $L_{norm} = I - D^{-1/2} W D^{-1/2}$ is the normalized graph Laplacian matrix, $I$ is the identity matrix, $D$ is the degree matrix and $W$ is the adjacency matrix. The propagation of the $j$-th label's information in formula (1) is expressed as:

$$\min_{F_{:,j}} \ \tau \left\| F_{:,j} - Y_{:,j} \right\|_2^2 + F_{:,j}^{T} L_{norm} F_{:,j} \qquad (4)$$

Formula (4) can be expressed compactly as:

$$\min_{f_j} \ \tau \left\| f_j - y_j \right\|_2^2 + f_j^{T} L_{norm} f_j \qquad (5)$$

where in formula (5) $f_j = F_{:,j}$, $y_j = Y_{:,j}$, and $f_j^{T}$ is the transpose of $f_j$;
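A minimal sketch of step 2), assuming only what is stated around formulas (3), (5) and (10): $Y$ is one-hot on the known samples, and each column of $F$ solves $C f_j = b_j$ with $C = \tau I + L_{norm}$ and $b_j = y_j$ (dropping the factor $\tau$ on the right-hand side only rescales $F$ and leaves the argmax of formula (2) unchanged). The function names are illustrative:

```python
import numpy as np

def build_label_matrix(labels, N, c):
    """labels: dict {sample index: class in 0..c-1} for the l known samples (formula (3))."""
    Y = np.zeros((N, c))
    for n, j in labels.items():
        Y[n, j] = 1.0                 # one-hot row for each sample with known label
    return Y

def centralized_solution(L_norm, Y, tau):
    """Centralized reference solution: solve C f_j = y_j for every label column j."""
    C = tau * np.eye(L_norm.shape[0]) + L_norm   # C = tau*I + L_norm as in formula (10)
    F = np.linalg.solve(C, Y)                    # all c columns solved at once
    return F.argmax(axis=1)                      # formula (2): predicted class per sample
```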
3) sub-graph division and optimization problem decomposition: the graph $\mathcal{G}$ is divided into subgraphs by means of indicator operators. The indicator operator $P_k$ is a diagonal matrix, defined as:

$$\left( P_k \right)_{i,i} = \begin{cases} 1, & i \in B(k, 2r) \\ 0, & \text{otherwise} \end{cases} \qquad (7)$$

In formula (7), $B(k, 2r)$ is the node set that contains node $k$ and its neighbor nodes within a radius of $2r$. For each node $k$, a subgraph $\mathcal{G}_k$ centered on $k$ with node set $B(k, 2r)$ is split off; for a graph with $N$ nodes there are $N$ such subgraphs. The optimization problem (5) is projected onto each subgraph: for each node $k$ and its corresponding subgraph $\mathcal{G}_k$ there is a corresponding optimization subproblem, as shown in formula (8):

$$\min_{f_{j,k}} \ \tau \left\| f_{j,k} - y_{j,k} \right\|_2^2 + f_{j,k}^{T} L_{norm,k} \, f_{j,k} \qquad (8)$$

In formula (8), $f_{j,k}$ is the projection of $f_j$ onto the subgraph $\mathcal{G}_k$, and $y_{j,k}$ and $L_{norm,k}$ are likewise the projections of $y_j$ and $L_{norm}$ onto $\mathcal{G}_k$;
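A sketch of the partition in step 3), assuming $B(k, 2r)$ means the set of nodes within $2r$ hops of node $k$, and realizing the indicator operator $P_k$ as an index set rather than a full $N \times N$ diagonal matrix:

```python
from collections import deque
import numpy as np

def ball(W, k, radius):
    """Node set B(k, radius): node k plus all nodes within `radius` hops of k."""
    seen, frontier = {k}, deque([(k, 0)])
    while frontier:                                # breadth-first search outward from k
        node, hops = frontier.popleft()
        if hops == radius:
            continue
        for nbr in np.nonzero(W[node])[0]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, hops + 1))
    return np.array(sorted(seen))

def project(C, b, nodes):
    """Projection of the system C f = b onto a subgraph's node set, as used in formulas (8) and (10)."""
    return C[np.ix_(nodes, nodes)], b[nodes]
```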
4) solving the subproblems and splicing the solutions: the subproblem (8) of step 3) is solved according to formula (10):

$$\tilde{f}_{j,k} = C_k^{-1} \, b_{j,k} \qquad (10)$$

In formula (10), $C = \tau I + L_{norm}$ and $b_j = y_j$, with $C_k$ and $b_{j,k}$ their projections onto the subgraph $\mathcal{G}_k$; $\tilde{f}_{j,k}$ is the local solution provided by node $k$ and its neighbors. The local solutions on all nodes are spliced together according to formula (11) and then averaged:

$$\tilde{f}_j(i) = \frac{1}{\left| \{ k : i \in B(k, 2r) \} \right|} \sum_{k \,:\, i \in B(k, 2r)} \tilde{f}_{j,k}(i) \qquad (11)$$

The result $\tilde{f}_j$ of formula (11) is an approximation to the solution of the global problem (5);
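A sketch of step 4) under the reading of formula (11) given above: every subgraph solves its projected system, and each node averages the local estimates from all subgraphs that cover it. This is a centralized simulation of the node-parallel computation:

```python
import numpy as np

def local_solve_and_average(C, b, balls):
    """balls[k] = node indices of B(k, 2r); returns the spliced-and-averaged approximation."""
    N = C.shape[0]
    acc = np.zeros(N)                            # running sum of local estimates per node
    cnt = np.zeros(N)                            # number of subgraphs covering each node
    for k in range(N):
        nodes = balls[k]
        Ck = C[np.ix_(nodes, nodes)]
        fk = np.linalg.solve(Ck, b[nodes])       # formula (10): local solution on subgraph G_k
        acc[nodes] += fk
        cnt[nodes] += 1
    return acc / cnt                             # formula (11): splice, then average
```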
5) iterative solution: $\tilde{f}_j$ has an error with respect to the exact solution, so an iterative computation according to formula (12) is adopted to make the iterate $f_j^{(m)}$ approach the solution of the global problem (5):

$$f_j^{(m+1)} = f_j^{(m)} + \tilde{v}_j^{(m)}, \qquad v_j^{(m)} = b_j - C f_j^{(m)} \qquad (12)$$

In formula (12), the superscript $m$ is the iteration number, $f_j^{(0)} = \tilde{f}_j$, $v_j^{(m)}$ is the residual of the current iterate, and $\tilde{v}_j^{(m)}$ denotes the approximate solution of $C v = v_j^{(m)}$ obtained by the local solve and splice of formulas (10) and (11);
6) distributed solution: for node $k$ and its corresponding subgraph $\mathcal{G}_k$, the following steps are performed (a Python sketch of this loop follows the list):
6-1) $b_j$ is projected onto the subgraph $\mathcal{G}_k$ to obtain $b_{j,k}$, which is taken as the initial signal value on the subgraph's nodes, i.e. $v_{j,k}^{(0)} = b_{j,k}$;
6-2) the operator $P$ is projected onto the subgraph $\mathcal{G}_k$ and represented by the corresponding information vector $p_k$;
6-3) compute the value of the local approximate iterative solution $\tilde{v}_{j,k}^{(m)}$ at node $k$;
6-4) compute the update of the local approximate solution $f_j^{(m+1)}(k)$ at node $k$ from the local approximate iterative solution;
6-5) let node $k$ exchange information with its neighbor nodes within radius $2r$ to generate the partial solution $v_{j,k}$ on the subgraph $\mathcal{G}_k$;
6-6) compute the new residual $v_j^{(m+1)}(k)$, preparing for the next iteration;
6-7) exchange information once more, this time the values of $v_j^{(m+1)}$ on the nodes of the corresponding subgraph $\mathcal{G}_k$, generating the initial value $v_{j,k}^{(m+1)}$ of the next iteration;
6-8) judge whether the termination condition $|v_j(k)| < \varepsilon$ is reached: if it is, splice the values $f_j(k)$ on all nodes together to generate the propagation result $f_j$ of label $j$ and start the propagation of the next label's information; if not, proceed to the next iteration with the initial value $v_{j,k}^{(m+1)}$.
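The loop below sketches steps 5) and 6) as a residual-correction iteration, reusing local_solve_and_average from the previous sketch. This is our reading of formula (12) and of the exchange steps 6-1) to 6-8); eps and max_iter are illustrative, and the per-node message exchanges are simulated here by whole-vector operations:

```python
import numpy as np

def distributed_label_propagation(C, b, balls, eps=1e-6, max_iter=100):
    """Approximate solution of C f = b by iterated local solve-and-average (formula (12))."""
    f = np.zeros(C.shape[0])                   # so the first sweep starts from the projected b, as in 6-1)
    for _ in range(max_iter):
        v = b - C @ f                          # residual; computable from 2r-hop neighbor exchanges
        if np.max(np.abs(v)) < eps:            # 6-8): stop once |v_j(k)| < eps at every node
            break
        f = f + local_solve_and_average(C, v, balls)   # formula (12): add the local correction
    return f                                   # spliced propagation result f_j for one label
```

Running this once per label column $j$ and stacking the results gives the classification matrix $F$, from which formula (2) extracts the classes.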
In this technical scheme, subgraph partitioning converts the global graph semi-supervised learning problem into a series of simpler local problems; each local problem is then assigned to the corresponding node in the graph, and all nodes in the graph solve iteratively to obtain a global approximate result.
The method requires little computation time, the data it needs are quick to obtain, and the semi-supervised learning results it computes on large-scale data are consistent with the centralized results.
Drawings
FIG. 1 is a schematic diagram of the iterative convergence of the example method on the Minnesota data;
FIG. 2 is a schematic diagram of the iterative convergence of the example method on the MNIST data.
Detailed Description
The invention will be further elucidated with reference to the drawings and examples, without however being limited thereto.
Example:
Steps 1) to 6) are carried out exactly as described in the disclosure above. The method is then evaluated in two simulation examples.
Simulation example 1:
This example uses the Minnesota traffic data set, which has 2642 data points taking only the values +1 and -1, so it can be regarded as a data set containing two classes of labels. Before the simulation, the label signal is sampled to generate data sets with different proportions of known labels: 1%, 2%, 5%, 10%, 20% and 50%. Table 1 compares the classification performance of the present method and the centralized computation method on the Minnesota data, in terms of classification accuracy and computation time.
TABLE 1
Known label ratio       1%       2%       5%       10%      20%      50%
Accuracy                90.29%   93.94%   95.88%   97.38%   98.19%   99.11%
Time (s)                0.2532   0.2782   0.2991   0.3181   0.3439   0.3847
Centralized accuracy    90.65%   93.96%   95.88%   97.38%   98.19%   99.11%
Centralized time (s)    0.3565   0.3886   0.4055   0.3835   0.4080   0.4115
The simulation results in Table 1 show that on the Minnesota data the error between the present method and the traditional centralized method is small when the known-label ratio is low, shrinks further as the ratio increases, and disappears once the ratio is high enough, at which point the two methods give identical results. As shown in FIG. 1, the present method converges within a limited number of iterations, produces a good approximate result, and takes less time than the centralized method.
Simulation example 2:
This example uses the MNIST data set for the simulation. MNIST is a handwritten-digit data set whose content is images of the digits 0-9; 10000 of its images are taken for this simulation. First, the Euclidean distance is used to measure the similarity between images, and the kNN algorithm is then used to construct the graph $\mathcal{G}$ of the data set. Finally, the label information is sampled to generate data sets with known-label ratios of 1%, 2%, 5%, 10%, 20% and 50%, the same as in simulation example 1. Table 2 compares the classification performance of the present method and the centralized computation method on the MNIST data, in terms of classification accuracy and computation time.
TABLE 2
Known label ratio       1%       2%       5%       10%      20%      50%
Accuracy                85.89%   89.27%   91.84%   93.28%   94.86%   97.33%
Time (s)                3.41     3.84     4.25     4.68     4.97     5.53
Centralized accuracy    85.97%   89.27%   91.84%   93.28%   94.86%   97.33%
Centralized time (s)    16.17    16.25    16.39    16.24    16.26    16.38
The simulation results in Table 2 show that on the MNIST data the errors of the present method and the centralized method vary as in simulation example 1, shrinking as the proportion of labeled data increases until the two agree; in this simulation, however, the present method computes significantly faster than the centralized method. As the curve in FIG. 2 shows, the present method not only converges within a limited number of iterations, but also needs fewer steps to converge than in simulation example 1, demonstrating the advantages of the method on large-scale data.
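For orientation, a hypothetical end-to-end run tying the sketches above together on synthetic two-class data (a stand-in, not the actual Minnesota or MNIST sets); the values of tau, k and 2r are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) + np.repeat(np.eye(2) * 4, 100, axis=0)  # two Gaussian blobs
truth = np.repeat([0, 1], 100)

W, deg, L_norm = build_knn_graph(X, k=8)                 # step 1): graph construction
known = rng.choice(200, size=20, replace=False)          # 10% known-label ratio
Y = build_label_matrix({int(n): int(truth[n]) for n in known}, N=200, c=2)

C = 0.1 * np.eye(200) + L_norm                           # C = tau*I + L_norm with tau = 0.1
balls = [ball(W, k, radius=2) for k in range(200)]       # B(k, 2r) with 2r = 2
F = np.column_stack([distributed_label_propagation(C, Y[:, j], balls)
                     for j in range(2)])                 # one propagation per label
print("accuracy:", (F.argmax(axis=1) == truth).mean())   # formula (2) on the stitched result
```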

Claims (1)

1. A distributed implementation method for graph semi-supervised learning based on subgraph partitioning, characterized by comprising the following steps:
1) constructing a graph: let the data set for semi-supervised learning be $\mathcal{X} = \{x_1, x_2, \ldots, x_N\}$, containing $N$ samples in total, with $x_n$ denoting the $n$-th sample. The labels in the data set all come from a set of $c$ classes. The label information of $\{x_1, x_2, \ldots, x_l\}$ is known, with corresponding labels $\{y_1, y_2, \ldots, y_l\}$, while the label information of $\{x_{l+1}, \ldots, x_N\}$ is unknown. According to the similarity of the samples in $\mathcal{X}$, a graph $\mathcal{G} = (\mathcal{V}, E)$ is constructed, where $\mathcal{V}$ and $E$ are the node set and edge set respectively: each node in $\mathcal{V}$ corresponds to one sample in the data set, and $E$ contains the connection information between the nodes;
2) modeling an optimization problem: the label information of the data set to be processed is represented as a graph signal $f = [f_1, \ldots, f_N]^T$, whose value at each node is the label of the corresponding sample. The optimization problem of graph semi-supervised learning is defined as:

$$F^{*} = \arg\min_{F} \sum_{j=1}^{c} \left( \tau \left\| F_{:,j} - Y_{:,j} \right\|_2^2 + S(F_{:,j}) \right) \qquad (1)$$

$$\hat{y}_n = \arg\max_{1 \le j \le c} F_{n,j} \qquad (2)$$

Formula (1) propagates the information of each class of label to the samples whose label information is unknown, and formula (2) then extracts the final classification result. In formula (1), $F$ is the classification matrix and $Y$ is the known-label-information matrix, both of size $N \times c$; $Y$ is generated from the data set to be processed as follows:

$$Y_{n,j} = \begin{cases} 1, & \text{if the label of } x_n \text{ is known to be } j \\ 0, & \text{otherwise} \end{cases} \qquad (3)$$

In formula (1), $\tau \| F_{:,j} - Y_{:,j} \|_2^2$ is the matching term with weighting factor $\tau$, and the penalty term is set as

$$S(F_{:,j}) = F_{:,j}^{T} L_{norm} F_{:,j}$$

where $L_{norm} = I - D^{-1/2} W D^{-1/2}$ is the normalized graph Laplacian matrix, $I$ is the identity matrix, $D$ is the degree matrix and $W$ is the adjacency matrix. The propagation of the $j$-th label's information in formula (1) is expressed as:

$$\min_{F_{:,j}} \ \tau \left\| F_{:,j} - Y_{:,j} \right\|_2^2 + F_{:,j}^{T} L_{norm} F_{:,j} \qquad (4)$$

Formula (4) can be expressed compactly as:

$$\min_{f_j} \ \tau \left\| f_j - y_j \right\|_2^2 + f_j^{T} L_{norm} f_j \qquad (5)$$

where in formula (5) $f_j = F_{:,j}$, $y_j = Y_{:,j}$, and $f_j^{T}$ is the transpose of $f_j$;
3) sub-graph division and optimization problem decomposition: the graph $\mathcal{G}$ is divided into subgraphs by means of indicator operators. The indicator operator $P_k$ is a diagonal matrix, defined as:

$$\left( P_k \right)_{i,i} = \begin{cases} 1, & i \in B(k, 2r) \\ 0, & \text{otherwise} \end{cases} \qquad (7)$$

In formula (7), $B(k, 2r)$ is the node set that contains node $k$ and its neighbor nodes within a radius of $2r$. For each node $k$, a subgraph $\mathcal{G}_k$ centered on $k$ with node set $B(k, 2r)$ is split off; for a graph with $N$ nodes there are $N$ such subgraphs. The optimization problem (5) is projected onto each subgraph: for each node $k$ and its corresponding subgraph $\mathcal{G}_k$ there is a corresponding optimization subproblem, as shown in formula (8):

$$\min_{f_{j,k}} \ \tau \left\| f_{j,k} - y_{j,k} \right\|_2^2 + f_{j,k}^{T} L_{norm,k} \, f_{j,k} \qquad (8)$$

In formula (8), $f_{j,k}$ is the projection of $f_j$ onto the subgraph $\mathcal{G}_k$, and $y_{j,k}$ and $L_{norm,k}$ are likewise the projections of $y_j$ and $L_{norm}$ onto $\mathcal{G}_k$;
4) solving the subproblems and splicing the solutions: the subproblem (8) of step 3) is solved according to formula (10):

$$\tilde{f}_{j,k} = C_k^{-1} \, b_{j,k} \qquad (10)$$

In formula (10), $C = \tau I + L_{norm}$ and $b_j = y_j$, with $C_k$ and $b_{j,k}$ their projections onto the subgraph $\mathcal{G}_k$; $\tilde{f}_{j,k}$ is the local solution provided by node $k$ and its neighbors. The local solutions on all nodes are spliced together according to formula (11) and then averaged:

$$\tilde{f}_j(i) = \frac{1}{\left| \{ k : i \in B(k, 2r) \} \right|} \sum_{k \,:\, i \in B(k, 2r)} \tilde{f}_{j,k}(i) \qquad (11)$$

The result $\tilde{f}_j$ of formula (11) is an approximation to the solution of the global problem (5);
5) iterative solution: an iterative computation according to formula (12) is adopted to make the iterate $f_j^{(m)}$ approach the solution of the global problem (5):

$$f_j^{(m+1)} = f_j^{(m)} + \tilde{v}_j^{(m)}, \qquad v_j^{(m)} = b_j - C f_j^{(m)} \qquad (12)$$

In formula (12), the superscript $m$ is the iteration number, $f_j^{(0)} = \tilde{f}_j$, $v_j^{(m)}$ is the residual of the current iterate, and $\tilde{v}_j^{(m)}$ denotes the approximate solution of $C v = v_j^{(m)}$ obtained by the local solve and splice of formulas (10) and (11);
6) distributed solution: for node $k$ and its corresponding subgraph $\mathcal{G}_k$, the following steps are performed:
6-1) $b_j$ is projected onto the subgraph $\mathcal{G}_k$ to obtain $b_{j,k}$, which is taken as the initial signal value on the subgraph's nodes, i.e. $v_{j,k}^{(0)} = b_{j,k}$;
6-2) the operator $P$ is projected onto the subgraph $\mathcal{G}_k$ and represented by the corresponding information vector $p_k$;
6-3) compute the value of the local approximate iterative solution $\tilde{v}_{j,k}^{(m)}$ at node $k$;
6-4) compute the update of the local approximate solution $f_j^{(m+1)}(k)$ at node $k$ from the local approximate iterative solution;
6-5) let node $k$ exchange information with its neighbor nodes within radius $2r$ to generate the partial solution $v_{j,k}$ on the subgraph $\mathcal{G}_k$;
6-6) compute the new residual $v_j^{(m+1)}(k)$, preparing for the next iteration;
6-7) exchange information once more, this time the values of $v_j^{(m+1)}$ on the nodes of the corresponding subgraph $\mathcal{G}_k$, generating the initial value $v_{j,k}^{(m+1)}$ of the next iteration;
6-8) judge whether the termination condition $|v_j(k)| < \varepsilon$ is reached: if it is, splice the values $f_j(k)$ on all nodes together to generate the propagation result $f_j$ of label $j$ and start the propagation of the next label's information; if not, proceed to the next iteration with the initial value $v_{j,k}^{(m+1)}$.
Priority Applications (1)

CN202010068356.4A, filed 2020-01-21, priority date 2020-01-21: Sub-graph division based distributed implementation method for semi-supervised learning of graph; published as CN111275201A on 2020-06-12; Family ID 71001851; country: CN (China); legal status: Pending.

Cited By (1)

* Cited by examiner, † Cited by third party
CN112417188A * | priority 2020-12-10 | published 2021-02-26 | Guilin University of Electronic Technology | Hyperspectral image classification method based on graph model



Legal Events

PB01: Publication (application publication date: 2020-06-12)
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication