CN108833158A - A k-means-based similarity community discovery method - Google Patents
A k-means-based similarity community discovery method
- Publication number
- CN108833158A CN201810593224.6A
- Authority
- CN
- China
- Prior art keywords
- node
- cluster
- degree
- cluster centre
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/12—Discovery or management of network topologies
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
Abstract
The present invention relates to a k-means-based similarity community discovery method. Instead of randomly selecting initial cluster centres as in the traditional k-means clustering method, the invention determines the initial cluster centres by combining inter-node similarity, node density, and node degree. This avoids the unstable clustering results caused by randomly selected initial centres, prevents isolated nodes from being chosen as initial cluster centres, and guarantees that the K initial cluster centres found are not so close to one another that the clustering quality suffers; it thereby both reduces the number of iterations in the clustering process and improves the accuracy of community division. In addition, the invention computes the Euclidean distance between nodes from their Jaccard similarity vectors, which assigns nodes well to their respective communities and yields a highly accurate community division.
Description
Technical field
The present invention relates to network community discovery methods, and in particular to a k-means-based similarity community discovery method.
Background technique
Community structure discovery has become a hot issue in complex network research. In recent years it has attracted extensive attention from researchers in computer science, mathematics, biology, sociology, and related fields; ongoing research on knowledge mapping, for example, draws on the theory of community discovery. Community structure has been found to exist in these networks, and discovering the community structure of a complex network has important theoretical significance and practical value for the analysis of its topology, its function, and the prediction of its behaviour.
In recent years many community detection methods have been proposed. These algorithms fall roughly into three classes: graph-based hierarchical partitioning methods, clustering-based methods, and optimization-based methods. Clustering is the traditional approach to detecting network community structure: the algorithm partitions the network into several subgroups based on the similarity or connection strength between nodes. Widely applied early clustering methods include the GN algorithm based on edge betweenness proposed by Girvan et al., the label propagation algorithm (LPA), and degree-based local methods, all of which are used extensively in community discovery.
The k-means clustering algorithm is a partition-based clustering algorithm proposed by MacQueen and a typical unsupervised learning algorithm that is used very widely in data mining and machine learning. k-means clusters data according to a similarity measure between data samples; its advantages are simplicity, speed, low complexity, and the ability to handle large-scale data. However, the performance of k-means depends on the positions of the initial cluster centres, whose selection largely determines the quality of the clustering result. Likewise, the association degree between nodes strongly affects the convergence speed and accuracy of k-means. Choosing suitable initial cluster centres and defining a suitable node association degree can effectively reduce the number of iterations of k-means and improve the accuracy of the community division result.
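The initialization sensitivity described above can be seen with a generic sketch (plain 1-D k-means, unrelated to the patent's seeding strategy; the data points and initial centres below are made up for illustration): two runs that differ only in their initial centres converge to different partitions.

```python
def kmeans(points, centers, iters=100):
    """Plain k-means on 1-D points; returns the final partition."""
    for _ in range(iters):
        # Assignment step: each point joins its nearest centre.
        clusters = [[] for _ in centers]
        for p in points:
            idx = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        # Update step: each centre moves to the mean of its cluster.
        new_centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:   # converged
            break
        centers = new_centers
    return sorted(tuple(sorted(c)) for c in clusters if c)

points = [1.0, 1.2, 1.4, 9.0, 9.2, 9.4, 5.0]
good = kmeans(points, [1.2, 9.2])   # centres seeded in different groups
bad = kmeans(points, [1.0, 1.2])    # both centres seeded in the same group
print(good != bad)                  # the two runs yield different partitions
```

This is precisely the instability that a deliberate, density-aware choice of initial centres is meant to remove.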
Summary of the invention
The technical problem to be solved by the invention is to provide a k-means-based similarity community discovery method that addresses the insufficient accuracy of community division results.
The technical solution by which the present invention solves the above technical problem is as follows: a k-means-based similarity community discovery method comprising the following steps:
S1. Let the topology of the input network be G = {V, E}, where V and E are the sets of nodes and edges respectively, and input the number K of communities into which the network is to be divided;
S2. Compute each node's density and node degree Degree(v_i), and compute the product DD(v_i) of the node density and the node degree Degree(v_i);
S3. Sort the DD(v_i) values of all nodes in descending order to obtain the sequence DDSeq(G), and add the first node in DDSeq(G) to the cluster-centre node set Seed(v) as the first start node;
S4. From the node densities and the Jaccard similarity between nodes, compute the similarity matrix Jaccard(G) and the association-degree matrix DDJ(G) of the nodes in network G;
S5. Using the association-degree matrix DDJ(G), find the node in the node set S(v) whose average association degree with the nodes in the cluster-centre node set Seed(v) is smallest, and add it to the node set MinMean(v); the node in MinMean(v) with the largest DD(v_i) is then added to Seed(v) as a new cluster centre;
wherein the node set S(v) is the set of nodes of the input network G after removing all nodes of the cluster-centre node set Seed(v), i.e. S(v) = G(v) − Seed(v), where G(v) is the set of all nodes in network G;
S6. When the number of nodes in the cluster-centre node set Seed(v) equals the number K of communities to be divided, i.e. K cluster centres have been obtained, go to step S7; otherwise return to step S5;
S7. Iterate the k-means algorithm over the K cluster centres and the similarity matrix Jaccard(G) to obtain new cluster centres and the K clusters Cluster(1), Cluster(2), …, Cluster(K);
S8. When the change in distance of the cluster centres is less than the threshold m = 1.0 or the k-means algorithm reaches the maximum number of iterations, go to step S9; otherwise return to step S7;
S9. Each cluster satisfying step S8 is taken as one community, giving the communities Com(1), Com(2), …, Com(K).
Based on the above technical solution, the present invention can also be improved as follows.
Further, the specific steps of step S2 are:
S21. Form the subgraph G' from the node v_i in network G together with the nodes within k forward hops of v_i, and compute the node density according to formula (1);
In formula (1), i is the node index, k is the number of forward hops from v_i, V' is the set of nodes in subgraph G', |V'| is the number of nodes in V', E' is the set of edges in subgraph G', and |E'| is the number of edges in E';
S22. Compute the node degree Degree(v_i) of node v_i in network G, where Degree(v_i) is the number of edges incident to node v_i;
S23. Compute the product DD(v_i) of the node density and the node degree Degree(v_i):
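A minimal sketch of steps S21–S23 on a hypothetical 6-node graph. Formula (1) is not reproduced in this text, so taking the density of the k-hop subgraph G' to be |E'|/|V'| is an assumption; only the overall shape (k-hop subgraph density times degree, then a descending DD ranking) follows the description above.

```python
from collections import deque

# Hypothetical example graph as an undirected adjacency dict.
ADJ = {
    1: [2, 3], 2: [1, 3], 3: [1, 2, 4],
    4: [3, 5], 5: [4, 6], 6: [5],
}

def khop_subgraph(adj, v, k):
    """Nodes within k hops of v (BFS), plus the edges among them."""
    seen, frontier = {v}, deque([(v, 0)])
    while frontier:
        u, d = frontier.popleft()
        if d == k:
            continue
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                frontier.append((w, d + 1))
    edges = {frozenset((a, b)) for a in seen for b in adj[a] if b in seen}
    return seen, edges

def dd(adj, v, k=2):
    """DD(v) = density(G') * Degree(v); density assumed to be |E'|/|V'|."""
    nodes, edges = khop_subgraph(adj, v, k)
    return (len(edges) / len(nodes)) * len(adj[v])

# Step S3: rank nodes by DD in descending order; the top node seeds Seed(v).
dd_seq = sorted(ADJ, key=lambda v: dd(ADJ, v), reverse=True)
print(dd_seq[0])   # node 3 has the largest density-degree product here
```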
Further, the specific steps of step S4 are:
S41. From the node densities of all nodes, construct the n-dimensional node-density vector according to formula (3);
In formula (3), k = 3;
S42. Construct the n-dimensional similarity matrix Jaccard(G) of the nodes v_1 ~ v_n in network G according to formula (4);
In formula (4), λ = 10 and JacSim(v_i, v_j) is the Jaccard correlation coefficient of nodes v_i and v_j, computed as
JacSim(v_i, v_j) = |Γ(v_i) ∩ Γ(v_j)| / |Γ(v_i) ∪ Γ(v_j)|    (5)
In formula (5), Γ(v_i) and Γ(v_j) are the neighbour-node sets of v_i and v_j respectively, |Γ(v_i) ∩ Γ(v_j)| is the number of common neighbours of v_i and v_j, and |Γ(v_i) ∪ Γ(v_j)| is the number of joint neighbours of v_i and v_j;
S43. Construct the association-degree matrix DDJ(G) of the nodes in network G from the n-dimensional node-density vector and the similarity matrix Jaccard(G) according to formula (6):
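Formula (5) is the standard Jaccard coefficient on neighbour sets, so the per-pair computation of step S42 can be sketched as below. The full matrix of formula (4), which additionally involves λ = 10 and the node-density vector of formula (3), is not reproduced in this text, so only JacSim and a plain pairwise matrix are shown; the graph is a made-up example.

```python
# Hypothetical example graph; neighbour sets Γ(v) as Python sets.
ADJ = {
    1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4},
    4: {3, 5}, 5: {4, 6}, 6: {5},
}

def jacsim(adj, vi, vj):
    """Formula (5): |Γ(vi) ∩ Γ(vj)| / |Γ(vi) ∪ Γ(vj)|."""
    inter = adj[vi] & adj[vj]
    union = adj[vi] | adj[vj]
    return len(inter) / len(union)

nodes = sorted(ADJ)
# n×n pairwise similarity matrix; row i is node i's similarity vector.
J = [[jacsim(ADJ, a, b) for b in nodes] for a in nodes]
print(jacsim(ADJ, 1, 2))   # Γ(1)={2,3}, Γ(2)={1,3}: share {3} of {1,2,3}
```

Each row of such a matrix is the "similarity vector" that the Euclidean distance of step S7 later operates on.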
Further, the average association degree R_p of a node in step S5 is computed as
R_p = (Σ_{q=1}^{|Seed(v)|} R_qp) / |Seed(v)|    (7)
In formula (7), R_qp is the node association degree between node v_p and cluster centre s_q, q = 1, 2, …, |Seed(v)|, p = 1, 2, …, |S(v)|, and |Seed(v)| is the number of nodes in the cluster-centre node set Seed(v).
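The selection rule of step S5 can be sketched as follows, assuming the association degrees R_qp are read from DDJ(G) and averaged arithmetically per formula (7); the DDJ and DD values below are hypothetical stand-ins, not values from the patent.

```python
def next_center(ddj, dd_vals, seeds, candidates):
    """Step S5: collect the candidates whose mean association degree with
    the current centres is smallest (MinMean(v)), then pick the one with
    the largest DD value as the new cluster centre."""
    def mean_assoc(p):
        return sum(ddj[(q, p)] for q in seeds) / len(seeds)
    low = min(mean_assoc(p) for p in candidates)
    min_mean = [p for p in candidates if mean_assoc(p) == low]
    return max(min_mean, key=lambda p: dd_vals[p])

# Hypothetical association degrees ddj[(centre, node)] and DD values.
ddj = {('a', 'x'): 0.9, ('a', 'y'): 0.2, ('a', 'z'): 0.2}
dd_vals = {'x': 3.0, 'y': 1.0, 'z': 2.0}
# 'y' and 'z' tie for the lowest mean association; 'z' has the larger DD.
print(next_center(ddj, dd_vals, ['a'], ['x', 'y', 'z']))
```

Picking a low-association node keeps the new centre far from the existing centres, while the DD tie-break keeps isolated nodes out of Seed(v).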
Further, the specific steps of step S7 are:
S71. Compute the similarity Euclidean distance d(jv_a, jv_b) between each node in network G and each of the K cluster centres:
d(jv_a, jv_b) = √(Σ_i (jv_ai − jv_bi)²)    (8)
In formula (8), jv_a and jv_b are the similarity vectors of nodes v_a and v_b in the similarity matrix Jaccard(G);
S72. Assign each node to the cluster centre at the smallest Euclidean distance, obtaining the K clusters Cluster(1), Cluster(2), …, Cluster(K);
S73. Compute the cluster centre of each cluster to obtain the new cluster centres; the new cluster centre C_K is computed as
C_K = (Σ_{n=1}^{|Cluster(K)|} Jaccard(G)_nK) / |Cluster(K)|    (9)
In formula (9), Jaccard(G)_nK is the row vector of node v_n of the K-th cluster in the similarity matrix Jaccard(G), n = 1, 2, …, |Cluster(K)|, and |Cluster(K)| is the number of nodes in the K-th cluster.
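Steps S71–S73 together with the S8 stopping rule can be sketched as follows, assuming the cluster centres live in the row space of Jaccard(G) and the centre update of formula (9) is the component-wise mean of the member rows; the `rows` values are made-up similarity vectors, not data from the patent.

```python
import math

def euclid(a, b):
    """Formula (8): Euclidean distance between two similarity vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans_rows(rows, centers, m=1.0, max_iter=40000):
    """Steps S7–S8: assign each node's similarity vector to its nearest
    centre, recompute each centre as the mean row of its cluster, and stop
    once the total centre movement drops below the threshold m."""
    for _ in range(max_iter):
        clusters = [[] for _ in centers]
        for r in rows:
            k = min(range(len(centers)), key=lambda i: euclid(r, centers[i]))
            clusters[k].append(r)
        new = [[sum(col) / len(c) for col in zip(*c)] if c else centers[i]
               for i, c in enumerate(clusters)]
        if sum(euclid(a, b) for a, b in zip(centers, new)) < m:
            return clusters           # S8 satisfied: clusters become communities
        centers = new
    return clusters

# Three nodes: the first two have similar Jaccard rows, the third does not.
rows = [[1.0, 0.9, 0.1], [0.9, 1.0, 0.2], [0.1, 0.2, 1.0]]
out = kmeans_rows(rows, centers=[rows[0], rows[2]])
print(len(out[0]), len(out[1]))   # the two similar rows share a cluster
```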
Further, the maximum number of iterations in step S8 is 40000.
The beneficial effects of the invention are as follows: the invention abandons the random selection of initial cluster centres used by the traditional k-means clustering method and instead determines the initial cluster centres by combining inter-node similarity, node density, and node degree. This avoids the unstable clustering results caused by randomly selected initial centres, prevents isolated nodes from being chosen as initial cluster centres, and guarantees that the K initial cluster centres found are not so close to one another that the clustering quality suffers; it thereby both reduces the number of iterations in the clustering process and improves the accuracy of community division. In addition, the invention computes the Euclidean distance between nodes from their Jaccard similarity vectors, which assigns nodes well to their respective communities and yields a highly accurate community division.
Detailed description of the invention
Fig. 1 is the overall flow chart of the present invention;
Fig. 2 is the flow chart of step S2 of the present invention;
Fig. 3 is the flow chart of step S4 of the present invention;
Fig. 4 is the flow chart of step S7 of the present invention;
Fig. 5 shows the example network and node subgraph used to illustrate node density;
Fig. 6 is a schematic diagram of the distribution of node density of the present invention in Zachary's karate club;
Fig. 7 is a schematic diagram of the distribution of the product of node density and node degree of the present invention in Zachary's karate club.
Specific embodiment
The principles and features of the present invention are described below with reference to the accompanying drawings; the given examples serve only to explain the present invention and are not intended to limit its scope.
As shown in Figure 1, a k-means-based similarity community discovery method comprises the following steps:
S1. Let the topology of the input network be G = {V, E}, where V and E are the sets of nodes and edges respectively, and input the number K of communities into which the network is to be divided;
S2. Compute each node's density and node degree Degree(v_i), and compute the product DD(v_i) of the node density and the node degree Degree(v_i);
S3. Sort the DD(v_i) values of all nodes in descending order to obtain the sequence DDSeq(G), and add the first node in DDSeq(G) to the cluster-centre node set Seed(v) as the first start node;
S4. From the node densities and the Jaccard similarity between nodes, compute the similarity matrix Jaccard(G) and the association-degree matrix DDJ(G) of the nodes in network G;
S5. Using the association-degree matrix DDJ(G), find the node in the node set S(v) whose average association degree with the nodes in the cluster-centre node set Seed(v) is smallest, and add it to the node set MinMean(v); the node in MinMean(v) with the largest DD(v_i) is then added to Seed(v) as a new cluster centre;
wherein the node set S(v) is the set of nodes of the input network G after removing all nodes of the cluster-centre node set Seed(v), i.e. S(v) = G(v) − Seed(v), where G(v) is the set of all nodes in network G;
S6. When the number of nodes in the cluster-centre node set Seed(v) equals the number K of communities to be divided, i.e. K cluster centres have been obtained, go to step S7; otherwise return to step S5;
S7. Iterate the k-means algorithm over the K cluster centres and the similarity matrix Jaccard(G) to obtain new cluster centres and the K clusters Cluster(1), Cluster(2), …, Cluster(K);
S8. When the change in distance of the cluster centres is less than the threshold m = 1.0 or the k-means algorithm reaches the maximum number of iterations, go to step S9; otherwise return to step S7;
wherein the maximum number of iterations is 40000;
S9. Each cluster satisfying step S8 is taken as one community, giving the communities Com(1), Com(2), …, Com(K).
As shown in Fig. 2, the specific steps of step S2 are:
S21. As shown in Fig. 5, where panel (a) is the example network for node density and panel (b) is the 2-hop subgraph of node 1 in panel (a): form the subgraph G' from the node v_i in network G together with the nodes within k forward hops of v_i, and compute the node density according to formula (1);
In formula (1), i is the node index, k is the number of forward hops from v_i, V' is the set of nodes in subgraph G', |V'| is the number of nodes in V', E' is the set of edges in subgraph G', and |E'| is the number of edges in E';
S22. Compute the node degree Degree(v_i) of node v_i in network G, where Degree(v_i) is the number of edges incident to node v_i;
S23. Compute the product DD(v_i) of the node density and the node degree Degree(v_i):
As shown in Fig. 3, the specific steps of step S4 are:
S41. From the node densities of all nodes, construct the n-dimensional node-density vector according to formula (3);
In formula (3), k = 3;
S42. Construct the n-dimensional similarity matrix Jaccard(G) of the nodes v_1 ~ v_n in network G according to formula (4);
In formula (4), λ = 10 and JacSim(v_i, v_j) is the Jaccard correlation coefficient of nodes v_i and v_j, computed as
JacSim(v_i, v_j) = |Γ(v_i) ∩ Γ(v_j)| / |Γ(v_i) ∪ Γ(v_j)|    (5)
In formula (5), Γ(v_i) and Γ(v_j) are the neighbour-node sets of v_i and v_j respectively, |Γ(v_i) ∩ Γ(v_j)| is the number of common neighbours of v_i and v_j, and |Γ(v_i) ∪ Γ(v_j)| is the number of joint neighbours of v_i and v_j;
S43. Construct the association-degree matrix DDJ(G) of the nodes in network G from the n-dimensional node-density vector and the similarity matrix Jaccard(G) according to formula (6).
In the embodiment of the present invention, the average association degree R_p of a node in step S5 is computed as
R_p = (Σ_{q=1}^{|Seed(v)|} R_qp) / |Seed(v)|    (7)
In formula (7), R_qp is the node association degree between node v_p and cluster centre s_q, q = 1, 2, …, |Seed(v)|, p = 1, 2, …, |S(v)|, and |Seed(v)| is the number of nodes in the cluster-centre node set Seed(v).
As shown in Fig. 4, the specific steps of step S7 are:
S71. Compute the similarity Euclidean distance d(jv_a, jv_b) between each node in network G and each of the K cluster centres:
d(jv_a, jv_b) = √(Σ_i (jv_ai − jv_bi)²)    (8)
In formula (8), jv_a and jv_b are the similarity vectors of nodes v_a and v_b in the similarity matrix Jaccard(G);
S72. Assign each node to the cluster centre at the smallest Euclidean distance, obtaining the K clusters Cluster(1), Cluster(2), …, Cluster(K);
S73. Compute the cluster centre of each cluster to obtain the new cluster centres; the new cluster centre C_K is computed as
C_K = (Σ_{n=1}^{|Cluster(K)|} Jaccard(G)_nK) / |Cluster(K)|    (9)
In formula (9), Jaccard(G)_nK is the row vector of node v_n of the K-th cluster in the similarity matrix Jaccard(G), n = 1, 2, …, |Cluster(K)|, and |Cluster(K)| is the number of nodes in the K-th cluster.
As shown in Fig. 6, the distribution of the node density of the present invention with k = 2 in the Zachary's karate club network: the larger a node is drawn, the larger its node density value.
As shown in Fig. 7, the distribution of the product of node density and node degree of the present invention with k = 2 in the Zachary's karate club network: the larger a node is drawn, the larger the product of its node density and node degree.
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the invention; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included in its protection scope.
Claims (6)
1. A k-means-based similarity community discovery method, characterized by comprising the following steps:
S1. Let the topology of the input network be G = {V, E}, where V and E are the sets of nodes and edges respectively, and input the number K of communities into which the network is to be divided;
S2. Compute each node's density and node degree Degree(v_i), and compute the product DD(v_i) of the node density and the node degree Degree(v_i);
S3. Sort the DD(v_i) values of all nodes in descending order to obtain the sequence DDSeq(G), and add the first node in DDSeq(G) to the cluster-centre node set Seed(v) as the first start node;
S4. From the node densities and the Jaccard similarity between nodes, compute the similarity matrix Jaccard(G) and the association-degree matrix DDJ(G) of the nodes in network G;
S5. Using the association-degree matrix DDJ(G), find the node in the node set S(v) whose average association degree with the nodes in the cluster-centre node set Seed(v) is smallest, and add it to the node set MinMean(v); the node in MinMean(v) with the largest DD(v_i) is then added to Seed(v) as a new cluster centre;
wherein the node set S(v) is the set of nodes of the input network G after removing all nodes of the cluster-centre node set Seed(v), i.e. S(v) = G(v) − Seed(v), where G(v) is the set of all nodes in network G;
S6. When the number of nodes in the cluster-centre node set Seed(v) equals the number K of communities to be divided, i.e. K cluster centres have been obtained, go to step S7; otherwise return to step S5;
S7. Iterate the k-means algorithm over the K cluster centres and the similarity matrix Jaccard(G) to obtain new cluster centres and the K clusters Cluster(1), Cluster(2), …, Cluster(K);
S8. When the change in distance of the cluster centres is less than the threshold m = 1.0 or the k-means algorithm reaches the maximum number of iterations, go to step S9; otherwise return to step S7;
S9. Each cluster satisfying step S8 is taken as one community, giving the communities Com(1), Com(2), …, Com(K).
2. The k-means-based similarity community discovery method according to claim 1, characterized in that the specific steps of step S2 are:
S21. Form the subgraph G' from the node v_i in network G together with the nodes within k forward hops of v_i, and compute the node density according to formula (1);
In formula (1), i is the node index, k is the number of forward hops from v_i, V' is the set of nodes in subgraph G', |V'| is the number of nodes in V', E' is the set of edges in subgraph G', and |E'| is the number of edges in E';
S22. Compute the node degree Degree(v_i) of node v_i in network G, where Degree(v_i) is the number of edges incident to node v_i;
S23. Compute the product DD(v_i) of the node density and the node degree Degree(v_i):
3. The k-means-based similarity community discovery method according to claim 1, characterized in that the specific steps of step S4 are:
S41. From the node densities of all nodes, construct the n-dimensional node-density vector according to formula (3);
In formula (3), k = 3;
S42. Construct the n-dimensional similarity matrix Jaccard(G) of the nodes v_1 ~ v_n in network G according to formula (4);
In formula (4), λ = 10 and JacSim(v_i, v_j) is the Jaccard correlation coefficient of nodes v_i and v_j, computed as
JacSim(v_i, v_j) = |Γ(v_i) ∩ Γ(v_j)| / |Γ(v_i) ∪ Γ(v_j)|    (5)
In formula (5), Γ(v_i) and Γ(v_j) are the neighbour-node sets of v_i and v_j respectively, |Γ(v_i) ∩ Γ(v_j)| is the number of common neighbours of v_i and v_j, and |Γ(v_i) ∪ Γ(v_j)| is the number of joint neighbours of v_i and v_j;
S43. Construct the association-degree matrix DDJ(G) of the nodes in network G from the n-dimensional node-density vector and the similarity matrix Jaccard(G) according to formula (6).
4. The k-means-based similarity community discovery method according to claim 1, characterized in that the average association degree R_p of a node in step S5 is computed as
R_p = (Σ_{q=1}^{|Seed(v)|} R_qp) / |Seed(v)|    (7)
In formula (7), R_qp is the node association degree between node v_p and cluster centre s_q, q = 1, 2, …, |Seed(v)|, p = 1, 2, …, |S(v)|, and |Seed(v)| is the number of nodes in the cluster-centre node set Seed(v).
5. The k-means-based similarity community discovery method according to claim 1, characterized in that the specific steps of step S7 are:
S71. Compute the similarity Euclidean distance d(jv_a, jv_b) between each node in network G and each of the K cluster centres:
d(jv_a, jv_b) = √(Σ_i (jv_ai − jv_bi)²)    (8)
In formula (8), jv_a and jv_b are the similarity vectors of nodes v_a and v_b in the similarity matrix Jaccard(G);
S72. Assign each node to the cluster centre at the smallest Euclidean distance, obtaining the K clusters Cluster(1), Cluster(2), …, Cluster(K);
S73. Compute the cluster centre of each cluster to obtain the new cluster centres; the new cluster centre C_K is computed as
C_K = (Σ_{n=1}^{|Cluster(K)|} Jaccard(G)_nK) / |Cluster(K)|    (9)
In formula (9), Jaccard(G)_nK is the row vector of node v_n of the K-th cluster in the similarity matrix Jaccard(G), n = 1, 2, …, |Cluster(K)|, and |Cluster(K)| is the number of nodes in the K-th cluster.
6. The k-means-based similarity community discovery method according to claim 1, characterized in that the maximum number of iterations in step S8 is 40000.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810593224.6A CN108833158A (en) | 2018-06-08 | 2018-06-08 | A k-means-based similarity community discovery method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108833158A true CN108833158A (en) | 2018-11-16 |
Family
ID=64144961
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810593224.6A Pending CN108833158A (en) | 2018-06-08 | 2018-06-08 | A kind of similitude community discovery method based on k-means |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108833158A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117808616A (en) * | 2024-02-28 | 2024-04-02 | 中国传媒大学 | Community discovery method and system based on graph embedding and node affinity |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102929942A (en) * | 2012-09-27 | 2013-02-13 | 福建师范大学 | Social network overlapping community finding method based on ensemble learning |
CN103888541A (en) * | 2014-04-01 | 2014-06-25 | 中国矿业大学 | Method and system for discovering cells fused with topology potential and spectral clustering |
Non-Patent Citations (2)
Title |
---|
HUIJIE YANG, ET AL.: "Edge-content Based Community Detection Algorithm on Email Network", 2010 IEEE International Conference on Intelligent Computing and Intelligent Systems * |
WU Longju: "Research on Community Discovery Algorithms Based on Complex Networks", China Master's Theses Full-text Database * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zhang et al. | WOCDA: A whale optimization based community detection algorithm | |
CN110378366A (en) | A kind of cross-domain image classification method based on coupling knowledge migration | |
CN108334580A (en) | A kind of community discovery method of combination link and attribute information | |
CN106991295B (en) | A kind of protein network module method for digging based on multiple-objection optimization | |
Salama et al. | A novel ant colony algorithm for building neural network topologies | |
Zhou et al. | A density based link clustering algorithm for overlapping community detection in networks | |
Li et al. | A link clustering based memetic algorithm for overlapping community detection | |
You et al. | Early-bird gcns: Graph-network co-optimization towards more efficient gcn training and inference via drawing early-bird lottery tickets | |
CN113297429B (en) | Social network link prediction method based on neural network architecture search | |
CN108268603A (en) | A kind of community discovery method based on core member's identification | |
CN110263236A (en) | Social network user multi-tag classification method based on dynamic multi-view learning model | |
CN106780501A (en) | Based on the image partition method for improving artificial bee colony algorithm | |
CN103164487B (en) | A kind of data clustering method based on density and geological information | |
Hu et al. | A new algorithm CNM-Centrality of detecting communities based on node centrality | |
CN108833158A (en) | A kind of similitude community discovery method based on k-means | |
Chehreghani | Efficient computation of pairwise minimax distance measures | |
CN112464107B (en) | Social network overlapping community discovery method and device based on multi-label propagation | |
CN108287866A (en) | Community discovery method based on node density in a kind of large scale network | |
CN112925991A (en) | Community detection method based on similarity between nodes in social network | |
CN108388769A (en) | The protein function module recognition method of label propagation algorithm based on side driving | |
Peng et al. | Graphangel: Adaptive and structure-aware sampling on graph neural networks | |
Hu et al. | An algorithm Walktrap-SPM for detecting overlapping community structure | |
CN109218184A (en) | Router home AS recognition methods based on port and structural information | |
CN109033746A (en) | A kind of protein complex recognizing method based on knot vector | |
CN114817653A (en) | Unsupervised community discovery method based on central node graph convolutional network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20181116 |