CN112529148B - Intelligent QoS inference method based on graph neural network - Google Patents

Intelligent QoS inference method based on graph neural network

Info

Publication number
CN112529148B
Authority
CN
China
Prior art keywords
gnn
network
inference
node
qos
Prior art date
Legal status
Active
Application number
CN202011311810.0A
Other languages
Chinese (zh)
Other versions
CN112529148A (en)
Inventor
黄万伟
张建伟
黄敏
王博
孙海燕
陈明
张王卫
袁博
Current Assignee
Zhengzhou University of Light Industry
Original Assignee
Zhengzhou University of Light Industry
Priority date
Filing date
Publication date
Application filed by Zhengzhou University of Light Industry
Priority to CN202011311810.0A
Publication of CN112529148A
Application granted
Publication of CN112529148B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/04 Inference or reasoning models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides an intelligent QoS inference method based on a graph neural network, which solves the problem that existing neural networks cannot adapt to dynamic adjustment of the network structure. The method comprises the following steps: constructing the graph structure of the GNN according to the network topology and mapping state information in the network to feature values of the nodes and edges of the graph structure; collecting state information of the devices in the network topology to form a data set, inputting the collected data set into the GNN model established in step one for training, and storing the optimal node and edge neural-network parameters to obtain the GNN inference model; and inputting state data collected in real time from devices of the real network into the GNN inference model to realize QoS inference for the current state. The method features simple modeling, a small amount of computation, higher accuracy and better generalized inference capability; it solves the problem that existing intelligent QoS inference methods must be retrained when facing a new network topology, and it is highly practical in real networks.

Description

Intelligent QoS inference method based on graph neural network
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an intelligent QoS inference method based on a graph neural network.
Background
In network applications such as local area networks, metropolitan area networks, data centers and smart power grids, computing the QoS (delay and jitter) from the current network state (for example topology, routing and traffic matrix) has important application value: it helps network operation and maintenance personnel plan the network more efficiently and reasonably and improves the service quality of the network.
Traditional methods fall into two categories: 1) computation based on mathematical models from queuing theory, which has difficulty accounting for the interference of random factors, so its QoS computation accuracy in real networks is low; 2) simulation with a network simulator such as NS3 or OMNeT++, which requires detailed configuration of the parameters of every device in the network and suffers from complex usage, long computation time and similar drawbacks.
With the rise of artificial intelligence, much research has focused on inferring QoS with AI techniques such as generative adversarial networks (GAN) and recurrent neural networks (RNN). The input data x of such a neural network is a tensor, which can be understood as a multidimensional array that must keep a fixed size during training and inference. A change in the network structure changes the dimensionality of the collected data x, which then no longer fits the neural network. These neural networks can therefore only accept input data of a specified dimension and cannot adapt to dynamic adjustment of the network structure; that is, the trained neural network is no longer applicable when the network topology changes even slightly (for example, a link is disconnected or a new host is added), and new data must be collected for retraining. This serious drawback makes it difficult to deploy AI techniques commercially in networks.
Disclosure of Invention
In view of the technical problem that existing neural networks cannot adapt to dynamic adjustment of the network structure, the invention provides an intelligent QoS inference method based on a graph neural network, which infers QoS with a graph neural network (GNN) and solves the problem that existing artificial intelligence techniques lack generalized inference capability when inferring network QoS, thereby promoting the commercial deployment of artificial intelligence techniques in networks.
To achieve this purpose, the technical scheme of the invention is realized as follows: an intelligent QoS inference method based on a graph neural network, comprising the following steps:
step one, establishing the GNN model: construct the graph structure of the GNN according to the network topology, and map state information in the network to feature values of the nodes and edges of the graph structure;
step two, training the GNN model: collect state information of the devices in the network topology to form a data set, input the collected data set into the GNN model established in step one for training, and store the optimal node and edge neural-network parameters to obtain the GNN inference model;
step three, inference: input state data collected in real time from devices of the real network into the GNN inference model to realize QoS inference for the current state.
The nodes in the GNN correspond one-to-one to the devices in the network topology; the directed edges of the GNN correspond to the links in the network topology, and one link in the network topology corresponds to two edges in the GNN.
The GNN organizes the data transfer of two neural networks, φ_v and φ_e, in a multi-edge directed graph structure, where φ_v corresponds to the neural network function on the nodes of the directed graph and φ_e corresponds to the neural network function on the edges of the directed graph.
The method of mapping the state information in the network to the feature values of the nodes and edges of the graph structure in step one is as follows: map state information in the network to feature values of the nodes and edges of the GNN. The data x is the input of the GNN; in data x, the feature value of a GNN node is the characteristic information of the device corresponding to that node, and the feature value of a GNN edge is the information of the corresponding link. The output data of the GNN, i.e. the label y, is the feature value v'_i of the GNN nodes, i.e. the QoS of the network.
The characteristic information of the device corresponding to a node is the characteristic information of a switch, router or computer, including but not limited to switching capacity, traffic information and routing information; the information of a link includes but is not limited to the transmission rate of the link and the congestion state of the port to which the link is connected; the QoS of the network comprises delay and jitter.
For a node in the GNN, assuming that the switching capacity of a device is s pps and its incoming and outgoing traffic is f_input Mbps and f_output Mbps, the input feature value of the node i corresponding to that device is (s, f_input, f_output), denoted v_i; the output feature value is the delay or jitter data value corresponding to the node. The output feature value includes the theoretical value v'_i computed by the GNN and the actual value v̂_i collected from the network.
For an edge in the GNN, assuming that the transmission rate of a link is c Mbps, the average queue length at the connected port is l, and the packet loss rate is d, the input feature value of the edge corresponding to that link is (c, l, d), denoted e_k.
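To make this mapping concrete, the following minimal sketch (in Python; the class name GNNGraph, the function build_gnn_graph and the data layout are illustrative assumptions, not part of the claimed method) builds the GNN graph structure from a device table and a link list, giving each device node the feature value (s, f_input, f_output) and each bidirectional link two directed edges with feature value (c, l, d):

```python
from dataclasses import dataclass, field

@dataclass
class GNNGraph:
    """GNN graph structure built from a network topology (illustrative sketch)."""
    node_features: dict = field(default_factory=dict)  # node id -> (s, f_input, f_output)
    edge_features: dict = field(default_factory=dict)  # edge id -> (c, l, d)
    edge_src: dict = field(default_factory=dict)       # edge id -> node at the edge start point
    edge_dst: dict = field(default_factory=dict)       # edge id -> node at the edge end point

def build_gnn_graph(devices, links):
    """devices: {device_id: (s_pps, f_input_mbps, f_output_mbps)}
    links:   [(dev_a, dev_b, c_mbps, avg_queue_len, loss_rate), ...]
    Each device maps to one node; each bidirectional link maps to two directed edges."""
    g = GNNGraph()
    for dev_id, (s, f_in, f_out) in devices.items():
        g.node_features[dev_id] = (s, f_in, f_out)      # v_i = (s, f_input, f_output)
    k = 0
    for a, b, c, l, d in links:
        for src, dst in ((a, b), (b, a)):               # one link -> two directed edges
            g.edge_features[k] = (c, l, d)              # e_k = (c, l, d)
            g.edge_src[k], g.edge_dst[k] = src, dst
            k += 1
    return g
```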
The current value of each node, i.e. the input feature value (s, f_input, f_output), is v_i, and the current value of each edge is e_k. In one round of the loop computation, the edge computation is

e'_k = φ_e(e_k, v_src(k), v_dst(k)),

where v_src(k) is the value of the node connected to the start point of the directed edge and v_dst(k) is the value of the node connected to the end point of the directed edge. The computation on a node is

v'_i = φ_v(v_i, Σ_{k ∈ neighbor(v_i)} e'_k),

where neighbor(v_i) denotes the edges connected to node v_i. With input data (e_0, e_1, ...), (v_0, v_1, ...) for every edge and every node, repeating the calculation for several rounds yields the output data (e'_0, e'_1, ...), (v'_0, v'_1, ...); (v'_0, v'_1, ...) represents the QoS information.

By computing the error between the theoretical values (v'_0, v'_1, ...) and the actual values (v̂_0, v̂_1, ...),

Error = (1/N_n) Σ_{i=1}^{N_n} (v'_i − v̂_i)²,

and optimizing the parameters of φ_v and φ_e by gradient descent according to the back-propagation algorithm, the training of the neural network is completed; v̂_i is the actual value collected from the network.
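A minimal sketch of one round of this loop computation and of the training step is given below in Python/PyTorch. The two-layer fully connected networks chosen for φ_e and φ_v, their hidden sizes, the aggregation over edges entering a node as the reading of neighbor(v_i), and the mean-squared error are assumptions made for the sketch, not prescriptions of the patent:

```python
import torch
import torch.nn as nn

NODE_DIM, EDGE_DIM, EDGE_OUT, QOS_DIM = 3, 3, 8, 1   # feature sizes; hidden sizes are hyper-parameters

# phi_e: shared edge network; phi_v: shared node network (all edges/nodes reuse the same parameters).
phi_e = nn.Sequential(nn.Linear(EDGE_DIM + 2 * NODE_DIM, 16), nn.ReLU(), nn.Linear(16, EDGE_OUT))
phi_v = nn.Sequential(nn.Linear(NODE_DIM + EDGE_OUT, 16), nn.ReLU(), nn.Linear(16, QOS_DIM))

def gnn_round(v, e, src, dst):
    """One round: e'_k = phi_e(e_k, v_src(k), v_dst(k)); v'_i = phi_v(v_i, sum of connected e'_k)."""
    e_out = phi_e(torch.cat([e, v[src], v[dst]], dim=1))                     # edge update
    agg = torch.zeros(v.shape[0], e_out.shape[1]).index_add(0, dst, e_out)   # sum e'_k per destination node
    v_out = phi_v(torch.cat([v, agg], dim=1))                                # node update -> QoS estimate per node
    return v_out, e_out

# Toy tensors: 4 nodes, 6 directed edges (placeholder values, shapes only).
v = torch.rand(4, NODE_DIM); e = torch.rand(6, EDGE_DIM)
src = torch.tensor([0, 1, 1, 2, 2, 3]); dst = torch.tensor([1, 0, 2, 1, 3, 2])
actual_qos = torch.rand(4, QOS_DIM)                              # actual values collected from the network

opt = torch.optim.Adam(list(phi_e.parameters()) + list(phi_v.parameters()), lr=1e-3)
opt.zero_grad()
v_pred, _ = gnn_round(v, e, src, dst)
loss = ((v_pred - actual_qos) ** 2).mean()                       # Error = (1/N_n) * sum_i (v'_i - actual_i)^2
loss.backward()                                                  # back-propagation
opt.step()                                                       # gradient optimization of phi_v and phi_e
```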
The collected data is input into the constructed GNN model for training, and during training the neural network types of the nodes and edges in the GNN and their hyper-parameter values are adjusted to achieve the desired inference accuracy; the inference accuracy is 1 − |true value − inferred value| ÷ true value. For QoS inference, the inference accuracy needs to be above 90%.
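As an illustration of this formula, if the true delay of a node is 10 ms and the inferred value is 9.2 ms, the inference accuracy is 1 − |10 − 9.2| ÷ 10 = 92%, which satisfies the 90% requirement; an inferred value of 8.5 ms would give only 85% and would not.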
Compared with the prior art, the invention has the following beneficial effects. Compared with conventional QoS inference methods such as queuing theory and simulation, the method features simple modeling, a small amount of computation and higher accuracy, and it is an intelligent QoS inference method. Compared with existing intelligent QoS inference methods, the method is based on the graph neural network (GNN) and has better generalized inference capability, i.e. the trained inference model can be applied to networks with different topologies; it solves the problem that existing intelligent QoS inference methods must be retrained when facing a new network topology and is highly practical in real networks (the topology of a real network usually changes dynamically).
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the principle of the present invention.
Fig. 2 is a schematic diagram of GNN inference in the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings of the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention rather than all of them. All other embodiments obtained by a person skilled in the art without inventive effort on the basis of the embodiments of the present invention fall within the scope of the present invention.
As shown in fig. 1, the intelligent QoS inference method based on a graph neural network belongs to conventional supervised learning; the focus lies in the definition of the GNN graph structure and the selection of the data x and the label y. The method comprises the following steps:
step one, establishing a GNN model: and constructing a graph structure of the GNN according to the network topology structure, and mapping the state information in the network into characteristic values of nodes and edges in the graph structure.
Nodes in the GNN correspond one-to-one to devices in the network topology; the edges of the GNN are directed edges corresponding to the links in the network, and since links in a network are generally bidirectional, one link in the network corresponds to two edges in the GNN. A CNN or RNN can be regarded as a function f: given input data x, the computation outputs a label y = f(x); the function f corresponds to a set of neural network parameters φ, and training is the process of solving for this set of parameters φ. A neural network consists of a large number of neurons; the parameters of one neuron include its weights w and bias b, and the parameters of the neural network are the collection of the parameters of all neurons.
The essence of the GNN is still a neural network (it also has to be trained before it can perform inference), but unlike CNN and RNN it organizes two neural networks in a multi-edge directed graph structure, as shown in FIG. 2, denoted φ_v and φ_e respectively, where φ_v corresponds to the neural network function on the nodes of the directed graph and φ_e corresponds to the neural network function on the edges of the graph; in the present invention, the neural network functions (φ_v, φ_e) are specifically fully connected neural networks. Although the graph structure has many nodes, the neural network φ_v used by every node is the same, and likewise the neural network parameters of all the edges are the same.
As for defining the feature values, once the GNN graph structure is defined, the main task is the selection of the input data and output data, i.e. data x and label y, that is, mapping state information in the network to feature values of the nodes and edges of the GNN. The data x is the input of the GNN (a neural network is a kind of computation that takes a set of data as input and outputs a corresponding set of data). In data x, the feature value of a GNN node should cover the characteristic information of the corresponding device (a device in the network, such as a switch, a router or a computer), including but not limited to its switching capacity (how many packets it can switch per second), traffic information (how much traffic passes through the device per second in the current state), routing information, and so on. This characteristic information is collected in the network by relevant protocols and techniques such as netcfg, INT and OpenFlow. In data x, the feature value of a GNN edge should cover the information of its corresponding link, where a link is the network wire between connected devices; this includes but is not limited to the transmission rate of the link (how much traffic it can transmit per second) and the congestion state of the port to which the link is connected (the network wire is connected to a port of a router, and there is a queue on the port that stores the packets waiting to be transmitted on the wire). The output data, i.e. the label y, is the output of the GNN, namely the feature values v'_i of the GNN nodes (a feature value is a set of data characterizing a feature, i.e. a set of data of particular interest and value), which is the QoS (Quality of Service) of the network, including delay and jitter.
For a node in the GNN, assuming the switching capacity of a device is s pps and its incoming and outgoing traffic is f_input Mbps and f_output Mbps, the input feature value of the node i corresponding to that device is (s, f_input, f_output), denoted v_i. The output feature value is the corresponding delay or jitter data value, and it is of two kinds: 1) the theoretical value computed by the GNN, denoted v'_i; 2) the actual value collected from the network, denoted v̂_i.
For an edge in the GNN, assuming that the transmission rate of a link is c Mbps, the average queue length at the connected port is l, and the packet loss rate is d, the input feature value of the edge corresponding to that link is (c, l, d), denoted e_k.
Step two, training a GNN model: and (3) acquiring state information of equipment in the network topology structure to form a data set, inputting the acquired data set into the GNN model established in the first step for training, and storing optimal node and side neural network parameters to obtain the GNN inference model.
After the feature values are defined in step one, training computation can proceed according to the GNN computation process shown in fig. 2, where E is the set of input feature values of all edges of the GNN and E' is the set of their output feature values; V is the set of input feature values of all nodes of the GNN and V' is the set of their output feature values. N_e is the number of edges in the GNN graph and N_n is the number of nodes in the GNN graph.
The current value of each node, i.e. the input feature value (s, f_input, f_output), is v_i, and the current value of each edge (which may be the initial input value) is e_k. In one round of the loop computation, the edge computation is

e'_k = φ_e(e_k, v_src(k), v_dst(k)),

where v_src(k) is the value of the node connected to the start point of the directed edge and v_dst(k) is the value of the node connected to the end point of the directed edge. The computation on a node is

v'_i = φ_v(v_i, Σ_{k ∈ neighbor(v_i)} e'_k),

where neighbor(v_i) denotes the edges connected to node v_i. According to this computation process, input data (e_0, e_1, ...), (v_0, v_1, ...) is provided for every edge and every node respectively, and repeating the calculation for several rounds yields the output data (e'_0, e'_1, ...), (v'_0, v'_1, ...).

Only the output feature values of the nodes, i.e. (v'_0, v'_1, ...), indicate the QoS information. By computing the error between the theoretical values (v'_0, v'_1, ...) and the actual values (v̂_0, v̂_1, ...),

Error = (1/N_n) Σ_{i=1}^{N_n} (v'_i − v̂_i)²,

and optimizing the parameters of φ_v and φ_e by gradient descent according to the back-propagation algorithm, the training of the neural network is completed.
The input and output data are collected in a real network. The collected data should cover as many different network topologies as possible, so that the GNN model can learn better generalization capability. The network topology is the shape of the network, for example how many switches, routers and computers it contains. When the network topology changes, the topology of the corresponding GNN graph changes accordingly. Even if the size of the collected data changes because the network topology has changed, the GNN can still adapt to the change in the data through the corresponding change of its graph, so the data can still be processed.
The collected data is input into the constructed GNN model for training; during training, the neural network types of the nodes and edges in the GNN and their hyper-parameters may be adjusted appropriately to achieve the desired inference accuracy. At present, hyper-parameter adjustment is mostly done manually based on experience, with repeated attempts to find suitable hyper-parameter values. Even after the functions have been fixed as fully connected neural networks, those networks still have a size (for example the number of layers and the number of neurons per layer), and setting this size is the setting of the hyper-parameters.
State information of the devices in a real network is collected, i.e. the structure and feature values of the GNN model defined in step one, and the established GNN model is trained extensively until the inference accuracy during training meets the user's requirements. The inference accuracy is 1 − |true value − inferred value| ÷ true value. The required accuracy depends on the user; for example, a face recognition system needs different recognition accuracies in different application scenarios, but for QoS inference the inference accuracy generally needs to be above 90%. When the GNN reaches a certain accuracy after training, satisfying the usage condition means that the current inference accuracy is greater than that value. Data x collected from the real network is then input to the GNN model, and the output of the GNN model is a QoS inference meeting the required accuracy.
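As an illustration of step two, the following sketch trains until the mean inference accuracy exceeds the 90% threshold and then stores the node and edge neural-network parameters. It reuses the illustrative gnn_round, phi_v and phi_e from the earlier sketch; the dataset layout and the stopping rule are assumptions, not prescriptions of the patent:

```python
import torch

def inference_accuracy(pred, true):
    """accuracy = 1 - |true value - inferred value| / true value, averaged over nodes (true values non-zero)."""
    return (1.0 - (true - pred).abs() / true.abs()).mean().item()

def train_until_accurate(dataset, opt, target=0.90, max_epochs=500):
    """dataset: list of (v, e, src, dst, actual_qos) samples collected from various topologies."""
    for _ in range(max_epochs):
        accs = []
        for v, e, src, dst, actual in dataset:
            opt.zero_grad()
            pred, _ = gnn_round(v, e, src, dst)        # from the earlier sketch
            loss = ((pred - actual) ** 2).mean()        # per-node squared error
            loss.backward()
            opt.step()
            accs.append(inference_accuracy(pred.detach(), actual))
        if sum(accs) / len(accs) >= target:             # stop once inference accuracy is above 90%
            break
    # Store the optimal node and edge neural-network parameters: this is the GNN inference model.
    torch.save({"phi_v": phi_v.state_dict(), "phi_e": phi_e.state_dict()}, "gnn_qos_model.pt")
```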
Step three, reasoning: and inputting the acquired state data in real-time equipment of the real network into the GNN inference model to realize QoS inference in the current state.
The GNN inference model trained in step two has inference capability; when new state data collected from the real network is input, it completes the QoS inference for the current state.
From the working principle of the GNN, once φ_v and φ_e are given, the computation can be completed regardless of how many nodes and edges (corresponding to devices and links in the network) the graph contains. That is, even if the network topology changes (edges or nodes are added to or removed from the graph), the GNN can still compute v'_i, i.e. infer the QoS of the network.
Conventional neural networks such as CNN or RNN cannot do this: their input data has a specified size (the dimensionality of the tensor), and the trained network can only accept input data of that fixed size. When the network structure changes, the size of the collected data changes (for example, if there is one device fewer, the data corresponding to that device is missing), and the CNN or RNN can no longer infer correctly. The GNN, however, can: when the network structure changes, the graph structure of the GNN is adjusted accordingly (for example, one device fewer means the corresponding node is deleted from the graph). Adjusting the graph structure does not affect the neural networks inside the edges and nodes, so the QoS of the network can still be inferred correctly without retraining.
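A hedged sketch of this behaviour, reusing the illustrative build_gnn_graph and gnn_round from the earlier snippets (device and link values below are placeholders): when a device disappears, only the graph is rebuilt, and the already-trained φ_v and φ_e are applied unchanged:

```python
import torch

# Original topology: devices {id: (s, f_input, f_output)} and links (a, b, c, l, d).
devices = {0: (1e6, 80.0, 75.0), 1: (2e6, 120.0, 110.0), 2: (1e6, 60.0, 62.0)}
links = [(0, 1, 1000, 5, 0.01), (1, 2, 1000, 3, 0.00)]

# Device 2 goes offline: drop its node and the links touching it, then rebuild the graph.
devices_new = {i: f for i, f in devices.items() if i != 2}
links_new = [lk for lk in links if 2 not in (lk[0], lk[1])]
g = build_gnn_graph(devices_new, links_new)

# The same trained phi_v / phi_e run on the new graph -- no retraining, because the node and
# edge networks are shared and independent of how many nodes and edges the graph has.
# (Remaining device ids are assumed contiguous so they can serve directly as tensor row indices.)
node_ids = sorted(g.node_features)
edge_ids = sorted(g.edge_features)
v = torch.tensor([g.node_features[i] for i in node_ids], dtype=torch.float32)
e = torch.tensor([g.edge_features[k] for k in edge_ids], dtype=torch.float32)
src = torch.tensor([g.edge_src[k] for k in edge_ids])
dst = torch.tensor([g.edge_dst[k] for k in edge_ids])
qos_pred, _ = gnn_round(v, e, src, dst)   # QoS inference on the changed topology
```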
The invention uses a GNN to model the network topology and complete QoS inference, and it has generalized inference capability, namely: after being trained on data collected in one network, it can perform inference on other networks with different topologies.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (7)

1. An intelligent QoS inference method based on a graph neural network, characterized by comprising the following steps:
step one, establishing a GNN model: constructing the graph structure of the GNN according to the network topology, and mapping state information in the network to feature values of the nodes and edges of the graph structure;
step two, training the GNN model: collecting state information of the devices in the network topology to form a data set, inputting the collected data set into the GNN model established in step one for training, and storing the optimal node and edge neural-network parameters to obtain the GNN inference model;
step three, inference: inputting state data collected in real time from devices of the real network into the GNN inference model to realize QoS inference for the current state;
the current value of each node, i.e. the input characteristic value (s, f)inupt,foutput) Is v isiCurrent value of each edge is ei(ii) a In one round of the loop calculation, the edge calculation is:
Figure FDA0003386880180000011
wherein the content of the first and second substances,
Figure FDA0003386880180000012
is the value of the node to which the start point of the directed edge is connected,
Figure FDA0003386880180000013
is the value of the node connected with the directed edge terminal; the calculations on a node are:
Figure FDA0003386880180000014
wherein, neighbor (v)i) Representation and node viA connected edge; data (e) is input for each node and each edge0,e1,...),(v0,v1,..), and calculating the output data (e ') in a plurality of cycles'0,e′1,...),(v′0,v′1,...);(v′0,v′1,..) information indicating QoS;
where s is exchange capacity, finputAs incoming flow information, foutputIs the incoming traffic information; phi is avCorresponding to the neural network function on the nodes in the directed graph, phieCorresponding to the neural network function on the edge in the directed graph;
by calculating the theoretical value (v'0,v′1,..) and actual values
Figure FDA0003386880180000015
Error of element value in (1)
Figure FDA0003386880180000016
Gradient optimization of phi according to a back-feed algorithmvAnd phieThe parameters in (3) to complete the training of the network,
Figure FDA0003386880180000017
for actual values taken from the network, NnRefers to the number of nodes in the GNN.
2. The intelligent QoS inference method based on a graph neural network according to claim 1, characterized in that the nodes in the GNN correspond one-to-one to the devices in the network topology, the directed edges of the GNN correspond to the links in the network topology, and one link in the network topology corresponds to two edges in the GNN.
3. The intelligent QoS inference method based on a graph neural network according to claim 1 or 2, characterized in that the GNN organizes the data transfer of two neural networks, φ_v and φ_e, in a multi-edge directed graph structure.
4. The intelligent QoS inference method based on a graph neural network according to claim 3, characterized in that the method of mapping state information in the network to the feature values of the nodes and edges of the graph structure in step one is as follows: mapping state information in the network to feature values of the nodes and edges of the GNN; the data x is the input of the GNN, the feature value of a GNN node in data x is the characteristic information of the device corresponding to that node, and the feature value of a GNN edge in data x is the information of the corresponding link; the output data of the GNN, i.e. the label y, is the feature value v'_i of the GNN nodes, i.e. the QoS of the network.
5. The intelligent QoS inference method based on a graph neural network according to claim 4, characterized in that the characteristic information of the device corresponding to a node is the characteristic information of a switch, a router or a computer, including but not limited to switching capacity, traffic information and routing information; the information of a link includes but is not limited to the transmission rate of the link and the congestion state of the port to which the link is connected; the QoS of the network comprises delay and jitter.
6. The intelligent QoS inference method based on a graph neural network according to claim 5, characterized in that, for a node in the GNN, assuming that the switching capacity of a device is s pps and its incoming and outgoing traffic is f_input Mbps and f_output Mbps, the input feature value of the node i corresponding to that device is (s, f_input, f_output), denoted v_i; the output feature value is the delay or jitter data value corresponding to that node, denoted v'_i;
for an edge in the GNN, assuming that the transmission rate of a link is c Mbps, the average queue length at the connected port is l and the packet loss rate is d, the input feature value of the edge corresponding to that link is (c, l, d), denoted e_k.
7. The intelligent QoS inference method based on a graph neural network according to claim 1 or 6, characterized in that the collected data is input into the constructed GNN model for training, and during training the neural network types of the nodes and edges in the GNN and their hyper-parameter values are adjusted to achieve the desired inference accuracy; the inference accuracy is 1 − |true value − inferred value| ÷ true value; for QoS inference, the inference accuracy needs to be above 90%.
CN202011311810.0A 2020-11-20 2020-11-20 Intelligent QoS inference method based on graph neural network Active CN112529148B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011311810.0A CN112529148B (en) 2020-11-20 2020-11-20 Intelligent QoS inference method based on graph neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011311810.0A CN112529148B (en) 2020-11-20 2020-11-20 Intelligent QoS inference method based on graph neural network

Publications (2)

Publication Number Publication Date
CN112529148A (en) 2021-03-19
CN112529148B (en) 2022-02-22

Family

ID=74981936

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011311810.0A Active CN112529148B (en) 2020-11-20 2020-11-20 Intelligent QoS inference method based on graph neural network

Country Status (1)

Country Link
CN (1) CN112529148B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114374655A (en) * 2021-12-22 2022-04-19 中国电信股份有限公司 Network flow characteristic extraction method, device, equipment and storage medium
CN117411806B (en) * 2023-12-13 2024-03-08 国网浙江省电力有限公司信息通信分公司 Power communication network performance evaluation method, system, equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107392129A (en) * 2017-07-13 2017-11-24 浙江捷尚视觉科技股份有限公司 Face retrieval method and system based on Softmax
CN108875030B (en) * 2018-06-25 2021-05-18 山东大学 Context uncertainty eliminating system based on hierarchical comprehensive quality index QoX and working method thereof

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GraphMF: QoS Prediction for Large Scale Blockchain Service Selection;Yuhui Li;《2020 3rd International Conference on Smart BlockChain》;20201015;第167-172页 *
Web Service QoS Prediction via Collaborative Filtering: A Survey;Zibin Zheng;《 IEEE Transactions on Services Computing》;20200518;第1-18页 *

Also Published As

Publication number Publication date
CN112529148A (en) 2021-03-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant