CN112801261A - Power data stream transmission time reasoning method based on graph neural network - Google Patents


Info

Publication number
CN112801261A
CN112801261A (application number CN202110002217.6A)
Authority
CN
China
Prior art keywords
graph
data stream
gnn
data
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110002217.6A
Other languages
Chinese (zh)
Inventor
黄万伟
张建伟
梁辉
孙海燕
王博
李玉华
陈明
楚杨阳
袁博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou University of Light Industry
Original Assignee
Zhengzhou University of Light Industry
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou University of Light Industry filed Critical Zhengzhou University of Light Industry
Priority to CN202110002217.6A
Publication of CN112801261A
Legal status: Withdrawn (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/08 Learning methods
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/04 Inference or reasoning models
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06 Energy or water supply


Abstract

The invention provides a power data stream transmission time inference method based on a graph neural network, addressing two shortcomings of existing transmission-time inference: it cannot accurately infer the transmission time of each individual data stream, and its running time is long. The method comprises the following steps. Establishing a GNN model: establish the topological structure of the GNN model according to the network structure and routing information of the power data center, and map the data stream information of the power data center to the feature values of an attribute graph in the GNN model. GNN model training: train the GNN model on the collected data set through supervised learning to obtain a GNN inference model. Inference of transmission time: map the test data collected in the power data center to the feature values of the GNN model, input them into the GNN inference model, and infer the transmission time of the data stream. The method can quickly and accurately infer the transmission time of data streams, supporting data stream transmission and scheduling decisions and thereby improving the operating efficiency of the power data center network.

Description

Power data stream transmission time reasoning method based on graph neural network
Technical Field
The invention relates to the technical field of power data centers, in particular to a power data stream transmission time reasoning method based on a graph neural network.
Background
The power data center deploys a large number of power services; electricity billing, power government affairs, real-time trading, and the like are indispensable components of the normal operation of the current power system. Using the computing, storage, and transmission resources of the power data center efficiently is an important problem: it affects both the construction benefit of the power data center and the operating efficiency of the power services. The transmission resource of the power data center refers specifically to its network resource, i.e., the data center network interconnecting its large number of servers, which is also the background technology to which the present invention is directed.
A power service usually needs several servers operating cooperatively, and this cooperation inevitably requires exchanging data, called a data flow. A data flow is transmitted over network resources and takes a certain amount of time, called the transmission time of the data flow. This transmission time is an important reference index for efficient scheduling of power services. By analogy, if an employee must handle business at several places in a city in one day and can accurately estimate the time spent on the road in any period, a route can be planned in advance and the work completed more efficiently. However, when many power services run in the power data center at once, the transmissions of the many data flows may interfere with one another, making their transmission times hard to determine.
Current inference methods for data stream transmission time fall into two classes: 1) a macroscopic mathematical model is established based on queuing theory and a corresponding calculation formula is derived; however, this method can only compute an average in the statistical sense, cannot accurately infer the transmission time of each individual data stream, and thus cannot provide the reference index needed for efficient scheduling of power services. 2) The transmission of data streams in the power data center network is simulated with simulation software such as NS3 or OMNeT++, yielding the transmission times. However, this method suffers from complicated parameter configuration (how are states in the real network mapped to parameters in the simulation software?) and excessive running time (even at ordinary simulation precision, the running time in the simulation software is usually hundreds of times the actual running time of the network), so it cannot meet the requirements of real-time scheduling.
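Limitation 1) can be made concrete with a toy queuing calculation. The M/M/1 model below is a textbook example chosen purely for illustration, not a model named in the text: its formula yields only a statistical mean, never the transmission time of an individual flow.

```python
# Illustration of limitation 1): a queuing model yields only statistical
# averages. The M/M/1 formula is an illustrative assumption; the text does
# not specify a particular queuing model.
def mm1_mean_sojourn(lam, mu):
    """Mean sojourn time W = 1/(mu - lam) of an M/M/1 queue."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (mu - lam)

# 80 flows/s offered to a resource serving 100 flows/s: the *average*
# completion time is 50 ms, but no individual flow's time can be derived.
mean_fct = mm1_mean_sojourn(lam=80.0, mu=100.0)
```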
Most AI technologies applied to networks (e.g., DGN, DRL, DBA) share a common defect: lacking generalization capability, they struggle to adapt to the dynamically changing character of the network. The trained neural network (NN) is no longer applicable when the network topology changes even slightly (e.g., a link breaks or a new host is added), and new data must be collected for retraining. Since network topologies typically change dynamically, this defect makes commercial deployment of AI technologies in networks difficult.
Graph Neural Networks (GNNs) are a generalization and extension of various graph-based neural network methods, supporting relational reasoning and combinatorial generalization. A GNN can therefore model and represent the complex relations among topology, routes, and traffic in a network, and its trained parameters generalize to any topology, any routing scheme, and variable traffic intensity.
Disclosure of Invention
Aiming at the technical problems that existing inference methods cannot accurately infer the transmission time of each data stream and have long running times, the invention provides a data stream transmission time inference method based on the graph neural network (GNN), a recent artificial intelligence technique, to infer the transmission time of data streams in the power data center, with the notable advantages of accurate results, fast calculation, and simple use.
In order to achieve the purpose, the technical scheme of the invention is realized as follows: a power data stream transmission time reasoning method based on a graph neural network comprises the following steps:
step one, establishing a GNN model: establishing a topological structure of a GNN model according to a network structure and routing information of the power data center, and mapping data flow information of a network of the power data center to a characteristic value of an attribute map in the GNN model;
step two, GNN model training: training the GNN model established in the first step through supervised learning by utilizing a data set acquired by a power data center to obtain a GNN reasoning model;
step three, reasoning transmission time: and mapping the test data acquired in the power data center into the characteristic value of the GNN model according to the step one, inputting the characteristic value into the GNN inference model obtained in the step two, and inferring to obtain the transmission time of the data stream.
The attribute graph refers to that nodes and edges of a graph have certain characteristics and are represented by tensor data with fixed dimensionality; the attribute graph in the GNN model comprises an input graph and an output graph, the input graph and the output graph have the same structure, and the input graph and the output graph are defined by objects represented by nodes and edges; each flow in the power data center is mapped to a node in an input graph or an output graph, and a directed edge between the nodes represents that a transmission path of a data flow corresponding to a source node and a transmission path of a data flow corresponding to a destination node have an intersecting link, that is, if two transmission paths of the data flows have an intersecting link, a pair of directed edges exists between the nodes corresponding to the two transmission paths.
The characteristic value of the input graph is denoted Gx; the feature value of an input-graph node is a 5-tuple <IPs, IPd, Size, Timestart, TOS>, where IPs is the IP address of the server sending the data stream, IPd is the IP address of the server receiving the data stream, Size is the size of the data stream, Timestart is the time point at which the data stream starts to be transmitted, and TOS is the service type of the data stream. The feature value of an input-graph edge is a 2-tuple <PROT, BW>, where PROT and BW denote the protocol and bandwidth of the corresponding link, respectively.
The characteristic value of the output graph is denoted Gy; the feature value of an output-graph node is a 1-tuple <FCT>, i.e. the transmission time of the data stream.
The data stream information in the power data center is preprocessed as follows: the IP address feature is mapped to a fixed position represented by a 2-tuple <Ps, Ph>, where Ph is the number of the host and Ps is the number of the access-layer switch to which the host is connected; the Size feature is in bytes; the Timestart feature takes the time Mt as its origin, where Mt is the time at which the data packet begins to be sent; TOS is defined using the differentiated services code point (DSCP), expressed as an integer between 0 and 63; PROT is the data-link-layer protocol, with Ethernet expressed as 0 and PPP as 1; BW is the link bandwidth, in bps; FCT is the completion time of the data flow, in ms.
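The preprocessing just described can be sketched as follows. The function name, argument names, and the switch_of_host lookup table are hypothetical helpers; only the mappings themselves (IP to <Ps, Ph>, Size in bytes, Timestart relative to Mt, TOS as a DSCP integer) come from the text.

```python
# Sketch of the feature preprocessing: IP -> <Ps, Ph>, Size in bytes,
# Timestart relative to Mt, TOS as a DSCP integer in [0, 63].
# preprocess_flow and switch_of_host are hypothetical, not from the patent.
def preprocess_flow(ip_s, ip_d, size_bytes, time_start, tos, m_t, switch_of_host):
    """Return the node feature 5-tuple <IPs, IPd, Size, Timestart, TOS>."""
    def ip_to_pos(ip):
        ps, ph = switch_of_host[ip]  # Ps: access-layer switch no., Ph: host no.
        return (ps, ph)

    assert 0 <= tos <= 63, "TOS is a DSCP value, an integer between 0 and 63"
    return (ip_to_pos(ip_s), ip_to_pos(ip_d),
            size_bytes,        # Size, in bytes
            time_start - m_t,  # Timestart, with Mt as the origin
            tos)

switch_of_host = {"10.0.0.1": (0, 0), "10.0.1.2": (1, 2)}  # hypothetical map
feat = preprocess_flow("10.0.0.1", "10.0.1.2", 1500, 12.5, 46,
                       m_t=12.0, switch_of_host=switch_of_host)
```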
The framework for training the GNN model in step two comprises three GN blocks; the GN blocks have the same graph topology but different neural-network parameters. The three GN blocks are a coding block Encode, a Core block Core, and a decoding block Decode. The input of the coding block receives the training data; the output of the coding block is connected to the input of the Core block; the output of the Core block is connected to the input of the decoding block; and the output of the decoding block yields the training result. The hidden information Hidden(t+1) output after t+1 iterations of the Core block takes the hidden information Hidden(t) of t iterations and the output of the coding block as input. The coding block Encode performs one calculation of the edge and node feature values in the attribute graph, equivalent to encoding; the Core block executes N rounds of message-passing processing, its input being the output of the coding block Encode and the Core block's own output from the previous step; the decoding block Decode performs one calculation of the edge and node feature values in the attribute graph, equivalent to decoding of the data.
The method for training by supervised learning in step two is as follows:
(1) Initialization: when the iteration number t = 1, take a batch of training data (Gx, Gy) from the data set G collected by the power data center, where Gx is the set of feature values of the input graph and Gy is the set of feature values of the output graph; input the feature values Gx into the coding block to obtain Hidden(0) = Encode(Gx), where Encode(Gx) denotes the calculation of the coding block Encode on Gx.
(2) Iteration: while the iteration number t < N, input Hidden(0) and Hidden(t) into the Core block to obtain Hidden(t+1) = Core(Hidden(0), Hidden(t)); input Hidden(t+1) into the decoding block Decode to obtain the output O(t) = Decode(Hidden(t+1)); add the output O(t) to the set Gy_, i.e. Gy_.append(O(t)).
Here Core() denotes the calculation function of the Core block and Decode() the calculation function of the decoding block Decode; N is the maximum number of iterations; Hidden(t+1) denotes the hidden information after t+1 iterations and Hidden(t) the hidden information after t iterations; append means that the output O(t) is added to the set Gy_.
(3) Calculate the loss between the actual output Gy_ and the true output Gy to obtain a set of losses:

    loss_set(t) = (1/|a|) * Σ |x − y|,  x ∈ a, y ∈ b

where a and b denote the sets Gy and Gy_ respectively, and x and y denote corresponding elements of the sets a and b.
(4) Calculate the average of the loss set loss_set:

    loss = (1/t) * Σ_{i=1..t} loss_set(i)

Optimize the neural network parameters in the GNN model through loss; set the iteration number t = t + 1.
(5) Stop if the iteration number t equals N or the training precision reaches 95% or more, i.e. the loss is below 5%; otherwise return to step (2).
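The five steps above can be sketched as a training skeleton. The encode/core/decode callables and the loss function are stand-ins for the neural blocks, whose internals are not specified here; real training would also update their parameters from the averaged loss.

```python
# Skeleton of the supervised training procedure, steps (1)-(5). The
# encode/core/decode callables and loss_fn are stand-ins for the neural
# blocks; parameter optimization is omitted.
def train_gnn(encode, core, decode, loss_fn, batch, n_iters):
    g_x, g_y = batch                       # (1) one batch (Gx, Gy)
    hidden0 = encode(g_x)                  #     Hidden(0) = Encode(Gx)
    hidden = hidden0
    g_y_pred, loss_set = [], []
    for t in range(n_iters):               # (2) while t < N
        hidden = core(hidden0, hidden)     #     Hidden(t+1) = Core(Hidden(0), Hidden(t))
        o_t = decode(hidden)               #     O(t) = Decode(Hidden(t+1))
        g_y_pred.append(o_t)               #     Gy_.append(O(t))
        loss_set.append(loss_fn(g_y, o_t)) # (3) loss against the true output
    loss = sum(loss_set) / len(loss_set)   # (4) average of the loss set
    return g_y_pred, loss_set, loss        # (5) caller stops at t == N or loss < 5%

# Toy run with identity-like stand-ins and a mean-absolute-error loss.
mae = lambda a, b: sum(abs(x - y) for x, y in zip(a, b)) / len(a)
preds, losses, avg = train_gnn(encode=lambda x: x,
                               core=lambda h0, h: h0,
                               decode=lambda h: h,
                               loss_fn=mae,
                               batch=([1.0, 2.0], [1.0, 2.0]),
                               n_iters=3)
```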
The method for inferring the transmission time of power data streams based on the graph neural network as claimed in claim 5, wherein the inference of transmission time in step three is as follows: collect the data stream information in the power data center and map it to the feature values Gx of the input graph according to the GNN modeling method of step one; input Gx into the trained GNN inference model for calculation to obtain the inference value Gy_p = Gy_[-1], i.e. the last value of the array Gy_; the accuracy of the inference is accuracy = 1 - loss_set[-1]/Gy, where loss_set[-1] denotes the last value in the set loss_set.
Compared with the prior art, the invention has the following beneficial effects: the method can quickly and accurately infer the transmission time of data streams, supporting data stream transmission and scheduling decisions and thereby improving the operating efficiency of the power data center network. The invention adopts a graph-neural-network modeling method and infers the transmission time of data flows from the transmission mode, routing information, and network topology, with the following notable advantages: 1) the graph neural network, one of the latest deep neural networks in artificial intelligence, automatically fits the quantitative relation between input and output, so modeling is simple; 2) the calculation process of the graph neural network is fast scientific computation, requiring far less computation between input and output than simulation methods, so calculation is fast; 3) the graph neural network adapts to dynamic changes of network topology and routing information and has good generalization capability, so it has practical value and can perform inference accurately.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic diagram of a network architecture of a power data center according to the present invention.
Fig. 2 is a topology diagram of the GNN model of the present invention.
Fig. 3 is a block diagram of GNN training according to the present invention.
Fig. 4 is a block diagram of an example of application of the GNN model of the present invention.
Fig. 5 is an experimental data diagram of the network optimization effect achieved by the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
A power data stream transmission time reasoning method based on a graph neural network comprises the following steps:
step one, establishing a GNN model: and establishing a topological structure of the GNN model according to the network structure and the routing information of the power data center, and mapping the network information and the data flow information of the power data center into characteristic values of an attribute map in the GNN model.
Establishing the GNN model means mapping the network information (including switching device parameters and routing table entries) and the data flow information to the topology and feature values of an attribute graph in the GNN. As shown in Fig. 1, the network of the power data center is a standard three-layer tree structure comprising, from top to bottom, core-layer switches, aggregation-layer switches, access-layer switches, and servers; a power service deployed on a server transmits its data streams through the access-layer, aggregation-layer, and core-layer switches.
The GNN takes the attribute map as an input, and outputs another map with different attributes after a series of neural network calculations. The attribute graph refers to a graph in which nodes and edges have certain characteristics and are expressed by tensor data of fixed dimensions. Therefore, the GNN model includes two parts, topology and eigenvalues:
1) topological structure
The input graph and the output graph in the GNN model have the same structure, and objects represented by nodes and edges can be uniformly defined. First, each flow in the data center is mapped as a node in the graph. The directed edge between the nodes indicates that the transmission path of the data stream corresponding to the source node and the transmission path of the data stream corresponding to the destination node have an intersecting link (a network cable connected between the switches), that is, if the transmission paths of two data streams have an intersecting link, a pair of directed edges exists between the nodes corresponding to the two data streams.
For example, in Fig. 1, assume that there are three flows in the data center network: f0: h0→h2, f1: h0→h8, and f2: h9→h3; their routing paths are indicated by dashed arrows. The GNN model graph corresponding to this network is shown in Fig. 2, where three nodes represent the three data streams. The path of node f0 and the path of node f1 have two intersecting links, so there are two edges from node f0 to node f1, and likewise two from node f1 to node f0. Note that although node f1 and node f2 are partially transmitted on the same physical link, their transmission paths, represented as unidirectional links, are disjoint, and therefore there is no edge between node f1 and node f2.
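The node-and-edge construction just described can be sketched as follows. The topology, switch names, and helper function are hypothetical; only the rule, one pair of directed edges per intersecting directed link, comes from the text.

```python
# Sketch of the graph construction: each flow becomes a node; every
# intersecting *directed* link between two flows' routing paths yields a
# pair of directed edges between their nodes (the link itself would carry
# the edge features <PROT, BW>).
from itertools import combinations

def build_flow_graph(paths):
    """paths: {flow_id: list of directed links (src_switch, dst_switch)}."""
    nodes = list(paths)
    edges = []  # (source flow, destination flow, shared link)
    for fa, fb in combinations(paths, 2):
        for link in set(paths[fa]) & set(paths[fb]):
            edges.append((fa, fb, link))  # one pair of directed edges
            edges.append((fb, fa, link))  # per intersecting link
    return nodes, edges

# Hypothetical paths in the spirit of Fig. 1: f0 and f1 share two directed
# links; f2 reuses one of f1's cables in the opposite direction only, so
# its node stays isolated, as in the f1/f2 case above.
paths = {
    "f0": [("h0", "s0"), ("s0", "a0"), ("a0", "s1"), ("s1", "h2")],
    "f1": [("h0", "s0"), ("s0", "a0"), ("a0", "c1"), ("c1", "a2"), ("a2", "h8")],
    "f2": [("h9", "a2"), ("a2", "c1"), ("c1", "a1"), ("a1", "h3")],
}
nodes, edges = build_flow_graph(paths)  # 3 nodes, 4 directed edges
```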
2) Characteristic value
The feature values of the nodes and edges in the graph are a series of data that should be related to the transmission time of the data stream. Each flow is taken as a basic object and represented as a node in the GNN. The characteristic value of the input graph is denoted Gx; the feature value of an input-graph node is a 5-tuple <IPs, IPd, Size, Timestart, TOS>, where IPs is the IP address of the server sending the data stream, IPd is the IP address of the server receiving the data stream, Size is the size of the data stream (i.e. the amount of data the flow must transmit), Timestart is the time point at which the data stream starts to be transmitted, and TOS is the service type of the data stream (related to its transmission priority). Timestart and TOS reflect the flow schedule, which determines the edges and edge features in the GNN: if the transmission paths of two flows intersect, there is an edge between their corresponding nodes, and the features of these edges are the state of the intersecting links. The function of the GNN model is to calculate the flow completion time (FCT) of each flow, i.e. the transmission time, which is the final output of each node. Even if the network topology, traffic matrix, or routing strategy changes, the GNN model can still represent their state. When the GNN model is generalized to different network states containing more flows or larger topologies, its accuracy needs to be verified. Based on the GNN model, the completion time under any routing or scheduling scheme can be evaluated and a better scheme selected to optimize traffic.
The feature value of an input-graph edge is a 2-tuple <PROT, BW>, where PROT and BW denote the protocol and bandwidth of the corresponding link, respectively. The characteristic value of the output graph is denoted Gy; the feature value of an output-graph node is a 1-tuple <FCT>, i.e. the transmission time of the data stream. The feature values of the edges in the output graph are not used and are not of interest here.
Since the raw data of these features may vary randomly or be offset by some amount, the features need to be preprocessed so that the GNN model can more easily learn the logical relations among them, as shown in the following table:
Feature     Preprocessing
IPs, IPd    mapped to the fixed position <Ps, Ph> (Ps: access-layer switch number; Ph: host number)
Size        in bytes
Timestart   relative to Mt, the time at which the packet begins to be sent
TOS         DSCP value, an integer between 0 and 63
PROT        data-link-layer protocol: Ethernet = 0, PPP = 1
BW          link bandwidth, in bps
FCT         flow completion time, in ms
step two, GNN model training: and (4) training the GNN model established in the first step by using a data set acquired by the power data center through supervised learning to obtain the GNN inference model.
As shown in Fig. 3, the framework for training the GNN model includes three components, namely three GN blocks, whose graph topology is identical but whose neural-network parameters differ; the output of each block is the input of the next. A GN block takes a graph as input and, through a series of calculations, outputs another graph with different attributes; the attributes are the feature values of the nodes and edges, expressed as tensors of fixed dimension. The input graph and output graph have a great influence on the training and inference of the model.
1) "Encoder" performs 1 calculation of the edge and node feature values in the attribute graph, equivalent to encoding.
2) "Core" performs N rounds of message passing processing. The input of Core is the output of Encoder and the output in the previous step of Core, like Hidden (t +1) in FIG. 3 is calculated from the input Hidden (t) and the output of Encoder.
3) "Decoder" performs 1 calculation of the edge and node feature values in the attribute map, equivalent to decoding of data.
The model is trained by supervised learning, and the pseudo-code is as follows:
[Pseudocode figure: supervised training of the GNN model. Initialize Hidden(0) = Encode(Gx); iterate Hidden(t+1) = Core(Hidden(0), Hidden(t)) and O(t) = Decode(Hidden(t+1)), accumulating losses; line 10 checks the loss threshold; lines 12-14 perform inference.]
Step three, inference of transmission time: map the test data collected in the power data center to the feature values of the GNN model according to step one, input them into the GNN inference model obtained in step two, and infer the transmission time of the data stream.
Lines 12-14 of the pseudocode are the inference part of the GNN, performed after the training accuracy reaches 95% or more (a loss of less than 5% in line 10). Inference collects the relevant data from the network, maps it to Gx according to the GNN modeling method, and then inputs it into the trained GNN for calculation to obtain the array Gy_, whose last value is used as the inference value.
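The inference step can be sketched analogously, with stand-in callables as before; only the encode-iterate-decode flow and the take-the-last-value rule come from the text.

```python
# Sketch of the inference part (pseudocode lines 12-14): map collected data
# to Gx, run the trained blocks, and take the last decoded value as Gy_p.
def infer_fct(encode, core, decode, g_x, n_iters):
    hidden0 = encode(g_x)
    hidden = hidden0
    g_y_pred = []
    for _ in range(n_iters):
        hidden = core(hidden0, hidden)
        g_y_pred.append(decode(hidden))
    return g_y_pred[-1]  # Gy_p = Gy_[-1], the last value of the array

# Identity stand-ins: the "inferred" FCTs simply echo the input here.
g_y_p = infer_fct(lambda x: x, lambda h0, h: h, lambda h: h,
                  g_x=[3.0], n_iters=4)
```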
The performance of the network under various configurations can be evaluated by means of the GNN model, and a better strategy can be explored to optimize the flow of the power data center. But this GNN needs to be trained to meet the desired prediction accuracy.
An application example of the invention is shown in Fig. 4. The GNN model module corresponds to the GNN model of the invention; the Trainer module is responsible for collecting the state data (state) and transmission completion times (FCT) from the network and then training the GNN model; the data exchanged between the two are the feature values Gx, the array Gy_, and the average loss. When the accuracy of the GNN model module meets the requirements, it can be used by the Optimizer module. The Optimizer module acquires the state data from the network, explores the transmission completion time FCT under different network configurations by means of the GNN model module, and issues the optimized configuration to the network, thereby optimizing it. For example, a test was performed in a Mininet network environment, counting the transmission completion times of data streams under different routing policies; the results are shown in Fig. 5. The results show that, compared with the shortest-path-first Baseline routing strategy and the weighted equal-cost multipath WCMP routing strategy, the heuristic routing strategy GNN explored by means of the GNN model module enables the network to complete the transmission of data streams faster, i.e. the data center network operates more efficiently.
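The Optimizer's use of the trained model as a fast evaluator can be sketched as a search over candidate configurations. The selection criterion (minimize the slowest flow's completion time) and all numbers below are illustrative assumptions; the text says only that a heuristic algorithm explores configurations with the GNN model.

```python
# Sketch of the Optimizer loop: use the trained GNN as a fast evaluator of
# candidate routing configurations and pick the best one. The criterion
# and all numbers are illustrative assumptions.
def pick_best_routing(candidates, predict_fct):
    """predict_fct(cfg) returns the predicted FCT of every flow under cfg."""
    return min(candidates, key=lambda cfg: max(predict_fct(cfg)))

# Hypothetical per-flow FCT predictions (ms) standing in for the GNN model.
predicted = {"baseline": [8.0, 12.0], "wcmp": [9.0, 10.0], "gnn": [7.0, 9.5]}
best = pick_best_routing(predicted, lambda cfg: predicted[cfg])
```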
The inventors adopt an intelligent method: a graph neural network is used to infer the behavior of data streams, a graph-neural-network model of the data streams in the power data center is established, and the training and inference process is an application of the GNN.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (7)

1. A power data stream transmission time reasoning method based on a graph neural network is characterized by comprising the following steps:
step one, establishing a GNN model: establishing a topological structure of a GNN model according to a network structure and routing information of the power data center, and mapping data flow information of a network of the power data center to a characteristic value of an attribute map in the GNN model;
step two, GNN model training: training the GNN model established in the first step through supervised learning by utilizing a data set acquired by a power data center to obtain a GNN reasoning model;
step three, reasoning transmission time: and mapping the test data acquired in the power data center into the characteristic value of the GNN model according to the step one, inputting the characteristic value into the GNN inference model obtained in the step two, and inferring to obtain the transmission time of the data stream.
2. The electric power data stream transmission time inference method based on the graph neural network as claimed in claim 1, characterized in that the attribute graph is that nodes and edges of a graph have certain characteristics and are represented by tensor data with fixed dimensions; the attribute graph in the GNN model comprises an input graph and an output graph, the input graph and the output graph have the same structure, and the input graph and the output graph are defined by objects represented by nodes and edges; each flow in the power data center is mapped to a node in an input graph or an output graph, and a directed edge between the nodes represents that a transmission path of a data flow corresponding to a source node and a transmission path of a data flow corresponding to a destination node have an intersecting link, that is, if two transmission paths of the data flows have an intersecting link, a pair of directed edges exists between the nodes corresponding to the two transmission paths.
3. The graph neural network-based power data stream transmission time inference method according to claim 2, characterized in that the feature values of the input graph are denoted G_x, and the feature value of an input graph node is a 5-tuple <IP_s, IP_d, Size, Time_start, TOS>, wherein IP_s denotes the IP address of the server sending the data stream, IP_d denotes the IP address of the server receiving the data stream, Size denotes the size of the data stream, Time_start denotes the point in time at which transmission of the data stream begins, and TOS denotes the type of service of the data stream; the feature value of an input graph edge is a 2-tuple <PROT, BW>, wherein PROT and BW denote the protocol and the bandwidth of the corresponding link, respectively;
the feature values of the output graph are denoted G_y, and the feature value of an output graph node is a 1-tuple <FCT>, i.e. the transmission completion time of the data stream.
4. The graph neural network-based power data stream transmission time inference method according to claim 3, wherein the data stream information in the power data center is preprocessed as follows: each IP address is mapped to a fixed position represented by a 2-tuple <P_s, P_h>, wherein P_h is the number of the host and P_s is the number of the access-layer switch to which the host is connected; the feature value Size is given in bytes; the feature value Time_start is given relative to the time M_t at which the data packet begins to be sent; TOS is defined using the differentiated services code point (DSCP), expressed as an integer between 0 and 63; the feature value PROT is the data link layer protocol, with Ethernet encoded as 0 and PPP as 1; the feature value BW is the link bandwidth in bps; the feature value FCT is the completion time of the data flow in ms.
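The preprocessing rules above can be sketched as follows. This is illustrative only: the switch/host numbering scheme and the function names are assumptions for the example, not part of the patent.

```python
# Sketch of the preprocessing in this claim (hypothetical helper names).

def preprocess_node(switch_no, host_no, size_bytes, m_t, start_time, dscp):
    """Build the preprocessed node tuple: (<P_s, P_h>, Size, Time_start, TOS)."""
    assert 0 <= dscp <= 63, "TOS is a DSCP value between 0 and 63"
    ip_pos = (switch_no, host_no)   # IP address mapped to fixed <P_s, P_h>
    rel_start = start_time - m_t    # Time_start measured from M_t
    return (ip_pos, size_bytes, rel_start, dscp)

def preprocess_edge(protocol, bw_bps):
    """Build the edge 2-tuple <PROT, BW>: Ethernet -> 0, PPP -> 1; BW in bps."""
    prot = {"ethernet": 0, "ppp": 1}[protocol]
    return (prot, bw_bps)

node = preprocess_node(switch_no=2, host_no=5, size_bytes=1500,
                       m_t=10.0, start_time=12.0, dscp=46)
edge = preprocess_edge("ethernet", 10**9)  # 1 Gbps Ethernet link
```

Encoding every field as a bounded number keeps the node and edge features as fixed-dimension tensors, which is what the attribute graph definition in claim 2 requires.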
5. The graph neural network-based power data stream transmission time inference method according to claim 1 or 4, characterized in that the framework for training the GNN model in step two comprises three GN blocks which share the same graph topology but have different neural network parameters; the three GN blocks comprise a coding block Encode, a Core block Core and a decoding block Decode, wherein the input end of the coding block receives the training data, the output end of the coding block is connected to the input end of the Core block, the output end of the Core block is connected to the input end of the decoding block, and the output end of the decoding block yields the training result; the hidden information Hidden(t+1) output after t+1 iterations of the Core block takes the hidden information Hidden(t) of t iterations and the output of the coding block as input; the coding block Encode performs one computation of the edge and node feature values in the attribute graph, equivalent to encoding; the Core block executes N rounds of message passing, its input being the output of the coding block Encode and the Core block's own output from the previous step; the decoding block Decode performs one computation of the edge and node feature values in the attribute graph, equivalent to decoding of the data.
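A minimal sketch of this Encode, Core, Decode structure is given below. The hidden width, tanh activations, random weights, and sum aggregation are assumptions chosen to keep the example self-contained; the claim only fixes the three-block layout and the skip connection that feeds Encode's output into every Core round.

```python
import math
import random

random.seed(0)
D = 4  # hidden width (assumption)

def matmul(vec, mat):  # vec: length d_in; mat: d_in rows x d_out cols
    return [sum(v * row[j] for v, row in zip(vec, mat)) for j in range(len(mat[0]))]

def rand_mat(d_in, d_out):
    return [[random.uniform(-0.1, 0.1) for _ in range(d_out)] for _ in range(d_in)]

W_enc, W_core, W_dec = rand_mat(5, D), rand_mat(2 * D, D), rand_mat(D, 1)

def encode(x):  # one pass over the node 5-tuple features
    return [[math.tanh(v) for v in matmul(f, W_enc)] for f in x]

def core(h0, h, edges):  # one message-passing round, reusing Encode's output h0
    agg = [[0.0] * D for _ in h]
    for src, dst in edges:  # sum messages along directed edges
        agg[dst] = [a + m for a, m in zip(agg[dst], h[src])]
    return [[math.tanh(v) for v in matmul(h0[i] + agg[i], W_core)]
            for i in range(len(h))]

def decode(h):  # one pass producing the <FCT> prediction per node
    return [matmul(hi, W_dec)[0] for hi in h]

x = [[1.0, 2.0, 1500.0, 0.0, 46.0],   # three flows, 5-tuple features each
     [2.0, 1.0, 800.0, 0.1, 0.0],
     [3.0, 4.0, 64.0, 0.2, 0.0]]
edges = [(0, 1), (1, 0)]              # flows 0 and 1 share a link
h0 = h = encode(x)
for _ in range(3):                    # N message-passing rounds in Core
    h = core(h0, h, edges)
fct = decode(h)                       # one predicted transmission time per flow
```

Concatenating Hidden(0) with the aggregated messages inside `core` mirrors the claim's statement that each Core round takes both Hidden(t) and the Encode output as input.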
6. The electric power data stream transmission time inference method based on the graph neural network as claimed in claim 5, wherein the supervised learning training in step two is performed as follows:
(1) initialization: when the iteration count t = 1, taking a batch of training data (G_x, G_y) from the data set G collected by the power data center, wherein G_x is the set of feature values of the input graph and G_y is the set of feature values of the output graph; inputting the input graph feature values G_x into the coding block to obtain Hidden(0) = Encode(G_x), wherein Encode(G_x) denotes the computation performed by the coding block Encode on the feature values G_x;
(2) iteration: while the iteration count t < N, inputting Hidden(0) and Hidden(t) into the Core block Core to obtain Hidden(t+1) = Core(Hidden(0), Hidden(t)); inputting Hidden(t+1) into the decoding block Decode to obtain the output O(t) = Decode(Hidden(t+1)); adding the output O(t) to the set G_y_, i.e. G_y_.append(O(t));
wherein Core() denotes the computation function of the Core block, Decode() denotes the computation function of the decoding block Decode, N is the maximum number of iterations, Hidden(t+1) denotes the hidden information after t+1 iterations, and Hidden(t) denotes the hidden information after t iterations; append means that the output O(t) is added to the set G_y_;
(3) calculating the loss between the actual output G_y_ and the true output G_y to obtain a loss set:

loss_set[t] = (1/|a|) · Σ |x − y|

wherein a and b represent the sets G_y and G_y_ respectively, and x and y represent corresponding elements of the sets a and b respectively;
(4) calculating the mean value of the loss set loss_set:

loss = (1/|loss_set|) · Σ_t loss_set[t]
optimizing the neural network parameters in the GNN model according to the loss; setting the iteration count t = t + 1;
(5) if the iteration count t reaches N or the training accuracy exceeds 95%, i.e. the loss falls below 5%, terminating the training; otherwise returning to step (2).
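The control flow of steps (1) through (5) can be sketched as below. To keep the example self-contained, a one-parameter linear predictor stands in for the GN blocks and a mean absolute error stands in for the loss; a real implementation would update the Encode/Core/Decode weights instead.

```python
# Toy sketch of the training loop in steps (1)-(5); the model and loss
# are stand-ins, not the patent's actual GNN.

def train(G_x, G_y, N=100, lr=0.1):
    w = 0.0                          # stand-in for the GNN parameters
    G_y_pred, loss_set = [], []
    for t in range(1, N + 1):        # iterate while t < N, steps (2)-(4)
        O_t = [w * x for x in G_x]   # Decode(Core(Hidden(0), Hidden(t)))
        G_y_pred.append(O_t)         # G_y_.append(O(t))
        # mean absolute error between true output and actual output
        loss = sum(abs(y - o) for y, o in zip(G_y, O_t)) / len(G_y)
        loss_set.append(loss)
        if loss < 0.05:              # step (5): accuracy above 95%, stop
            break
        # gradient step for the toy model's mean absolute error
        grad = sum((-x if y > o else x)
                   for x, y, o in zip(G_x, G_y, O_t)) / len(G_y)
        w -= lr * grad
    return w, loss_set

w, losses = train(G_x=[1.0, 2.0, 3.0], G_y=[2.0, 4.0, 6.0])
```

The early-stop condition mirrors step (5): training ends either when the iteration budget N is exhausted or when the loss drops below 5%.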
7. The method for inferring the transmission time of a power data stream based on a graph neural network as claimed in claim 5, wherein the inference of the transmission time in step three is performed by collecting data stream information in the power data center and mapping it to the feature values G_x of the input graph according to the GNN modeling method of step one; the feature values G_x are input into the trained GNN inference model for computation, yielding the inferred value G_y_p = G_y_[-1], i.e. the last value of the array G_y_; the accuracy of the inference is accuracy = 1 − loss_set[-1] / G_y, wherein loss_set[-1] denotes the last value in the set loss_set.
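The inference step can be sketched as follows. Note one assumption: the claim divides loss_set[-1] by G_y, which is a set, so the sketch normalizes by the mean true FCT; this interpretation is the example's, not stated in the patent.

```python
# Sketch of claim 7: take the last round's output as the prediction and
# derive an accuracy from the final loss (normalisation is an assumption).

def infer(G_y_pred_per_round, G_y_true, loss_set):
    G_y_p = G_y_pred_per_round[-1]           # G_y_[-1], last round's output
    mean_true = sum(G_y_true) / len(G_y_true)
    accuracy = 1 - loss_set[-1] / mean_true  # accuracy = 1 - loss_set[-1]/G_y
    return G_y_p, accuracy

pred, acc = infer(
    G_y_pred_per_round=[[3.0, 5.0], [2.1, 4.2]],  # outputs after each round
    G_y_true=[2.0, 4.0],
    loss_set=[1.0, 0.15],                         # final loss = 0.15
)
```

Taking the final Core-round output as the prediction is consistent with claim 6, where every round's output O(t) is appended to G_y_ and the last element reflects the fully message-passed state.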
CN202110002217.6A 2021-01-04 2021-01-04 Power data stream transmission time reasoning method based on graph neural network Withdrawn CN112801261A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110002217.6A CN112801261A (en) 2021-01-04 2021-01-04 Power data stream transmission time reasoning method based on graph neural network

Publications (1)

Publication Number Publication Date
CN112801261A true CN112801261A (en) 2021-05-14


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111245673A (en) * 2019-12-30 2020-06-05 浙江工商大学 SDN time delay sensing method based on graph neural network
CN111726264A (en) * 2020-06-18 2020-09-29 中国电子科技集团公司第三十六研究所 Network protocol variation detection method, device, electronic equipment and storage medium

Non-Patent Citations (1)

Title
JUNFEI LI 等: "Traffic modeling and optimization in datacenters with graph neural network", 《ELSEVIER》 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication
Application publication date: 20210514