CN112073983A - Wireless data center network topology optimization method and system based on flow prediction

Wireless data center network topology optimization method and system based on flow prediction

Info

Publication number
CN112073983A
Authority
CN
China
Prior art keywords
network
node
link
data
network topology
Prior art date
Legal status
Granted
Application number
CN202010847624.2A
Other languages
Chinese (zh)
Other versions
CN112073983B (en)
Inventor
叶彬彬
罗威
蔡万升
李洋
赵高峰
龚亮亮
王宝海
刘金锁
谷志群
高亮
姜元建
殷伟俊
毕善玉
张影
王斌
蒋政
顾辉
顾仁涛
Current Assignee
State Grid Corp of China SGCC
State Grid Zhejiang Electric Power Co Ltd
Beijing University of Posts and Telecommunications
Nari Information and Communication Technology Co
State Grid Electric Power Research Institute
Original Assignee
State Grid Corp of China SGCC
State Grid Zhejiang Electric Power Co Ltd
Beijing University of Posts and Telecommunications
Nari Information and Communication Technology Co
State Grid Electric Power Research Institute
Priority date
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, State Grid Zhejiang Electric Power Co Ltd, Beijing University of Posts and Telecommunications, Nari Information and Communication Technology Co, State Grid Electric Power Research Institute
Priority to CN202010847624.2A
Publication of CN112073983A
Application granted
Publication of CN112073983B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W24/00 Supervisory, monitoring or testing arrangements
    • H04W24/04 Arrangements for maintaining operational condition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/12 Discovery or management of network topologies
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/06 Optimizing the usage of the radio link, e.g. header compression, information sizing, discarding information
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Abstract

The invention discloses a wireless data center network topology optimization method and system based on traffic prediction, wherein the method comprises the following steps: step SS1: predicting the network service traffic in a periodic time period by adopting input parameters to obtain the data service demand D_sd between different racks in the network; step SS2: constructing, according to the data traffic demand D_sd between different racks in the network obtained in step SS1, a network topology with maximum throughput and minimum network overhead; step SS3: based on the network topology obtained in step SS2, establishing the data routing by sequentially computing and allocating the size of the data flows carried on each link. Aiming at the high dynamics of the data volume in the data center network, an FSO-based data center network topology optimization scheme is studied to improve the utilization efficiency of network resources and the throughput of the network.

Description

Wireless data center network topology optimization method and system based on flow prediction
Technical Field
The invention relates to a wireless data center network topology optimization method and system based on flow prediction, and belongs to the technical field of network topology optimization.
Background
With the rapid development of cloud computing, the scale of the data center network, as an infrastructure, is also rapidly expanding. The traditional wired data center network adopts a static topology structure; in the face of highly dynamic big-data traffic, its numerous and complicated wired architecture brings huge challenges to network scale expansion, energy consumption management, operation and maintenance cost, and other aspects. To address these challenges, free-space optical communication (FSO) offers the potential to implement high-performance wireless data center networks with dynamic topologies. FSO has the advantages of high bandwidth, dynamic link establishment, and flexible control, and can effectively improve network transmission performance and reduce operation cost. However, FSO uses a point-to-point laser beam for data transmission in free space, and the transmitting end and the receiving end of the laser beam must be kept strictly in line to ensure the reliability of the link. In addition, the number of links in the network is limited by the number of laser transceivers in the network. These factors pose challenges for FSO-based wireless data center network topology configuration.
Due to the high dynamics of traffic, congestion hot spots typically occur at every level of the fabric in a data center network. Statistically, 86% of the links in a data center network are congested for more than 10 seconds, and 15% of the links are congested for more than 100 seconds. Because some traffic flows are bursty, short congestion states typically involve tens of links at the same time, while long periods of congestion are typically concentrated on a small set of links. Congestion hot spots caused by large flows of long duration occupy the switch cache for a long time, causing large queuing delays for small-flow transmission and ultimately degrading transmission performance. Therefore, how to improve the throughput of the network is a key issue for wireless data center network topology configuration.
In the existing wireless data center network architecture Firefly, the FSO transceivers are placed on the racks. Each FSO transceiver is equipped with a plurality of switchable mirrors, which are pre-configured and aligned with the receiving ends on other racks. The FSO link can deflect the incident beam by rotating the mirror appropriately. Such research aims to provide a centralized topology control method that maximizes network capacity and establishes data routes to dynamically allocate network bandwidth. However, current wireless data center network topology research has certain limitations. First, because data center network traffic is highly dynamic, the network topology needs to be adjusted in real time to meet the service requirements of the network, and accurately predicting these requirements is the key to guaranteeing network performance; the current schemes lack a prediction process for data traffic. Second, only the capacity and performance of the network are considered; since the number of FSO optical links is limited by the number of transceivers in the network, configuring a large number of links in the network increases the network overhead and reduces the utilization efficiency of network resources.
Disclosure of Invention
In the prior art, the throughput and capacity of the network are mainly considered in the network optimization process, while the overhead of the network (the number of FSO links in the network) is not optimized. Because FSO links are point-to-point, each FSO link in the network needs a laser transmitter and a receiver; when the data volume is large and the number of nodes in the network is large, considerable network resources must be consumed to construct enough FSO links to guarantee network performance, which increases the network overhead. Aiming at this problem, the invention provides a wireless data center network topology optimization scheme that takes maximum throughput and minimum network overhead as dual objectives.
The invention mainly solves the problems of small network capacity and high cost faced by wireless data center network topology configuration oriented to free-space optical communication (FSO). The invention mainly uses FSO for inter-rack communication in the data center network; due to the growth of data volume and the high dynamics of data, efficient utilization of network capacity and resources becomes the main consideration in data center network topology reconfiguration. Therefore, aiming at the high dynamics of the data volume in the data center network, an FSO-based data center network topology optimization scheme is studied to improve the utilization efficiency of network resources and the throughput of the network.
Aiming at the above problems, in order to improve the utilization rate of network resources, the invention combines the communication characteristics of FSO and optimizes the network topology by predicting the data traffic in a periodic time period, with maximum network throughput and minimum network overhead as the objectives.
The invention specifically adopts the following technical scheme: the wireless data center network topology optimization method based on flow prediction comprises the following steps:
step SS1: predicting the network service traffic in a periodic time period by adopting input parameters to obtain the data service demand D_sd between different racks in the network;
Step SS 2: according to the data traffic demand D between different racks in the network obtained in the step SS1sdConstructing a network topology with maximum throughput and minimum network overhead;
step SS3: based on the network topology obtained in step SS2, establishing the data routing by sequentially computing and allocating the size of the data flows carried on each link.
As a preferred embodiment, the input parameters in step SS1 specifically include:
setting the set of racks in the data center network as N, installing an FSO transceiver on each rack, with the racks communicating with one another through FSO links, and setting the node degree of each rack as the maximum number of FSO links it can connect;
the activation state of an FSO link is denoted by a binary variable b_ij: if the FSO link is active, b_ij = 1; otherwise b_ij = 0;
All the racks have the same node degree, and the node degree of any rack in the network is less than or equal to the number of the transceivers installed on the rack;
the power consumed by the transmitting end of an FSO link is the overhead c_ij required for constructing the FSO link;
The data flow aggregated on each FSO link cannot exceed the capacity R of the linkij
Data flow f from source chassis node s to destination chassis node dsdOver a link lijData of
Figure BDA0002643619440000042
A routing mode of shunting a plurality of paths is adopted in the network, and the data stream in the network is divided into a plurality of sub-data streams;
the total transmission power consumed by all FSO transceivers in the entire FSO-based wireless data center network topology is the overhead of the network.
As a preferred embodiment, obtaining the data traffic demand D_sd between different racks in the network in step SS1 specifically comprises: defining the predicted traffic request quantity between any pair of racks s and d as D_sd, with the data flow f_sd between each pair of rack nodes being less than or equal to the traffic request quantity D_sd, and predicting the data traffic demand D_sd between different racks at different times within a period.
As a preferred embodiment, step SS2 specifically includes:
step SS 21: selecting an extension path, defining a benefit function, and evaluating each extension path, wherein the benefit function of the extension path is described as follows:
(Formula (1): the network benefit function u; shown only as an image in the original publication.)
where D_sd is the traffic demand between rack node s and node d, h_sd is the minimum hop count between node s and node d in the current network topology, and Σ_ij b_ij c_ij is the power consumption in the network, i.e., the overhead of the network; correspondingly, if an extended path is selected to update the current network topology, the difference between the updated network benefit value and the current network topology benefit value is
(Formula (2): the change Δu in the network benefit value; shown only as an image in the original publication.)
Step SS 22: searching a set M of the extended path updating activation links, and constructing an optimal network topology structure, specifically comprising:
step SS 221: finding an extended path under the node degree limiting condition, and selecting a link needing to be activated;
step SS 222: and judging whether the selected link can update the network topology according to the network benefit value so as to improve the performance of the network.
As a preferred embodiment, step SS221 specifically comprises: first, selecting a node a whose node degree is smaller than the threshold; second, among all nodes adjacent to node a that have not established a link with it, selecting a node b whose node degree is smaller than the threshold; third, finding all nodes c that have established a link with node b, and judging whether the node degree of the node c connected to node b is smaller than the threshold: if the node degree of node c is greater than or equal to the threshold, reselecting node b, and if it is smaller than the threshold, selecting node c as a node in the extended path; then, selecting a node d that is adjacent to node c and has not established a link with node c as a node in the extended path, the selected extended path a-b-c-d then giving the links to be activated.
As a preferred embodiment, the step SS222 specifically includes:
changing the network topology according to the selected expansion path, and changing the state of the link passing through the path along the expansion path, namely changing the activated link into an inactivated link and activating the inactivated link, thereby forming a new network topology;
when a new network topology is formed, judging whether the network topology performance can be improved according to the change Δu of the network benefit value before and after the update; if Δu > u, where u is the updated network benefit value, judging that the selected extended path can improve the performance of the network topology; otherwise, if Δu ≤ u, returning to step SS221 to reselect an extended path;
the algorithm terminates when the node degrees of all nodes in the network topology have reached the threshold.
As a preferred embodiment, step SS3 specifically includes:
step SS31: sorting the traffic volumes in descending order and processing the service demands in sequence from largest to smallest;
step SS 32: aiming at each service, selecting a maximum augmentation path for the service according to a network maximum flow algorithm, wherein the augmentation path is composed of a series of activated and inactivated links, and a starting link and a terminating link are both inactivated links;
step SS 33: calculating the residual capacity of a link in the network and updating the current state of the network;
step SS34: the algorithm terminates when no augmentation path can be found in the network;
step SS 35: all data traffic through the network is summed.
The invention also provides a wireless data center network topology optimization system based on flow prediction, which comprises the following steps:
a data traffic demand generation module to: predict the network service traffic in a periodic time period by adopting input parameters to obtain the data service demand D_sd between different racks in the network;
a network topology generation module to: construct, according to the data traffic demand D_sd between different racks in the network obtained by the data traffic demand generation module, a network topology with maximum throughput and minimum network overhead;
a data route generation module to: based on the network topology obtained by the network topology generation module, sequentially compute and allocate the size of the data flows carried on each link to establish the data routing.
As a preferred embodiment, the input parameters in the data traffic demand generation module specifically include:
setting the set of racks in the data center network as N, installing an FSO transceiver on each rack, with the racks communicating with one another through FSO links, and setting the node degree of each rack as the maximum number of FSO links it can connect;
the activation state of an FSO link is denoted by a binary variable b_ij: if the FSO link is active, b_ij = 1; otherwise b_ij = 0;
All the racks have the same node degree, and the node degree of any rack in the network is less than or equal to the number of the transceivers installed on the rack;
the power consumed by the transmitting end of an FSO link is the overhead c_ij required for constructing the FSO link;
The data flow aggregated on each FSO link cannot exceed the capacity R of the linkij
the data of the flow f_sd from source rack node s to destination rack node d that is carried over link l_ij (the corresponding per-link flow variable is shown only as an image in the original publication);
A routing mode of shunting a plurality of paths is adopted in the network, and the data stream in the network is divided into a plurality of sub-data streams;
the total transmission power consumed by all FSO transceivers in the entire FSO-based wireless data center network topology is the overhead of the network.
As a preferred embodiment, the data traffic demand generation module obtains the data traffic demand D_sd between different racks in the network specifically as follows: the predicted traffic request quantity between any pair of racks s and d is defined as D_sd, the data flow f_sd between each pair of rack nodes is less than or equal to the traffic request quantity D_sd, and the data traffic demand D_sd between different racks at different times within a period is predicted.
As a preferred embodiment, the network topology generating module specifically includes:
selecting an extension path, defining a benefit function, and evaluating each extension path, wherein the benefit function of the extension path is described as follows:
(Formula (1): the network benefit function u; shown only as an image in the original publication.)
where D_sd is the traffic demand between rack node s and node d, h_sd is the minimum hop count between node s and node d in the current network topology, and Σ_ij b_ij c_ij is the power consumption in the network, i.e., the overhead of the network; correspondingly, if an extended path is selected to update the current network topology, the difference between the updated network benefit value and the current network topology benefit value is
(Formula (2): the change Δu in the network benefit value; shown only as an image in the original publication.)
Searching a set M of the extended path updating activation links, and constructing an optimal network topology structure, specifically comprising:
finding an extended path under the node degree limiting condition, and selecting a link needing to be activated;
and judging whether the selected link can update the network topology according to the network benefit value so as to improve the performance of the network.
As a preferred embodiment, the finding of the extended path under the node degree constraint in the network topology generation module specifically comprises: first, selecting a node a whose node degree is smaller than the threshold; second, among all nodes adjacent to node a that have not established a link with it, selecting a node b whose node degree is smaller than the threshold; third, finding all nodes c that have established a link with node b, and judging whether the node degree of the node c connected to node b is smaller than the threshold: if the node degree of node c is greater than or equal to the threshold, reselecting node b, and if it is smaller than the threshold, selecting node c as a node in the extended path; then, selecting a node d that is adjacent to node c and has not established a link with node c as a node in the extended path, the selected extended path a-b-c-d then giving the links to be activated.
As a preferred embodiment, the determining, by the network topology generating module according to the network benefit value, whether the selected link can update the network topology to improve the performance of the network specifically includes:
changing the network topology according to the selected expansion path, and changing the state of the link passing through the path along the expansion path, namely changing the activated link into an inactivated link and activating the inactivated link, thereby forming a new network topology;
when a new network topology is formed, judging whether the network topology performance can be improved according to the change Δu of the network benefit value before and after the update; if Δu > u, where u is the updated network benefit value, judging that the selected extended path can improve the performance of the network topology; otherwise, if Δu ≤ u, returning to reselect an extended path;
the algorithm terminates when the node degrees of all nodes in the network topology have reached the threshold.
As a preferred embodiment, the data route generating module specifically includes:
sorting the traffic volumes in descending order and processing the service demands in sequence from largest to smallest;
aiming at each service, selecting a maximum augmentation path for the service according to a network maximum flow algorithm, wherein the augmentation path is composed of a series of activated and inactivated links, and a starting link and a terminating link are both inactivated links;
calculating the residual capacity of a link in the network and updating the current state of the network;
the algorithm terminates when no augmentation path can be found in the network;
all data traffic through the network is summed.
The invention achieves the following beneficial effects. First, compared with existing wireless data center network research, the method of the invention first predicts the network traffic through an intelligent prediction method, which reduces the degradation of network topology performance caused by service changes and effectively prevents the extra network overhead introduced by frequent topology changes. Second, in the network topology optimization process, the throughput and the overhead of the network are considered simultaneously. Third, by exploring the relationship between the hop counts of all nodes in the network and the network throughput, the invention introduces the calculation of a network benefit value so as to continuously optimize the network topology. The relationship between node hop count and throughput is as follows: the throughput of the network is maximized when the number of hops traversed between the source node and the destination node of every service request in the network is minimized. The invention assumes that the number of transceivers installed on each rack in the network is the same and that the traffic requests between each pair of nodes are the same. The data flow between source rack node s and destination node d is f_sd, and h_sd is the minimum hop count between s and d. The number of racks in the network is N, and the average hop count between every two racks in the network is proportional to the sum of the minimum hop counts between all racks in the network. The network capacity is equal to the product of the average hop count between each pair of racks in the network and the data flow between the racks. Assuming that each rack has the same node degree, i.e., the same number of connected links, the maximum capacity of the network is fixed, and as the average hop count between nodes in the network decreases, the data flow f_sd increases. At the same time, an increase in f_sd means an increase in network throughput. Thus, network throughput is maximized when the average hop count in the network is minimized. Fourth, the dual objective of maximizing network throughput and minimizing network overhead can thus be expressed in terms of Σ_sd D_sd h_sd and Σ_ij b_ij c_ij; the change Δu between the current and the updated network benefit values is then calculated according to formula (2), so that the throughput of the network can be continuously improved and the network overhead reduced.
Drawings
FIG. 1 is a flow chart of a method for traffic prediction based topology optimization of a wireless data center network of the present invention;
FIG. 2 is a flow chart of network traffic flow prediction of the present invention;
FIG. 3 is an expanded path schematic of the present invention;
FIG. 4 is a topology construction flow of the present invention;
FIG. 5 is a schematic diagram of the selection process of the extended paths a-b-c-d of the present invention;
FIG. 6 is a flow chart of data route establishment.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
Example 1: in the prior art, the throughput and capacity of the network are mainly considered in the process of network optimization, and the overhead (the number of FSO links in the network) of the network is not optimized. Due to the characteristic of the FSO point-to-point links, each FSO link in the network needs a laser transmitter and a receiver, and when the data volume is large and the number of nodes in the network is large, large network resources need to be consumed to construct enough FSO links to ensure the network performance, so that the network overhead is increased. Aiming at the problem, the invention provides a topology optimization scheme of the wireless data center network by simultaneously taking the maximum throughput and the minimum network overhead as double targets.
Based on the above analysis, and considering the environmental characteristics of the data center network and the communication characteristics of FSO, the invention provides a topology optimization scheme with maximum network throughput and minimum network overhead for the problems of network resource utilization and network capacity in the data center. In the FSO-based data center network, an FSO module is arranged at the top of each rack (Top-of-Rack, ToR), and the angle of the transceiver can be flexibly adjusted to ensure normal reception of the beam signals. Assume that the set of racks within the data center network is N. Racks communicate with each other over FSO links. The number of FSO transceivers per rack is limited, i.e., the number of FSO links that can be connected per rack is limited. The node degree of a rack is defined as the maximum number of links it can connect.
Input parameters:
The activation state of a link is denoted by a binary variable b_ij: if the link is active, b_ij = 1; otherwise b_ij = 0;
All the racks have the same node degree, namely the maximum number of connectable links, and the node degree of any rack in the network is less than or equal to the number of transceivers installed on the rack;
the power consumed by the transmitting end of an FSO link is the overhead c_ij required to build the link;
The data flow aggregated on each FSO link cannot exceed the capacity R of the linkij
the data of the flow f_sd from source rack node s to destination rack node d that is carried over link l_ij (the corresponding per-link flow variable is shown only as an image in the original publication);
multi-path routing is adopted in the network, and a data stream in the network is divided into a plurality of sub-data streams;
the total transmission power consumed by all FSO transceivers in the entire FSO-based wireless data center network topology is the overhead of the network.
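Purely as an illustrative sketch (not part of the patent text), the model parameters listed above could be collected in code roughly as follows; the class name, attribute names, and method names are assumptions introduced here for illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, Set, Tuple

Link = Tuple[int, int]   # ordered pair (i, j) of rack indices

@dataclass
class NetworkModel:
    racks: Set[int]                           # the set N of racks (ToR nodes)
    degree_limit: int                         # transceivers per rack = max node degree
    link_cost: Dict[Link, float]              # c_ij: transmit power needed for link (i, j)
    link_capacity: Dict[Link, float]          # R_ij: capacity of FSO link (i, j)
    active: Dict[Link, int] = field(default_factory=dict)               # b_ij in {0, 1}
    demand: Dict[Tuple[int, int], float] = field(default_factory=dict)  # D_sd per rack pair

    def node_degree(self, node: int) -> int:
        """Current node degree: number of active FSO links incident to this rack."""
        return sum(v for (i, j), v in self.active.items() if node in (i, j))

    def overhead(self) -> float:
        """Network overhead: total transmit power of active links, sum of b_ij * c_ij."""
        return sum(v * self.link_cost[link] for link, v in self.active.items())
```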
The topology optimization scheme comprises the following specific steps: the invention designs a topology optimization scheme with high network resource utilization efficiency based on intelligent traffic prediction for the wireless data center network. The main idea is to decompose the data center network topology configuration problem into three sub-problems, as shown in the scheme design of FIG. 1: first, the problem of predicting network service traffic within a periodic time period; second, selecting the links to be activated between the racks to construct an optimal network topology structure; third, establishing the data routing on the basis of the optimal network topology structure.
(1) The problem of predicting network service traffic within a periodic time period: the predicted traffic request quantity between any pair of racks s and d is defined as D_sd; the data flow f_sd between each pair of rack nodes is then less than or equal to the traffic request quantity D_sd, and the data traffic demand D_sd between different racks at different times within a period is predicted. The process is shown in FIG. 2. First, data features are extracted: feature analysis is carried out on the traffic transmission data between different racks in the data network, and time and data volume are selected as the two main features of data extraction. Second, it is judged whether the selected data meet the feature requirements: before the data are screened as input for the traffic prediction model, it must be judged whether they meet the feature requirements; data with obvious features and small error deviation are selected as input for the traffic prediction model, and data with large error deviation are discarded. Third, the screened data are grouped: 80% of the screened data are used as the training set for neural network training and 20% as the test set; the data in the training set are used to complete the training of the neural network, and the test set is used for model prediction. The partitioning of the raw data into sets is also done to prevent overfitting of the model. The model is then trained on the data: the screened data are input to train the traffic prediction model, where an LSTM (long short-term memory) recurrent neural network is built in TensorFlow, the training process is completed with TensorFlow, and the prediction of the data volume at a future moment is output. Finally, the prediction result is output.
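The following is a minimal sketch of how such an LSTM-based prediction step could look for a single rack pair, using TensorFlow/Keras as named in the description; the window size, layer sizes, training settings, and the synthetic example series are illustrative assumptions, while the use of an LSTM, TensorFlow, and the 80%/20% split follows the text above. It is a sketch of the idea, not the patent's actual model.

```python
import numpy as np
import tensorflow as tf

def make_windows(series, window=12):
    """Turn a 1-D traffic series for one (s, d) rack pair into (X, y) samples."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.asarray(X, dtype=np.float32)[..., np.newaxis], np.asarray(y, dtype=np.float32)

def build_predictor(window=12):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window, 1)),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(1),      # predicted traffic volume for the next slot
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Synthetic example: periodic traffic with noise for a single rack pair.
traffic = np.abs(np.sin(np.linspace(0, 20, 500)) * 100 + np.random.randn(500) * 5)
X, y = make_windows(traffic)
split = int(0.8 * len(X))              # 80% training set, 20% test set, as in the text
model = build_predictor()
model.fit(X[:split], y[:split], epochs=5, batch_size=32, verbose=0)
next_demand = float(model.predict(X[split:][-1:], verbose=0)[0, 0])
print(f"predicted next-period demand D_sd ~ {next_demand:.1f}")
```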
(2) The problem of constructing the network topology with maximum network throughput and minimum overhead: a network topology with maximum throughput and minimum network overhead is constructed according to the service demands of the different racks in the network obtained in step (1). The core idea of the invention is to jointly optimize network throughput and network overhead based on the matching idea, selecting the best set of active links among the racks. The invention updates the set M of active links by finding extended paths. An extended path is a path in which inactive links and active links alternate in the network, similar to an augmented path, as shown in FIG. 3.
Selection of an extended path: not every extended path can update the topological structure of the network, so the invention evaluates each extended path by defining a benefit function. The benefit function of an extended path is described as follows:
(Formula (1): the network benefit function u; shown only as an image in the original publication.)
where D_sd is the traffic demand between rack node s and node d, h_sd is the minimum hop count between node s and node d in the current network topology, and Σ_ij b_ij c_ij is the power consumption in the network, i.e., the overhead of the network. Correspondingly, if an extended path is selected to update the current network topology, the difference between the updated network benefit value and the current network topology benefit value is
(Formula (2): the change Δu in the network benefit value; shown only as an image in the original publication.)
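The benefit function (1) and its variation (2) appear only as images in the original publication. Purely as a labelled assumption consistent with the surrounding description (demand-weighted hop counts and link overhead both drive the benefit down), one plausible form would be:

```latex
% Assumed form only; the patent's actual formulas (1) and (2) are not reproduced in the text.
u = -\sum_{s,d} D_{sd}\, h_{sd} - \sum_{i,j} b_{ij}\, c_{ij},
\qquad
\Delta u = u_{\text{updated}} - u_{\text{current}}
```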
Topology construction process: in order to find the active link set M and construct an optimal network topology, the invention proceeds in two stages, with the specific flow shown in FIG. 4: 1) finding an extended path under the node-degree limit and determining the links to be activated; 2) judging, according to the network benefit value, whether the selected links can update the network topology so as to improve the performance of the network.
In the first stage, an extended path under the node-degree limit is searched for. The process is similar to finding an augmented path in the Hungarian algorithm: an extended path starts with an inactive link, alternates between active and inactive links in the network, and ends with an inactive link, as shown in FIG. 3. In order to ensure that the number of activated links keeps the node degree of each rack node smaller than the configured number of transceivers, the number of links connected to the two end points of a link is checked before the link is selected for activation. The selection flow of the extended path a-b-c-d is shown in FIG. 5. First, a node a whose node degree is smaller than the threshold is selected; second, among all nodes adjacent to node a that have not established a link with it, a node b whose node degree is smaller than the threshold is selected; third, all nodes c that have established a link with node b are found, and it is judged whether the node degree of the node c connected to node b is smaller than the threshold: if the node degree of node c is greater than or equal to the threshold, node b is reselected, and if it is smaller than the threshold, node c is selected as a node in the extended path; then, a node d that is adjacent to node c and has not established a link with node c is selected as a node in the extended path, and a-b-c-d is the extended path found.
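A minimal sketch of this first-stage search is given below, assuming the topology is kept as a set of active links plus a per-node degree table; the function and variable names are introduced here for illustration, and, following the text, no degree check is applied to node d.

```python
# Illustrative sketch of the extended-path search a-b-c-d under the degree limit.
def find_extended_path(nodes, active, degree, limit):
    """nodes: iterable of rack ids; active: set of frozenset({i, j}) active links;
    degree: dict node -> current node degree; limit: degree threshold.
    Returns a tuple (a, b, c, d) or None if no extended path exists."""
    def linked(x, y):
        return frozenset((x, y)) in active

    for a in nodes:
        if degree[a] >= limit:
            continue                                   # a must have spare degree
        for b in nodes:
            if b == a or linked(a, b) or degree[b] >= limit:
                continue                               # a-b is an inactive link
            for c in nodes:
                if c in (a, b) or not linked(b, c) or degree[c] >= limit:
                    continue                           # b-c is an active link
                for d in nodes:
                    if d in (a, b, c) or linked(c, d):
                        continue                       # c-d is an inactive link
                    return a, b, c, d
    return None
```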
In the second stage, the extended path found is evaluated according to formula (2) to determine whether it can improve the performance of the network topology. The specific decision process for the extended path is shown in FIG. 4:
1) The network topology is changed according to the selected extended path. Along the extended path, the state of each link on the path is flipped, that is, active links become inactive and inactive links are activated, thereby forming a new network topology.
2) When the new network topology is formed, whether the network topology performance can be improved is judged according to the change Δu of the network benefit value before and after the update. If Δu > u, the selected extended path can improve the performance of the network topology (increase the network throughput and reduce the network overhead); otherwise, if Δu ≤ u, the extended path selected in the first stage cannot improve the throughput of the current network topology, and the first stage must be re-entered to reselect an extended path.
3) When the node degrees of all nodes in the network topology have reached the threshold, no extended path that would improve the throughput of the network topology and reduce the network overhead, i.e., no extended path satisfying Δu > u, can be found, and the algorithm terminates.
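A minimal sketch of this second-stage decision, under the same assumptions as the previous sketch, is given below; benefit_fn stands in for formula (1), which is not reproduced in the text, and the acceptance test follows the Δu > u criterion stated above.

```python
# Illustrative sketch: flip the link states along the extended path and keep the
# change only if the benefit improvement criterion holds.
def try_update_topology(path, active, degree, benefit_fn):
    a, b, c, d = path
    candidate = set(active)
    candidate.add(frozenset((a, b)))        # activate the inactive link a-b
    candidate.discard(frozenset((b, c)))    # deactivate the active link b-c
    candidate.add(frozenset((c, d)))        # activate the inactive link c-d

    delta_u = benefit_fn(candidate) - benefit_fn(active)
    if delta_u > benefit_fn(candidate):     # criterion delta_u > u from the text
        degree[a] += 1                      # a and d each gain one link;
        degree[d] += 1                      # b and c gain one and lose one
        return candidate, True
    return active, False                    # reject; reselect an extended path
```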
(3) Problem of establishing data route
On the basis of the constructed network topology, the size of the data flows carried on each link is computed and allocated in sequence according to the maximum-flow algorithm idea. The specific process is as follows:
1) The traffic volumes are sorted in descending order, and the service demands are processed in sequence from largest to smallest.
2) For each service, a maximum augmentation path is selected for it according to the network maximum-flow algorithm; the augmentation path is composed of a series of activated and inactivated links, and the starting link and the terminating link are both inactivated links.
3) The residual capacity of the links in the network is calculated and the current state of the network is updated.
4) The algorithm terminates when no augmentation path can be found in the network.
5) All data traffic through the network is summed.
The data route establishment procedure can satisfy as much data traffic as possible in a capacity-limited network, i.e., it guarantees that the network throughput is maximized in the constructed network topology.
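A simplified sketch of this routing step is shown below: demands are served largest-first, and flow is pushed along BFS-found paths over the remaining forward capacity. This is a simplification of a full maximum-flow computation, which would also maintain reverse residual edges; all names are illustrative and not taken from the patent.

```python
from collections import deque

def route_all_demands(capacity, demands):
    """capacity: dict {(i, j): remaining capacity} per directed link;
    demands: list of (s, d, D_sd) tuples. Returns the total routed traffic."""
    def bfs_path(s, d):
        parent, queue = {s: None}, deque([s])
        while queue:
            u = queue.popleft()
            if u == d:
                break
            for (a, b), cap in capacity.items():
                if a == u and cap > 0 and b not in parent:
                    parent[b] = u
                    queue.append(b)
        if d not in parent:
            return None
        path, v = [], d
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        return path[::-1]

    total = 0.0
    for s, d, need in sorted(demands, key=lambda x: -x[2]):   # largest demand first
        while need > 1e-9:
            path = bfs_path(s, d)
            if not path:
                break                                          # no augmenting path left
            push = min(need, min(capacity[e] for e in path))
            for e in path:
                capacity[e] -= push                            # update residual capacity
            need -= push
            total += push
    return total
```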
Example 2: the invention also provides a wireless data center network topology optimization system based on flow prediction, which comprises:
a data traffic demand generation module to: predict the network service traffic in a periodic time period by adopting input parameters to obtain the data service demand D_sd between different racks in the network;
a network topology generation module to: construct, according to the data service demand D_sd between different racks in the network obtained by the data service demand generation module, a network topology with maximum throughput and minimum network overhead;
a data route generation module to: based on the network topology obtained by the network topology generation module, sequentially compute and allocate the size of the data flows carried on each link to establish the data routing.
Optionally, the input parameters in the data traffic demand generation module specifically include:
setting the set of racks in the data center network as N, installing an FSO transceiver on each rack, with the racks communicating with one another through FSO links, and setting the node degree of each rack as the maximum number of FSO links it can connect;
the activation state of an FSO link is denoted by a binary variable b_ij: if the FSO link is active, b_ij = 1; otherwise b_ij = 0;
All the racks have the same node degree, namely the maximum number of the connectable FSO links, and the node degree of any rack in the network is less than or equal to the number of the transceivers installed on the rack;
the power consumed by the transmitting end of an FSO link is the overhead c_ij required for constructing the FSO link;
The data flow aggregated on each FSO link cannot exceed the capacity R of the linkij
the data of the flow f_sd from source rack node s to destination rack node d that is carried over link l_ij (the corresponding per-link flow variable is shown only as an image in the original publication);
A routing mode of shunting a plurality of paths is adopted in the network, and the data stream in the network is divided into a plurality of sub-data streams;
the total transmission power consumed by all FSO transceivers in the entire FSO-based wireless data center network topology is the overhead of the network.
Optionally, the data service demand generation module obtains the data service demand D_sd between different racks in the network specifically as follows: the predicted traffic request quantity between any pair of racks s and d is defined as D_sd, the data flow f_sd between each pair of rack nodes is less than or equal to the traffic request quantity D_sd, and the data traffic demand D_sd between different racks at different times within a period is predicted. The method specifically comprises the following steps:
step SS 11: data feature extraction, namely performing feature analysis on flow transmission data among different racks in a data network, and selecting time and data volume as two main features of data extraction;
step SS 12: judging whether the selected data meets the characteristic requirements, judging whether the data meets the characteristic requirements before screening the data as the input of a flow prediction model, selecting the data with obvious characteristics and small error deviation as the input of the flow prediction model, and discarding the data with large error deviation;
step SS 13: grouping the screening data, wherein 80% of the screening data is used as a training set for neural network training, and 20% of the screening data is used as a testing set; completing the training of the neural network by the data in the training set, and predicting a model by using the test set; carrying out set division on original data;
step SS14: training the model on the data: the screened data are input to train the traffic prediction model, where an LSTM (long short-term memory) recurrent neural network is built in TensorFlow, the training process is completed with TensorFlow, and the prediction of the data volume at a future moment is output.
Optionally, the network topology generating module specifically includes:
selecting an extension path, defining a benefit function, and evaluating each extension path, wherein the benefit function of the extension path is described as follows:
(Formula (1): the network benefit function u; shown only as an image in the original publication.)
where D_sd is the traffic demand between rack node s and node d, h_sd is the minimum hop count between node s and node d in the current network topology, and Σ_ij b_ij c_ij is the power consumption in the network, i.e., the overhead of the network; correspondingly, if an extended path is selected to update the current network topology, the difference between the updated network benefit value and the current network topology benefit value is
(Formula (2): the change Δu in the network benefit value; shown only as an image in the original publication.)
Searching a set M of the extended path updating activation links, and constructing an optimal network topology structure, specifically comprising:
finding an extended path under the node degree limiting condition, and selecting a link needing to be activated;
and judging whether the selected link can update the network topology according to the network benefit value so as to improve the performance of the network.
Optionally, the finding of the extended path under the node degree constraint in the network topology generation module specifically comprises: first, selecting a node a whose node degree is smaller than the threshold; second, among all nodes adjacent to node a that have not established a link with it, selecting a node b whose node degree is smaller than the threshold; third, finding all nodes c that have established a link with node b, and judging whether the node degree of the node c connected to node b is smaller than the threshold: if the node degree of node c is greater than or equal to the threshold, reselecting node b, and if it is smaller than the threshold, selecting node c as a node in the extended path; then, selecting a node d that is adjacent to node c and has not established a link with node c as a node in the extended path, the selected extended path a-b-c-d then giving the links to be activated.
Optionally, the determining, by the network topology generation module according to the network benefit value, whether the selected link can update the network topology so as to improve the performance of the network specifically includes:
changing the network topology according to the selected expansion path, and changing the state of the link passing through the path along the expansion path, namely changing the activated link into an inactivated link and activating the inactivated link, thereby forming a new network topology;
when a new network topology is formed, judging whether the network topology performance can be improved according to the change Δu of the network benefit value before and after the update; if Δu > u, where u is the updated network benefit value, judging that the selected extended path can improve the performance of the network topology, i.e., that the network throughput can be increased and the network overhead reduced; otherwise, if Δu ≤ u, the selected extended path cannot improve the throughput of the current network topology, and an extended path is reselected;
when the node degrees of all nodes in the network topology have reached the threshold, no extended path that would improve the throughput of the network topology and reduce the network overhead, i.e., no extended path satisfying Δu > u, can be found, and the algorithm terminates.
Optionally, the data route generating module specifically includes:
sorting the traffic volumes in descending order and processing the service demands in sequence from largest to smallest;
aiming at each service, selecting a maximum augmentation path for the service according to a network maximum flow algorithm, wherein the augmentation path is composed of a series of activated and inactivated links, and a starting link and a terminating link are both inactivated links;
calculating the residual capacity of a link in the network and updating the current state of the network;
the algorithm terminates when no augmentation path can be found in the network;
all data traffic through the network is summed.
Explanation of related technical terms:
LSTM: the Long-Short Term Memory Recurrent Neural Network (LSTM-RNN) is designed mainly for solving the Long-Term dependence problem, and the LSTM is essentially to make corresponding modification on the structure layer of the common RNN Neural Network to remember information in a Long period. In conventional RNN network architectures, there is typically a common connection structure with only a single layer within a neuron. LSTM also have a similar structure, but they are no longer a single layer, but use four interacting layers, with the structure of the gates (gates) enabling information selectivity by operation of the sigmoid neural layer and point-by-point multiplication.
TensorFlow: TensorFlow is an open-source software library that uses data flow graphs for numerical computation and is used for machine learning and deep neural network research, but the generality of the system allows it to be widely used in other computing fields.
Firefly network architecture: FSO is used for inter-chassis communication. Each chassis mounts an FSO transceiver. Each FSO transceiver is equipped with a plurality of switchable mirrors. The switchable mirrors have been pre-configured and aligned with the receiving ends on the other shelf. The FSO link may deflect the incident beam by rotating the mirror appropriately.
The Hungarian algorithm: the Hungarian algorithm is the most common algorithm for bipartite graph matching; its core is to find augmenting paths, and it obtains the maximum matching of a bipartite graph by means of augmenting paths. Suppose G(V, E) is an undirected graph. If the vertex set V can be divided into two mutually disjoint subsets such that the two endpoints of every edge belong to the two different subsets, the graph is bipartite, and the matching problem is to select the largest possible set of edges such that no two selected edges share a vertex. The basic principle is: first, define the edge set M as empty; second, find an augmenting path P and obtain a larger matching M' to replace M through an XOR operation; third, repeat the second step until no new augmenting path can be found.
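As an illustrative sketch of the augmenting-path idea described above (not code from the patent), the following finds a maximum bipartite matching by repeatedly searching for an augmenting path with depth-first search; the function name and the adjacency-dict input format are assumptions.

```python
def max_bipartite_matching(adj):
    """adj: dict mapping each left vertex to the right vertices it may match."""
    match = {}                                # right vertex -> matched left vertex

    def augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its current partner can be re-matched elsewhere
            if v not in match or augment(match[v], seen):
                match[v] = u
                return True
        return False

    return sum(augment(u, set()) for u in adj)

# Example: three left vertices competing for two right vertices -> matching size 2.
print(max_bipartite_matching({0: [0], 1: [0, 1], 2: [1]}))
```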
Maximum-flow algorithm: network flow is a problem-solving method analogous to water flow and is closely related to linear programming. The maximum-flow problem, a combinatorial optimization problem, discusses how to make full use of the capacity of the devices so that the transported flow is maximized for the best result. A network flow graph is a directed graph with exactly one source and one sink, and the maximum flow is the largest flow that can be carried from the source to the sink.
Augmentation path: a concept from the Hungarian algorithm. If a path P connects two unmatched vertices in graph G, and edges belonging to M and edges not belonging to M appear alternately on P, then P is called an augmentation path with respect to M.
Average hop count between network nodes: the ratio of the sum of the minimum hop counts between every two nodes in the network to the number of node pairs in the network, where the minimum hop count between any two nodes can be obtained with a shortest-path algorithm.
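Restated as a formula, and assuming the normalization is by the number of ordered rack pairs, the definition above reads:

```latex
% Illustrative restatement; h_{sd} is the minimum hop count between racks s and d,
% obtained with a shortest-path algorithm, and |N| is the number of racks.
\bar{h} = \frac{\sum_{s \neq d} h_{sd}}{|N|\,(|N|-1)}
```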
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting the same, and although the present invention is described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that: modifications and equivalents may be made to the embodiments of the invention without departing from the spirit and scope of the invention, which is to be covered by the claims.

Claims (14)

1. The wireless data center network topology optimization method based on flow prediction is characterized by comprising the following steps:
step SS1: predicting the network service traffic in a periodic time period by adopting input parameters to obtain the data service demand D_sd between different racks in the network;
Step SS 2: according to the data traffic demand D between different racks in the network obtained in the step SS1sdConstructing a network topology with maximum throughput and minimum network overhead;
step SS3: based on the network topology obtained in step SS2, establishing the data routing by sequentially computing and allocating the size of the data flows carried on each link.
2. The traffic prediction-based wireless data center network topology optimization method according to claim 1, wherein the input parameters in step SS1 specifically include:
setting the set of racks in the data center network as N, installing an FSO transceiver on each rack, with the racks communicating with one another through FSO links, and setting the node degree of each rack as the maximum number of FSO links it can connect;
the activation state of an FSO link is denoted by a binary variable b_ij: if the FSO link is active, b_ij = 1; otherwise b_ij = 0;
All the racks have the same node degree, and the node degree of any rack in the network is less than or equal to the number of the transceivers installed on the rack;
the power consumed by the transmitting end of an FSO link is the overhead c_ij required for constructing the FSO link;
The data flow aggregated on each FSO link cannot exceed the capacity R of the linkij
the data of the flow f_sd from source rack node s to destination rack node d that is carried over link l_ij (the corresponding per-link flow variable is shown only as an image in the original publication);
A routing mode of shunting a plurality of paths is adopted in the network, and the data stream in the network is divided into a plurality of sub-data streams;
the total transmission power consumed by all FSO transceivers in the entire FSO-based wireless data center network topology is the overhead of the network.
3. The method for optimizing the topology of the wireless data center network based on traffic prediction according to claim 2, wherein obtaining the data traffic demand D_sd between different racks in the network in step SS1 specifically comprises: defining the predicted traffic request quantity between any pair of racks s and d as D_sd, with the data flow f_sd between each pair of rack nodes being less than or equal to the traffic request quantity D_sd, and predicting the data traffic demand D_sd between different racks at different times within a period.
4. The method for optimizing the topology of the wireless data center network based on traffic prediction according to claim 1, wherein the step SS2 specifically includes:
step SS 21: selecting an extension path, defining a benefit function, and evaluating each extension path, wherein the benefit function of the extension path is described as follows:
[Formula: see image FDA0002643619430000022]
wherein D_sd is the traffic demand between rack node s and node d, h_sd is the minimum hop count between node s and node d in the current network topology, and Σ_ij b_ij c_ij refers to the power consumption in the network, i.e., the overhead of the network; correspondingly, if an extension path is selected to update the current network topology, the difference between the updated network benefit value and the current network topology benefit value is
[Formula: see image FDA0002643619430000031]
Step SS 22: searching a set M of the extended path updating activation links, and constructing an optimal network topology structure, specifically comprising:
step SS 221: finding an extended path under the node degree limiting condition, and selecting a link needing to be activated;
Step SS222: judging, according to the network benefit value, whether the selected link can update the network topology so as to improve the performance of the network.
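The benefit function u and its increment Δu are defined by the formula images referenced in claim 4 and are not reproduced here. As a hedged sketch only, the snippet below assumes one common form, demand served per hop minus an alpha-weighted overhead term Σ b_ij c_ij; the weighting factor and the function names are illustrative and not taken from the patent.

    def network_benefit(D, hop_count, b, c, alpha=1.0):
        # Assumed form: demand-per-hop reward minus alpha-weighted network overhead.
        served = sum(D[(s, d)] / max(hop_count[(s, d)], 1) for (s, d) in D)
        overhead = sum(b.get(e, 0) * c.get(e, 0) for e in c)
        return served - alpha * overhead

    def benefit_delta(D, hops_before, hops_after, b_before, b_after, c, alpha=1.0):
        # Delta-u: benefit of the updated topology minus benefit of the current one.
        return (network_benefit(D, hops_after, b_after, c, alpha)
                - network_benefit(D, hops_before, b_before, c, alpha))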
5. The method for optimizing the topology of the wireless data center network based on traffic prediction according to claim 4, wherein the step SS221 specifically comprises: firstly, selecting a node a whose node degree is smaller than the threshold value; secondly, among all nodes that are adjacent to node a and have not established a link with it, selecting a node b whose node degree is smaller than the threshold value; thirdly, finding all nodes c that have established links with node b, and judging whether the node degree of a node c connected with node b is smaller than the threshold value: if the node degree of node c is greater than or equal to the threshold value, reselecting node b, and if it is smaller than the threshold value, selecting node c as a node in the extension path; then, selecting a node d that is adjacent to node c and has no link established with node c as a node in the extension path, the selected extension path a-b-c-d constituting the selected links to be activated.
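Restated as an illustrative search (not part of the claims), the a-b-c-d selection of claim 5 can be sketched as follows; candidates[i] is an assumed structure listing the racks within FSO reach of rack i, and active is the set of currently activated links stored as sorted rack-pair tuples.

    def find_extension_path(candidates, active, degree, threshold):
        # a: node whose degree is below the threshold
        # b: candidate neighbour of a, not yet linked to a, degree below the threshold
        # c: node already linked to b, degree below the threshold
        # d: candidate neighbour of c, not yet linked to c
        # Links a-b and c-d would be activated; the existing link b-c would be released.
        def linked(x, y):
            return tuple(sorted((x, y))) in active

        for a in candidates:
            if degree[a] >= threshold:
                continue
            for b in candidates[a]:
                if linked(a, b) or degree[b] >= threshold:
                    continue
                for c in candidates[b]:
                    if not linked(b, c) or degree[c] >= threshold:
                        continue
                    for d in candidates[c]:
                        if d not in (a, b) and not linked(c, d):
                            return (a, b, c, d)
        return None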
6. The traffic prediction-based wireless data center network topology optimization method according to claim 4, wherein the step SS222 specifically comprises:
changing the network topology according to the selected extension path, and changing the state of each link along the extension path, namely turning activated links into inactivated links and activating inactivated links, thereby forming a new network topology;
after a new network topology is formed, judging whether the network topology performance can be improved according to the change Δu in the network benefit value before and after the update; if Δu is greater than u, where u is the updated network benefit value, judging that the selected extension path can improve the performance of the network topology; otherwise, if Δu is less than or equal to u, returning to step SS221 to reselect the extension path;
the algorithm terminates when the node degrees of all nodes in the network topology have reached the threshold value.
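An illustrative sketch of the update-and-accept step of claim 6; link states are keyed by sorted rack-pair tuples, the benefit values u would come from the benefit function of claim 4 (for example computed with the hedged sketch above), and the acceptance test follows the claim text literally (Δu compared against the updated benefit value u).

    def apply_extension_path(b, path):
        # Toggle link states along the extension path a-b-c-d:
        # activate links a-b and c-d, and release the existing link b-c.
        a, bn, c, d = path
        updated = dict(b)
        updated[tuple(sorted((a, bn)))] = 1
        updated[tuple(sorted((c, d)))] = 1
        updated[tuple(sorted((bn, c)))] = 0
        return updated

    def accept_update(u_current, u_updated):
        # Acceptance test as stated in claim 6: keep the new topology only if the
        # benefit change delta-u exceeds the updated benefit value u.
        delta_u = u_updated - u_current
        return delta_u > u_updated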
7. The method for optimizing the topology of the wireless data center network based on traffic prediction according to claim 1, wherein the step SS3 specifically includes:
Step SS31: arranging the traffic volumes in descending order, and processing the service demands sequentially from largest to smallest;
Step SS32: for each service, selecting a maximum augmenting path for the service according to a network maximum-flow algorithm, wherein the augmenting path is composed of a series of activated and inactivated links, and the starting link and the terminating link are both inactivated links;
step SS 33: calculating the residual capacity of a link in the network and updating the current state of the network;
Step SS34: terminating the algorithm when no augmenting path can be found in the network;
step SS 35: all data traffic through the network is summed.
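An illustrative sketch of the routing step of claim 7 (steps SS31 to SS35): demands are handled in descending order, and each is routed along augmenting paths of a residual network until none remains. The breadth-first search for augmenting paths and the nested-dictionary capacity representation are assumptions; the claim only requires a network maximum-flow style augmentation.

    from collections import deque

    def route_demands(capacity, demands):
        # capacity: {u: {v: cap}} per directed link; demands: {(s, d): volume}.
        residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
        total = 0.0
        for (s, d), demand in sorted(demands.items(), key=lambda kv: -kv[1]):
            remaining = demand
            while remaining > 1e-9:
                path = _bfs_augmenting_path(residual, s, d)
                if path is None:
                    break
                bottleneck = min(residual[u][v] for u, v in path)
                push = min(bottleneck, remaining)
                for u, v in path:
                    residual[u][v] -= push
                    residual.setdefault(v, {}).setdefault(u, 0.0)
                    residual[v][u] += push
                remaining -= push
                total += push
        return total   # total data traffic carried by the network

    def _bfs_augmenting_path(residual, s, d):
        # Breadth-first search for a path with positive residual capacity.
        parent = {s: None}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            if u == d:
                break
            for v, cap in residual.get(u, {}).items():
                if cap > 1e-9 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if d not in parent:
            return None
        path, v = [], d
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        return list(reversed(path))

Under these assumptions, route_demands returns the total traffic carried, corresponding to the summation of all data traffic through the network in step SS35.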
8. The wireless data center network topology optimization system based on flow prediction is characterized by comprising:
a data traffic demand generation module to: predict the network service traffic within a periodic time period using input parameters to obtain the data service demand D_sd between different racks in the network;
a network topology generation module to: construct, according to the data service demand D_sd between different racks in the network obtained by the data traffic demand generation module, a network topology with maximum throughput and minimum network overhead;
a data route generation module to: establish data routes by sequentially computing and allocating the data-stream size requests passing through each link, based on the network topology obtained by the network topology generation module.
9. The system for optimizing the topology of the wireless data center network based on traffic prediction according to claim 8, wherein the input parameters in the data traffic demand generation module specifically include:
setting the set of racks in the data center network as N, installing FSO transceivers on each rack, the racks communicating with each other through FSO links, and setting the node degree of each rack as the maximum number of FSO links it can connect;
denoting the active state of an FSO link by a binary variable b_ij: if the FSO link is active, b_ij = 1; otherwise b_ij = 0;
All the racks have the same node degree, and the node degree of any rack in the network is less than or equal to the number of the transceivers installed on the rack;
the power consumed by the transmitting end of an FSO link is the overhead c_ij required for constructing the FSO link;
the data traffic aggregated on each FSO link cannot exceed the capacity R_ij of the link;
the amount of data of the flow f_sd from source rack node s to destination rack node d carried over link l_ij is denoted by
[Formula: see image FDA0002643619430000051]
a multi-path splitting routing mode is adopted in the network, and each data stream in the network is divided into a plurality of sub-streams;
the total transmission power consumed by all FSO transceivers in the entire FSO-based wireless data center network topology is the overhead of the network.
10. The system for optimizing the topology of the wireless data center network based on traffic prediction according to claim 9, wherein the data traffic demand generation module obtaining the data traffic demand D_sd between different racks in the network specifically comprises: defining the predicted traffic request quantity between any pair of racks s and d as D_sd, so that the data flow f_sd between each pair of rack nodes is less than or equal to the service request quantity D_sd; and predicting the data traffic demand D_sd between different racks at different times within a period.
11. The system for optimizing the topology of the wireless data center network based on traffic prediction according to claim 8, wherein the network topology generation module specifically comprises:
selecting an extension path, defining a benefit function, and evaluating each extension path, wherein the benefit function of the extension path is described as follows:
[Formula: see image FDA0002643619430000061]
wherein D_sd is the traffic demand between rack node s and node d, h_sd is the minimum hop count between node s and node d in the current network topology, and Σ_ij b_ij c_ij refers to the power consumption in the network, i.e., the overhead of the network; correspondingly, if an extension path is selected to update the current network topology, the difference between the updated network benefit value and the current network topology benefit value is
[Formula: see image FDA0002643619430000062]
searching for the set M of extension paths used to update the activated links, and constructing an optimal network topology structure, specifically comprising:
finding an extended path under the node degree limiting condition, and selecting a link needing to be activated;
judging, according to the network benefit value, whether the selected link can update the network topology so as to improve the performance of the network.
12. The system according to claim 11, wherein, in the network topology generation module, finding an extension path under the node-degree constraint and selecting the link to be activated specifically comprises: firstly, selecting a node a whose node degree is smaller than the threshold value; secondly, among all nodes that are adjacent to node a and have not established a link with it, selecting a node b whose node degree is smaller than the threshold value; thirdly, finding all nodes c that have established links with node b, and judging whether the node degree of a node c connected with node b is smaller than the threshold value: if the node degree of node c is greater than or equal to the threshold value, reselecting node b, and if it is smaller than the threshold value, selecting node c as a node in the extension path; then, selecting a node d that is adjacent to node c and has no link established with node c as a node in the extension path, the selected extension path a-b-c-d constituting the selected links to be activated.
13. The system for optimizing wireless data center network topology based on traffic prediction according to claim 11, wherein the determining, according to the network benefit value, whether the selected link can update the network topology to improve the performance of the network in the network topology generation module specifically includes:
changing the network topology according to the selected extension path, and changing the state of each link along the extension path, namely turning activated links into inactivated links and activating inactivated links, thereby forming a new network topology;
after a new network topology is formed, judging whether the network topology performance can be improved according to the change Δu in the network benefit value before and after the update; if Δu is greater than u, where u is the updated network benefit value, judging that the selected extension path can improve the performance of the network topology; otherwise, if Δu is less than or equal to u, returning to reselect the extension path;
the algorithm terminates when the node degrees of all nodes in the network topology have reached the threshold value.
14. The system for optimizing the topology of the wireless data center network based on traffic prediction according to claim 8, wherein the data route generation module specifically comprises:
arranging the traffic volumes in descending order, and processing the service demands sequentially from largest to smallest;
for each service, selecting a maximum augmenting path for the service according to a network maximum-flow algorithm, wherein the augmenting path is composed of a series of activated and inactivated links, and the starting link and the terminating link are both inactivated links;
calculating the residual capacity of a link in the network and updating the current state of the network;
terminating the algorithm when no augmenting path can be found in the network;
all data traffic through the network is summed.
CN202010847624.2A 2020-08-21 2020-08-21 Wireless data center network topology optimization method and system based on flow prediction Active CN112073983B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010847624.2A CN112073983B (en) 2020-08-21 2020-08-21 Wireless data center network topology optimization method and system based on flow prediction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010847624.2A CN112073983B (en) 2020-08-21 2020-08-21 Wireless data center network topology optimization method and system based on flow prediction

Publications (2)

Publication Number Publication Date
CN112073983A true CN112073983A (en) 2020-12-11
CN112073983B CN112073983B (en) 2022-10-04

Family

ID=73658813

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010847624.2A Active CN112073983B (en) 2020-08-21 2020-08-21 Wireless data center network topology optimization method and system based on flow prediction

Country Status (1)

Country Link
CN (1) CN112073983B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104410582A (en) * 2014-12-10 2015-03-11 国家电网公司 Traffic balancing method for electric power communication network based on traffic prediction
CN104579955A (en) * 2014-12-15 2015-04-29 清华大学 Data center network source routing method and device based on packet granularity
CN107734512A (en) * 2017-09-30 2018-02-23 南京南瑞集团公司 A kind of network selecting method based on the analysis of gray scale relevance presenting levelses
US20190268234A1 (en) * 2018-02-27 2019-08-29 Microsoft Technology Licensing, Llc Capacity engineering in distributed computing systems

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114710439A (en) * 2022-04-22 2022-07-05 南京南瑞信息通信科技有限公司 Network energy consumption and throughput joint optimization routing method based on deep reinforcement learning
CN117499312A (en) * 2023-12-26 2024-02-02 戎行技术有限公司 Network flow management optimization method based on port mapping
CN117499312B (en) * 2023-12-26 2024-03-26 戎行技术有限公司 Network flow management optimization method based on port mapping

Also Published As

Publication number Publication date
CN112073983B (en) 2022-10-04

Similar Documents

Publication Publication Date Title
Liu et al. DRL-R: Deep reinforcement learning approach for intelligent routing in software-defined data-center networks
Sun et al. TIDE: Time-relevant deep reinforcement learning for routing optimization
Osamy et al. An information entropy based-clustering algorithm for heterogeneous wireless sensor networks
CN112073983B (en) Wireless data center network topology optimization method and system based on flow prediction
Ghosh et al. A cognitive routing framework for reliable communication in IoT for industry 5.0
Lei et al. Deep learning based proactive caching for effective wsn-enabled vision applications
Hussain et al. Clonal selection algorithm for energy minimization in software defined networks
Nguyen et al. Efficient virtual network embedding with node ranking and intelligent link mapping
Anandkumar Hybrid fuzzy logic and artificial Flora optimization algorithm-based two tier cluster head selection for improving energy efficiency in WSNs
Mao et al. AdaLearner: An adaptive distributed mobile learning system for neural networks
CN109450587B (en) Spectrum integration processing method, device and storage medium
Zhou et al. Multi-task deep learning based dynamic service function chains routing in SDN/NFV-enabled networks
Saleem et al. Ant based self-organized routing protocol for wireless sensor networks
Cui Research on agricultural supply chain architecture based on edge computing and efficiency optimization
Singh et al. Designing an energy efficient network using integration of KSOM, ANN and data fusion techniques
El Gaily et al. Constrained quantum optimization for resource distribution management
Garg et al. Cluster head selection using genetic algorithm in hierarchical clustered sensor network
WO2017016417A1 (en) System control method and device, controller and control system
Meng et al. Intelligent routing orchestration for ultra-low latency transport networks
Rao et al. An intelligent routing method based on network partition
CN115134928B (en) Wireless Mesh network congestion control method with optimized frequency band route
Almolaa et al. Distributed deep reinforcement learning computations for routing in a software-defined mobile Ad Hoc network
CN113285832B (en) NSGA-II-based power multi-mode network resource optimization allocation method
Xu et al. A Graph reinforcement learning based SDN routing path selection for optimizing long-term revenue
Xuan et al. Deep reinforcement learning-based algorithm for VNF-SC deployment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant