CN116016332A - Distributed congestion control system and method - Google Patents

Distributed congestion control system and method

Info

Publication number
CN116016332A
CN116016332A (application CN202211732567.9A)
Authority
CN
China
Prior art keywords: flowlet, ovs, time, data packet, current
Prior art date
Legal status
Pending
Application number
CN202211732567.9A
Other languages
Chinese (zh)
Inventor
王�锋
Current Assignee
Beijing Lingyun Chuangxiang Technology Co ltd
Original Assignee
Beijing Lingyun Chuangxiang Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Lingyun Chuangxiang Technology Co ltd filed Critical Beijing Lingyun Chuangxiang Technology Co ltd
Priority to CN202211732567.9A priority Critical patent/CN116016332A/en
Publication of CN116016332A publication Critical patent/CN116016332A/en
Pending legal-status Critical Current


Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 — Reducing energy consumption in communication networks
    • Y02D30/50 — Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to the technical field of data center interconnection, and in particular to a distributed congestion control system and method. The system comprises a virtual machine, an OVS switch, border routers, an OVS flowlet module, a random number generator and a management and control center. The OVS flowlet module is used for assembling flowlets and setting the granularity of the burst in each flowlet; the random number generator is used for sending flowlets to different border routers to achieve load balancing of the flowlets; the management and control center is used for setting the minimum burst value of the OVS and setting the timeout of the OVS. The invention associates the forwarding of data packets with flowlets and distributes the flowlets across multiple border routers, so that centralized rate limiting on a single border router is no longer needed, which solves the problem of reduced bandwidth utilization of the physical links.

Description

Distributed congestion control system and method
Technical Field
The invention relates to the technical field of data center interconnection, in particular to a distributed congestion control system and a distributed congestion control method.
Background
In a data center interconnect (DCI) scenario, distributed rate limiting has long been a challenge. For example, in the DCI scenario between a public cloud and a private cloud shown in fig. 1, when a virtual machine in the public cloud sends traffic to a virtual machine in the private cloud, rate limiting is generally implemented on the border routers of the two clouds, since a border router is a node through which the traffic must pass. For reliability and performance, each node typically has a group of border routers comprising multiple router devices, e.g., 4. When the public cloud has multiple data streams to send to the private cloud, different streams may land on different border routers because of the hash policy in front of the border routers. To ensure that the traffic each VM (virtual machine) sends to the private cloud can be limited accurately, the OVS (Open vSwitch) selects the traffic landing on different border routers, merges it according to the public-cloud virtual machine IDs, redirects the merged traffic to one designated border router, applies centralized rate limiting on that router, and then forwards the traffic to the private cloud.
This strategy of centralized rate limiting between multiple DCs (data centers) is quite common and guarantees the accuracy of the rate limit, but in actual operation it causes certain problems:
1. The redirection of traffic from the OVS toward a single border router reduces the actual bandwidth between the OVS and the border routers, resulting in reduced bandwidth usage of the physical links.
2. When a certain virtual machine emits a particularly large data stream, pinning that stream to one border router easily makes the router a performance bottleneck. In particular, if the user sends small packets, the high pps (packets per second, i.e. the number of data packets processed per second) can drive the border router CPU to a sustained 100%, after which data packets can no longer be processed and the upper-layer application suffers packet loss. (Per-packet limiting means counting the messages passing over a given CPU and discarding those exceeding a threshold; per-flow limiting means counting, on a given CPU, the flows hashed to that CPU and discarding flows exceeding the threshold.)
In essence, centralized rate limiting on the border router means that when the border router forwards data between VMs (i.e., virtual machines) in two different DCs, it must be aware of the existence of the VMs and limit at flow granularity (flow based: one flow is a hashed five-tuple data flow) or packet granularity (packet based). Since flows and packets are generated at the VM level, this produces the problems above.
Disclosure of Invention
The aim of the invention is as follows: by exploiting the flowlet phenomenon in the TCP protocol (Transmission Control Protocol; the phenomenon that the sending end transmits a large amount of data in one burst), the data packets of a VM are decoupled from the border routers. The forwarding of data packets is associated with flowlets, and the flowlets are distributed by hash across multiple border routers, so that centralized rate limiting on the border routers is not needed. On this basis, a distributed congestion control system and method are provided.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
a distributed congestion control system comprises a virtual machine, an OVS switch, a boundary router, an OVS flowlet module, a random number generator and a management and control center,
the OVS flowlet module is deployed in an OVS switch and is used for assembling flowlets and setting granularity of burst (TCP sender sudden sending message) in each flowlet;
the random number generator is used for sending the flowlets to different boundary routers to realize load balancing of the flowlets;
the control center is used for setting a minimum burst value in the OVS flowlet module and setting the interval time timeout between two bursts in the OVS flowlet module.
Preferably, the management and control center further comprises a user management interface, wherein the user management interface provides an input/output interface for setting minimum burst values for different tcp window algorithms.
Based on the same conception, a distributed congestion control method is also provided, which constructs the distributed congestion control system described above and performs the following steps:
S1, initializing the OVS switch, and downloading from the management and control center the limit values of each virtual machine instance, including the burst value flowlet_bw_limit and the interval timeout between two bursts;
S2, the OVS switch receives a data packet from the virtual machine and records the current timestamp current_time;
S3, the OVS switch reads the last_arrival_time hash table in the OVSdb and retrieves the arrival time of the previous data packet of the flow to which the current packet belongs; if such a record exists, the current packet does not belong to a new flow and only the last_arrival_time field is updated; otherwise the current packet belongs to a new flow, and a new record is added to the last_arrival_time hash table;
S4, if the current data packet does not belong to a new flow, current_time is read and it is judged whether the interval between current_time and last_arrival_time is larger than the timeout set in step S1; if the interval is larger than the timeout, a new flowlet is triggered, otherwise the packet still belongs to the current flowlet;
S5, if the data packet still belongs to the current flowlet, it is sent normally within the flowlet; if it starts a new flowlet, the bandwidth of all previous flowlets is accumulated, and if the total flowlet bandwidth is larger than the rate-limit value, the packet is discarded and triggering the new flowlet is abandoned, thereby implementing the rate limit.
In summary, due to the adoption of the above technical scheme, the beneficial effects of the invention are as follows:
The method changes the existing strategy of centralized rate limiting on the border routers between multiple DCs (data centers). It uses the flowlet phenomenon in the TCP protocol to associate the forwarding of data packets with flowlets and distributes the flowlets by hash across multiple border routers, so centralized rate limiting on a border router is no longer needed. At the same time the accuracy of rate limiting is preserved, the risk of a border router CPU running at a sustained 100% is reduced, the problem of reduced physical-link bandwidth utilization is solved, and the physical bandwidth is fully utilized.
Drawings
FIG. 1 is a DCI interconnection scenario between public and private clouds;
FIG. 2 is a schematic diagram of a combination of flowlet messages in example 1;
FIG. 3 is a graph showing the relationship between transmission data of tcp and time in example 1;
fig. 4 is a diagram of a distributed congestion control apparatus composition configuration in embodiment 1;
FIG. 5 is a flow chart of implementing rate limiting in the management and control center in example 1;
FIG. 6 is a bell-shaped function of the Fourier transform in example 1.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Example 1
The Transmission Control Protocol (TCP) is a connection-oriented, reliable, byte-stream-based transport-layer communication protocol defined by IETF RFC 793.
To ensure reliable transmission, TCP gives every packet a sequence number, which also ensures that packets are delivered to the receiving entity in order. The receiving entity then sends back an acknowledgement (ACK) for the successfully received bytes; if the sending entity does not receive an acknowledgement within a reasonable round-trip time (RTT), the corresponding data (assumed lost) is retransmitted.
Flow control and congestion control are realized with a sliding window whose size is adjusted by algorithm.
(1) Sliding window protocol
The TCP sliding-window technique regulates the transfer of data between two hosts by dynamically changing the window size. The sliding-window protocol is the flow-control method used by TCP. It allows a sender to transmit multiple packets in succession before stopping to wait for acknowledgements. Because the sender does not have to stop and wait after every packet, the protocol speeds up data transmission. Only when the receive window slides forward (and an acknowledgement is sent at the same time) can the send window slide forward. The windows at the sending and receiving ends continuously slide forward according to this rule, hence the name sliding-window protocol.
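The effect of the sliding window can be illustrated with a small simulation. The following Python sketch (the function and variable names are illustrative, not from the patent) models a sender that keeps at most `window_size` packets outstanding and slides the window forward as each cumulative acknowledgement arrives:

```python
def sliding_window_send(packets, window_size):
    """Simulate a sender that keeps at most `window_size` unacked packets."""
    sent_order = []          # order in which packets go on the wire
    base = 0                 # lowest unacknowledged sequence number
    next_seq = 0             # next sequence number to send
    while base < len(packets):
        # Send while the window is open.
        while next_seq < base + window_size and next_seq < len(packets):
            sent_order.append(packets[next_seq])
            next_seq += 1
        # Receiver acks the lowest outstanding packet; window slides forward.
        base += 1
    return sent_order

print(sliding_window_send(list(range(6)), window_size=3))
# → [0, 1, 2, 3, 4, 5]  (all six sent, at most 3 outstanding at any time)
```

The point of the sketch is that the sender emits up to three packets before the first acknowledgement is processed, rather than stopping after each one.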
(2) Congestion control algorithm
Congestion control mainly comprises four algorithms: 1) slow start, 2) congestion avoidance, 3) congestion occurrence, 4) fast recovery. The parameters of these algorithms may be set by the application layer or in the server operating system kernel.
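The interplay of the four phases can be sketched with a toy congestion-window trace under a simplified Reno-style model. This is an assumption-laden teaching sketch, not the kernel's implementation; real kernels expose the relevant parameters through configuration interfaces rather than a function like this:

```python
def cwnd_trace(rtts, ssthresh=8, loss_at=frozenset()):
    """Congestion window (in segments) per RTT under a simplified Reno model."""
    cwnd, trace = 1, []
    for rtt in range(rtts):
        trace.append(cwnd)
        if rtt in loss_at:                # 3) congestion occurs: loss detected
            ssthresh = max(cwnd // 2, 2)  # halve the slow-start threshold
            cwnd = ssthresh               # 4) fast recovery: resume at ssthresh
        elif cwnd < ssthresh:             # 1) slow start: exponential growth
            cwnd *= 2
        else:                             # 2) congestion avoidance: linear growth
            cwnd += 1
    return trace

print(cwnd_trace(8, ssthresh=8, loss_at={5}))
# → [1, 2, 4, 8, 9, 10, 5, 6]
```

The trace shows the window doubling during slow start, growing linearly once it reaches the threshold, and halving when a loss event occurs at the fifth RTT.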
The study of flowlets stems from a special burstiness phenomenon in TCP transmission. In 1986, R. Jain et al. studied TCP performance in the MIT campus network and found that a TCP data stream does not flow as smoothly as water; rather, there is a period of idle time between one burst (a burst of messages) and the next. They gave this phenomenon a vivid name: packet trains — the data messages sent by a host travel as if loaded in the cars of a train.
Interpretation of "flowlet": the word is composed of flow plus the suffix -let. A flow here is a TCP flow, i.e. a TCP data transmission channel fixed by source and destination IP addresses, source and destination ports, and protocol. A flowlet is the level below a flow: a long-lived TCP connection may last more than a minute, whereas a single flowlet is a short burst lasting only a few microseconds.
The finer a data stream is split, the more distinct the small bursts (clusters of a group of data packets) become within the same time interval. For example, a flow might normally last 250 ms, with a 1 ms interval before the next flow is sent; within about 1005 ms, 4 flows can then be seen. When one of these flows is sliced at microsecond granularity, it can be seen that the frequency of packet transmission within the flow is not uniform: small bursts (clusters of packets) are observed at microsecond scale inside the flow, and such a burst is a flowlet.
A flowlet is therefore a concept between the hashed five-tuple (i.e. a flow) and a packet: it is the combination of the messages in one burst of the same flow. A schematic diagram of the flowlet message combination is shown in fig. 2, where δ is the idle time between two flowlets.
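The role of the idle time δ can be sketched in code: given the packet arrival timestamps of one flow, any gap larger than a timeout starts a new flowlet. The function and variable names below are illustrative assumptions, not identifiers from the patent:

```python
def split_into_flowlets(arrival_times_us, timeout_us):
    """Group packet arrival timestamps (microseconds) of one TCP flow into
    flowlets: a gap larger than `timeout_us` starts a new flowlet."""
    flowlets = []
    current = []
    last = None
    for t in arrival_times_us:
        if last is not None and t - last > timeout_us:
            flowlets.append(current)   # idle gap exceeded: close the flowlet
            current = []
        current.append(t)
        last = t
    if current:
        flowlets.append(current)
    return flowlets

times = [0, 5, 9, 700, 705, 2000]      # three bursts separated by idle gaps
print(split_into_flowlets(times, timeout_us=300))
# → [[0, 5, 9], [700, 705], [2000]]
```

With a 300 µs timeout, the six packets fall into three flowlets, matching the burst/idle pattern of fig. 2.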
Flowlets make it possible to control bandwidth usage accurately and to occupy the physical bandwidth as fully as possible. Moreover, by the time each flowlet starts to be sent, the previous flowlet has already reached the receiving end, so no TCP reordering problem arises. (A switch easily causes TCP reordering when rate limiting per message: several TCP messages of one flow are distributed to different devices and different ports of a group of border routers and travel different paths, which produces the reordering problem.)
For example, fig. 3 shows a graph of TCP transmission data versus time, captured by packet sniffing with the CUBIC TCP congestion algorithm. The vertical axis is the TCP sequence number, which can be read as the amount of data sent; the horizontal axis is time. It is evident from fig. 3 that there is a burst duration followed by an intermediate idle time: a burst is sent first, then an idle period, then sending continues. The flow rate gradually increases over time.
Therefore, if the data stream sent by each virtual machine can be assembled into a group of flowlets — through the design of the sending application, the design of the receive window size, the design of the kernel TCP implementation, or the selection of different TCP congestion algorithms — then the rate limit of each virtual machine's traffic can be realized on the OVS, without any awareness of the physical-layer network devices and links.
OVS stands for Open vSwitch, i.e. an open virtualized switch, and is made up of one or more OVS modules.
Open vSwitch is virtualization software that provides networking.
There are many OVS modules — such as the iptables module in the figures, which is used to whitelist packets entering and leaving.
Hereinafter, "OVS module" refers only to the flowlet module placed in the OVS switch as described herein.
Open vSwitch, abbreviated OVS, is high-quality, multi-layer virtual switching software, as described on its official website (http://openvswitch.org/). Its purpose is to support large-scale network automation through programmatic extension, while also supporting standard management interfaces and protocols.
With the spread of virtualization, ever more virtual switches must be deployed, but expensive closed-source virtual switches put users in a difficult position. The multi-layer virtual software switch Open vSwitch was developed by Nicira Networks, with its main implementation in portable C code. It follows the Apache 2.0 open-source license, can be used in production environments, supports distributed management across physical servers, programmatic extension, large-scale network automation and standardized interfaces, and realizes a software switch with functions similar to most commercial closed-source switches.
The OVS architecture is divided into three layers: kernel space, user space and the configuration management space. The datapath in kernel space is responsible for fast switching of packets according to the flow table; the vswitch module in user space supports various network protocols and configures the switch according to information stored in the OVSdb (the component that stores time-series data for the OVS flowlet module, referred to as OVSdb); and the configuration management layer provides the user with tools such as configuration of the switch kernel and addition and deletion of entries in the OVSdb database.
Based on the above, a new congestion control system architecture is formed by adding an OVS flowlet module, a random number generator and a management and control center to the original DCI interconnection scenario, as shown in fig. 4:
1. OVS flowlet module: deployed in the OVS, either as an extension of the original OVS or as an independent module. It is used for assembling flowlets and setting the granularity of the burst in each flowlet.
2. Random number generator: used for randomly distributing flowlets to different border routers, thereby achieving load balancing of the flowlets.
3. Management and control center: used for associating virtual machine instance IDs from the virtualization management platform, obtaining input from the user management interface, setting the minimum burst value of the OVS, and setting the timeout (i.e. idle time) of the OVS.
4. User management interface: provides an input/output interface for setting the minimum burst value for different TCP window algorithms.
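The random number generator's role can be sketched as follows: each newly triggered flowlet is assigned a border router uniformly at random, so that over many flowlets the load spreads roughly evenly. The router names and the seeded RNG below are illustrative assumptions:

```python
import random
from collections import Counter

def assign_flowlet(border_routers, rng):
    """Pick a border router for a newly triggered flowlet at random."""
    return rng.choice(border_routers)

routers = ["br-1", "br-2", "br-3", "br-4"]
rng = random.Random(42)                       # seeded for reproducibility
assignments = [assign_flowlet(routers, rng) for _ in range(1000)]
print(Counter(assignments))                   # roughly 250 flowlets per router
```

Because reordering only matters within a flowlet, randomizing per flowlet (rather than per packet) preserves TCP ordering while still balancing load across the routers.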
The implementation flow is as follows:
0. Initialize the OVS and download from the management and control center the limit values of each virtual machine instance, including the burst value flowlet_bw_limit and the interval timeout between two bursts.
1. The OVS receives a packet from the virtual machine and records the current timestamp current_time.
2. The OVS reads the last_arrival_time hash table in the OVSdb and retrieves the arrival time at the switch of the previous packet of the flow to which this packet belongs. If the record exists, the flow is not new and only the last_arrival_time field is updated; otherwise the packet belongs to a new flow, and a new record is added to the last_arrival_time hash table.
3. If the flow is not new, current_time is read, and it is judged whether the interval between current_time and last_arrival_time is larger than the timeout value currently set by the OVS. If the interval is larger than timeout, a new flowlet is triggered; otherwise the packet still belongs to the current flowlet.
4. If the packet belongs to the current flowlet, it is processed and sent normally within the flowlet. If it starts a new flowlet, the existing bandwidth of all previous flowlets is accumulated; if the accumulated flowlet bandwidth Σf(n+1) is larger than the rate-limit value (Σf(n+1) > flowlet_bw_limit), the packet is discarded and triggering the new flowlet is abandoned, thereby implementing the rate limit.
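Steps 0-4 above can be sketched in Python as follows. This is a minimal single-process model, not OVS code: the class name, the in-memory dictionaries standing in for the OVSdb hash table, and the absence of any periodic reset of the accumulated bandwidth are all simplifying assumptions:

```python
class FlowletLimiter:
    """Per-flow flowlet assembly and rate limiting (steps 0-4, simplified)."""

    def __init__(self, flowlet_bw_limit, timeout):
        self.flowlet_bw_limit = flowlet_bw_limit  # step 0: limit in bytes
        self.timeout = timeout                    # step 0: idle gap in seconds
        self.last_arrival_time = {}               # stand-in for the OVSdb table
        self.finished_bytes = {}                  # bytes of completed flowlets
        self.open_bytes = {}                      # bytes of the open flowlet

    def handle_packet(self, flow_id, size, current_time):
        """Steps 1-4: return True to forward the packet, False to drop it."""
        last = self.last_arrival_time.get(flow_id)      # step 2: lookup
        self.last_arrival_time[flow_id] = current_time  # update the record
        if last is None:                                # a new flow
            self.finished_bytes[flow_id] = 0
            self.open_bytes[flow_id] = size
            return True
        if current_time - last <= self.timeout:         # step 3: same flowlet
            self.open_bytes[flow_id] += size
            return True
        # Step 4: the idle gap exceeded the timeout, so this packet would start
        # a new flowlet; accumulate the bandwidth of all previous flowlets.
        self.finished_bytes[flow_id] += self.open_bytes[flow_id]
        self.open_bytes[flow_id] = 0
        if self.finished_bytes[flow_id] > self.flowlet_bw_limit:
            return False                                # drop: limit exceeded
        self.open_bytes[flow_id] = size                 # open the new flowlet
        return True

limiter = FlowletLimiter(flowlet_bw_limit=1000, timeout=0.001)
print(limiter.handle_packet("f1", 600, 0.0000))  # new flow              -> True
print(limiter.handle_packet("f1", 300, 0.0005))  # same flowlet          -> True
print(limiter.handle_packet("f1", 100, 0.0100))  # new flowlet, 900 sent -> True
print(limiter.handle_packet("f1", 100, 0.0300))  # total 1000, at limit  -> True
print(limiter.handle_packet("f1", 100, 0.0500))  # total 1100 > limit    -> False
```

A production version would also age out the accumulated counters per measurement period; that bookkeeping is omitted here to keep the control flow of steps 1-4 visible.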
The core inventive point of the scheme is the OVS flowlet module, which assembles flowlets and sets the granularity of the burst in each flowlet, thereby solving the problem of low bandwidth utilization.
What causes the low utilization:
A. If there are multiple lines between the two environments, each line has a theoretical maximum under redundancy. For example:
With two lines between cloud A and cloud B under a load-balancing (ECMP) policy, the traffic on each line may be at most 50%; beyond 50%, the redundancy function is lost. If the utilization of both lines exceeds 50% and one line breaks, the traffic of the broken line is shifted to the other line, the total demand exceeds 100% of that line's bandwidth, and packet loss occurs instantly. The essential reason is that hash-based ECMP routing algorithms cannot efficiently handle topology and traffic asymmetry. Topology asymmetry is mainly caused by faults, such as the line break in this example.
In practice, however, even 50% is not achieved, because besides topology asymmetry there is also traffic asymmetry: data exchange among the many IPs in environments A and B follows the 80/20 rule (assuming environments A and B each have 100 IP addresses exchanging traffic between the two environments, 20% of the IPs contribute 80% of the traffic). For example, under the FTP protocol, unidirectional traffic may persist (a long connection) on one fixed line even though there is little traffic on the other line.
With the flowlet-based control mode, i.e. by using the characteristics of flowlets, the long connections that account for 80% of the traffic are broken up by the controller and distributed evenly over the different lines, approaching the theoretical utilization of 50% per line.
B. There are multiple lines of unequal bandwidth between the public cloud and the private cloud: as shown in fig. 4, there are 4 lines (1/2/3/4), which may be a 10G fiber, a 1G fiber, a 100M Internet VPN, or even a 10M 4G wireless link. These bandwidths are fully utilized as follows:
First, the management and control center sets a parameter X, which is associated with the bandwidth of the line.
X: the threshold parameter. X is incremented by the packet size (in bytes) for each packet sent over the link, and is periodically decremented by a multiplicative factor α between 0 and 1: X ← X(1−α). Over a period T this approximates a bell-shaped function of the Fourier transform, as shown in fig. 6.
The calculation formula of the threshold parameter X is:
X = β * R;
T: the period parameter; a measurement is taken every period T (cf. the common 95th-percentile billing, i.e. one sample every 5 minutes; for a switch, T is typically on the order of ms, e.g. T = 1 ms).
α: a multiplicative factor between 0 and 1, also called the random parameter, generated by the controller.
β = T/α. β acts as a first-order low-pass filter of the system, representing the frequency response between the period T and the random parameter α.
R: the current traffic on a given line; for example, on a 10G fiber whose bandwidth is 10G, if the traffic at time t is 8.6 Gbps, then this 8.6 Gbps is R.
Q: the congestion parameter, obtained directly from the SDN switch flow-table entries (for example, executing dpctl dump-flows, e.g. in Mininet, shows the flow-table information in the current SDN controller). Q is an integer between 0 and 10: 0 means the link is empty, and 10 means link congestion has reached 100%.
t: the flowlet inactivity time, i.e. the timeout. The timeout is an empirical value obtained through repeated experiments; current tests use timeout = 300 µs to 1 ms.
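The update rule for the threshold parameter X described above — increment by each sent packet's size, then a periodic multiplicative decay X ← X(1−α) — can be sketched as follows. The exact ordering of increments and decay within a period is an assumption, as is the traffic pattern used in the example:

```python
def update_threshold(X, packet_sizes, alpha):
    """One period T: add each sent packet's size (bytes) to X, then apply
    the multiplicative decay X <- X * (1 - alpha) once."""
    for size in packet_sizes:
        X += size
    return X * (1 - alpha)

X = 0.0
for _ in range(3):                     # three periods of identical traffic
    X = update_threshold(X, [1500, 1500, 64], alpha=0.5)
print(X)
# → 2681.0, approaching the steady state S*(1-alpha)/alpha = 3064.0
#   where S = 3064 bytes is the traffic sent per period
```

Under steady traffic of S bytes per period, X converges to S(1−α)/α, so the decay factor α controls how quickly the threshold forgets past traffic, consistent with β = T/α acting as a low-pass filter.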
The core of the invention is to find a threshold parameter X such that the congestion parameter Q is as large as possible; Q is the resulting quantity. After many measurements, an objective function of approximately this shape is represented by a Fourier series:
[The Fourier-series objective function is given only as an image in the original publication.]
the system performance of the invention can be stabilized using the following parameters: q=3 to 6, β=100 us to 500us, t=300 us to 1ms.
Default parameter values are: q=3, β=160 us, t=500 us.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.

Claims (3)

1. A distributed congestion control system, comprising a virtual machine, an OVS switch, border routers, an OVS flowlet module, a random number generator and a management and control center, wherein:
the OVS flowlet module is deployed in the OVS switch and is used for assembling flowlets and setting the granularity of the burst in each flowlet;
the random number generator is used for sending the flowlets to different border routers to achieve load balancing of the flowlets;
the management and control center is used for setting a minimum burst value in the OVS flowlet module and setting the interval timeout between two bursts in the OVS flowlet module.
2. The distributed congestion control system according to claim 1, wherein the management and control center further comprises a user management interface providing an input/output interface for setting the minimum burst value for different TCP window algorithms.
3. A method of distributed congestion control, constructing the distributed congestion control system of claim 1 and performing steps comprising:
S1, initializing the OVS switch, and downloading from the management and control center the limit values of a virtual machine instance, including the burst value flowlet_bw_limit and the interval timeout between two bursts;
S2, the OVS switch receives a data packet from the virtual machine and records the current timestamp current_time;
S3, the OVS switch reads the last_arrival_time hash table in the OVSdb and retrieves the arrival time of the previous data packet of the flow to which the current packet belongs; if such a record exists, the current packet does not belong to a new flow and only the last_arrival_time field is updated; otherwise the current packet belongs to a new flow, and a new record is added to the last_arrival_time hash table;
S4, if the current data packet does not belong to a new flow, reading current_time and judging whether the interval between current_time and last_arrival_time is larger than the timeout set in step S1; if the interval is larger than the timeout, triggering a new flowlet, otherwise the packet still belongs to the current flowlet;
S5, if the data packet still belongs to the current flowlet, sending it normally within the flowlet; if it starts a new flowlet, accumulating the bandwidth of all previous flowlets, and if the total flowlet bandwidth is larger than the rate-limit value, discarding the packet and abandoning the triggering of the new flowlet, thereby implementing the rate limit.
CN202211732567.9A 2022-12-30 2022-12-30 Distributed congestion control system and method Pending CN116016332A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211732567.9A CN116016332A (en) 2022-12-30 2022-12-30 Distributed congestion control system and method


Publications (1)

Publication Number Publication Date
CN116016332A true CN116016332A (en) 2023-04-25

Family

ID=86026459

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211732567.9A Pending CN116016332A (en) 2022-12-30 2022-12-30 Distributed congestion control system and method

Country Status (1)

Country Link
CN (1) CN116016332A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009130759A (en) * 2007-11-27 2009-06-11 Fujitsu Ltd Line communication method and apparatus
US20170230298A1 (en) * 2016-02-09 2017-08-10 Flowtune, Inc. Network Resource Allocation
CN108243111A (en) * 2016-12-27 2018-07-03 华为技术有限公司 The method and apparatus for determining transmission path
CN110061929A (en) * 2019-03-10 2019-07-26 天津大学 For data center's load-balancing method of asymmetrical network
US20220321469A1 (en) * 2021-03-30 2022-10-06 Amazon Technologies, Inc. Dynamic routing for peered virtual routers



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination