CN112152936B - Intra-network control for explicit rate computation - Google Patents

Intra-network control for explicit rate computation

Info

Publication number
CN112152936B
Authority
CN
China
Prior art keywords
link
rate
data
component subsystem
component
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910678786.5A
Other languages
Chinese (zh)
Other versions
CN112152936A (en)
Inventor
蔡维德 (Cai Weide)
蔡维纲 (Cai Weigang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Tiande Xinlian Information Technology Co ltd
Tianmin Qingdao International Sandbox Research Institute Co ltd
Zeu Crypto Networks Inc
Original Assignee
Qingdao Tiande Xinlian Information Technology Co ltd
Tianmin Qingdao International Sandbox Research Institute Co ltd
Zeu Crypto Networks Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Tiande Xinlian Information Technology Co ltd, Tianmin Qingdao International Sandbox Research Institute Co ltd, Zeu Crypto Networks Inc
Priority to CN201910678786.5A
Publication of CN112152936A
Application granted
Publication of CN112152936B
Legal status: Active
Anticipated expiration


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 - Traffic control in data switching networks
    • H04L47/10 - Flow control; Congestion control
    • H04L47/20 - Traffic policing
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 - Traffic control in data switching networks
    • H04L47/10 - Flow control; Congestion control
    • H04L47/25 - Flow control; Congestion control with rate being modified by the source upon detecting a change of network conditions

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present invention provides in-network control for explicit rate computation. Explicit rates are calculated by in-network control to maximize the overall throughput of the network system and/or to minimize network latency. A component subsystem of the in-network control system maintains and manages an explicit rate table for multiple streams over one data link, and may also maintain and manage a measured data rate table for those streams. The component subsystem may receive a message containing an explicit rate for a certain flow. The received explicit rate may be sent from one endpoint of the stream, or from a component subsystem of the in-network control system. The explicit rate may be carried in an FC (flow control) packet or message. The FC packets may be sent over a separate data channel, or path, which may be different from the path of the flow.

Description

Intra-network control for explicit rate computation
Technical Field
The present invention relates to a method, system and protocol for an in-network control system for a data network, and more particularly to an in-network control mechanism aimed at maximizing the overall throughput of the network system and minimizing the application delay on the network system.
Background
Traffic engineering (TE) refers to optimizing the performance of a data network by dynamically adjusting the routing and behavior of network data flows. Thus, TE systems are control systems that optimize the performance of the data network. As of this writing, the network industry's TE methods focus on alleviating congestion and balancing load. In addition, conventional TE methods optimize network performance from an end-to-end or partial-network perspective. In contrast, the present invention aims to optimize the throughput and delay of the whole system through in-network control.
In many current TE systems, feedback (observation) and control are limited to the network edge. This limitation artificially inhibits the performance of TE systems. Since congestion and packet loss typically occur inside the network, such events should also be detected, and corrective actions taken, in a timely manner inside the network. Although this is clear, the network industry has adhered to the end-to-end principle, and has forgone better methods, for roughly 30 years. Recently, as Google, Microsoft, Amazon, and others have introduced new SDN (software-defined networking) technologies, this trend has begun to reverse. In the SDN movement, control centralization and in-network control play an important role.
Some SDN advocates have begun to revisit "ancient" technologies such as circuit switching, multi-flow optimization, and max-min (maxmin) rate allocation. The design of these techniques ignores the end-to-end principle.
The end-to-end principle was a business strategy for developing the internet, and its usefulness has now run its course. The internet should now be treated as a complex control system, with control theory applied to it. The SDN movement holds that the internet requires a new design from scratch, and a correct new design should use time-tested control theory. The present invention aims to redesign the TE system of the internet through in-network control that computes explicit rates, so as to maximize the overall throughput of the network system and/or minimize network delay. Hereinafter, a stream or a data stream (flow) in a network system is a sequence of data packets from one endpoint to another endpoint; the terms stream and data stream are used interchangeably.
Disclosure of Invention
Aspects of the present invention relate to providing systems, protocols, and methods for in-network control to maximize the overall throughput of a network system and/or to minimize network latency on a network system.
In one embodiment, an in-network control system of a network system includes a plurality of component subsystems distributed throughout the network system. In one embodiment, a component subsystem of the in-network control system maintains and manages an explicit rate table (in data units per time unit) for a plurality of streams over a data link, where a stream is a sequence of packets from one endpoint to another; at least one rate is maintained per flow. In addition, the component subsystem may maintain and manage a measured data rate table (in data units per time unit) for multiple streams over one data link.
In addition, the component subsystem may receive a message containing an explicit rate for a stream of packets from one endpoint to another; the explicit rate is the rate at which the sender of the stream intends to send to the receiver. The received explicit rate may be sent from one endpoint of the stream, or from a component subsystem of the in-network control system. The explicit rate may be carried in an FC (flow control) packet or message. The FC packets may be sent over a separate data channel, or path, which may be different from the path of the flow. The FC packet may be sent from one endpoint of the flow or from a component subsystem of the in-network control system. The same path may also be used for the FC packets of a data stream and the regular packets of that data stream. The component subsystem is located near the data link through which the stream passes.
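For concreteness, the following is a minimal sketch of how a component subsystem might keep its per-link rate tables and record the explicit rate carried in an FC packet. The class and method names (FlowEntry, LinkRateTable, on_fc_packet, on_measurement) are illustrative assumptions, not terminology from the invention.

```python
# Hypothetical sketch of a component subsystem's per-link rate tables;
# names and structures are illustrative assumptions, not the patented design.
from dataclasses import dataclass, field

@dataclass
class FlowEntry:
    flow_id: str          # identifies a stream between two endpoints
    explicit_rate: float  # rate the sender intends to send, data units/time unit
    measured_rate: float  # rate observed at this link, data units/time unit

@dataclass
class LinkRateTable:
    link_id: str
    flows: dict = field(default_factory=dict)  # flow_id -> FlowEntry

    def on_fc_packet(self, flow_id: str, explicit_rate: float) -> None:
        """Record the explicit rate carried in an FC (flow control) packet.

        The FC packet may come from an endpoint of the flow or from another
        component subsystem, possibly over a path separate from the flow's."""
        entry = self.flows.setdefault(
            flow_id, FlowEntry(flow_id, explicit_rate, measured_rate=0.0))
        entry.explicit_rate = explicit_rate

    def on_measurement(self, flow_id: str, measured_rate: float) -> None:
        """Update the measured data rate table for a flow on this link."""
        if flow_id in self.flows:
            self.flows[flow_id].measured_rate = measured_rate
```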
In one embodiment, the component subsystem calculates and maintains a link rate (in data units per time unit) for a data link in the network system. In one embodiment, the link rate of the data link is calculated (or updated) as follows: from the bandwidth available to the link, subtract the sum of the data rates (in data units per time unit) of all constrained flows on the link, then divide by the number of unconstrained flows on the link. A constrained flow is determined as follows: a data stream is constrained on a link if it passes through the link and the data rate measured at the link (or its explicit rate) is less than the link rate of the link maintained by the component subsystem; otherwise the data stream is unconstrained on the link.
For a data link, the link rate is calculated using the following formula: r_link = (c_link - f_c) / (N - n_c), where r_link is the link rate, c_link is the available bandwidth (capacity) of the link, f_c is the sum of the rates of all constrained data streams on the link, N is the total number of data streams flowing through the link, and n_c is the number of constrained data streams flowing through the link.
In a specific example, the link rate of the data link is used as a target (or reference) data rate for all unconstrained data streams on the link. In one embodiment, the calculation of the link rate for the data link is used to maintain fairness among all unconstrained data flows on the link.
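For illustration, the link-rate formula above could be evaluated as in the following sketch. Because whether a flow counts as constrained depends on the link rate itself, the sketch iterates the formula to a fixed point, in the spirit of max-min rate allocation; the iteration strategy and the function name compute_link_rate are assumptions, not the patented procedure.

```python
def compute_link_rate(capacity: float, flow_rates: dict) -> float:
    """Compute r_link = (c_link - f_c) / (N - n_c) for one data link.

    capacity:   available bandwidth of the link (data units / time unit)
    flow_rates: flow_id -> rate measured (or explicit rate) at this link

    A flow is constrained when its rate is below the current link rate;
    since the constrained set depends on the link rate, the formula is
    iterated until the classification stabilizes.
    """
    n_total = len(flow_rates)
    if n_total == 0:
        return capacity
    link_rate = capacity / n_total  # start from an equal share
    while True:
        constrained = [r for r in flow_rates.values() if r < link_rate]
        n_c = len(constrained)
        if n_c == n_total:
            # every flow is already below the target; keep the current
            # value as the link's reference rate
            return link_rate
        new_rate = (capacity - sum(constrained)) / (n_total - n_c)
        if abs(new_rate - link_rate) < 1e-9:
            return new_rate
        link_rate = new_rate
```

For example, with capacity 100 and flow rates {"a": 10, "b": 10, "c": 50, "d": 50, "e": 50}, the function returns (100 - 20) / 3 ≈ 26.67, the fair target rate for the three unconstrained flows.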
In one embodiment, the in-network control system includes a plurality of controllers located near or connected to network boxes (e.g., network devices, switches, routers, switch-routers, etc.), wherein a controller is responsible for calculating the link rates of all data links connected to its network box. For one link, such a controller is referred to as a link controller. In one specific example, the in-network control system includes a controller box (a box implementing a link controller) connected to the network system. The controller box need not touch or process user-plane (i.e., regular) data packets.
In one particular example, the component subsystem may modify the explicit rate carried in the FC packet or message. In another specific example, a component subsystem or link controller may send a special FC packet or message to the sender of one data stream.
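One plausible way a component subsystem could modify the explicit rate carried in a forwarded FC packet is to clamp it to the link rate the subsystem maintains; the clamping policy and the message shape below are assumptions for illustration only.

```python
def forward_fc(fc_msg: dict, link_rate: float) -> dict:
    """Forward an FC message, possibly modifying its explicit rate.

    Illustrative policy (an assumption, not the patented rule): lower
    the carried explicit rate to this link's rate, so the sender never
    targets more than the bottleneck link's fair share.
    """
    out = dict(fc_msg)  # leave the incoming message untouched
    out["explicit_rate"] = min(fc_msg["explicit_rate"], link_rate)
    return out
```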
In one embodiment, the link controller is placed near or connected to a link whose utilization exceeds 50% for at least 1 hour a day. In another specific example, the link controller is placed in proximity to or connected to wireless transmission devices such as cellular base stations, DAS (distributed antenna system) devices, Wi-Fi devices, and the like. In another specific example, the link controller is located near or connected to an SDN (software-defined networking) or NFV (network function virtualization) box. In some embodiments, the link controller is implemented as software deployed in a network box (such as a network device, switch, router, switch-router, etc.). Such a network box may implement SDN/NFV functionality.
In another specific example, the sender of an endpoint pair sends an FC packet or message to the receiver; the FC packet/message enables the in-network control system to calculate the explicit rate for the endpoint pair. In one specific example, the FC packets/messages for an endpoint pair contain a calculated explicit rate for the endpoint pair.
In one embodiment, the in-network control system maintains a minimum initial rate table for a set of endpoint pairs in the network system; an endpoint pair is allowed to start a new data session at such a minimum initial rate. In one particular example, once the in-network control system receives a request for a new connection between two endpoints, the control system calculates an explicit rate for the new connection, which may differ from the minimum initial rate maintained for the endpoint pair. The calculated explicit rate is then sent to the sender of the endpoint pair.
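A minimal sketch of the connection-setup behavior just described; the table layout, message format, and the policy of starting at no less than the maintained minimum are illustrative assumptions.

```python
# Hypothetical connection-admission sketch; all names are illustrative.
min_initial_rate = {}  # (sender, receiver) -> minimum initial rate

def on_new_connection(sender, receiver, compute_explicit_rate, send_fc):
    """Handle a request for a new connection between two endpoints.

    compute_explicit_rate: callable giving the explicit rate the
        in-network control system calculates for this endpoint pair.
    send_fc: callable that delivers an FC message to the sender.
    """
    floor = min_initial_rate.get((sender, receiver), 0.0)
    rate = compute_explicit_rate(sender, receiver)
    # The calculated rate may differ from the maintained minimum; here
    # we assume, for illustration, that the session starts at no less
    # than the minimum initial rate for this endpoint pair.
    rate = max(rate, floor)
    send_fc(sender, {"type": "FC",
                     "flow": (sender, receiver),
                     "explicit_rate": rate})
    return rate
```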
Drawings
Fig. 1 is a schematic diagram of in-network control for explicit rate calculation according to an embodiment of the present invention.
Detailed Description
The invention will now be further described by way of example with reference to Fig. 1; the examples in no way limit the scope of the invention.
The in-network control system of the network system includes a plurality of component subsystems distributed throughout the network system, as shown in Fig. 1. A component subsystem of the in-network control system maintains and manages an explicit rate table (in data units per time unit) for a plurality of streams over a data link, where a stream is a sequence of packets from one endpoint to another endpoint; at least one rate is maintained per flow. In addition, a component subsystem can maintain and manage a measured data rate table (in data units per time unit) for multiple streams over a data link.
The component subsystem in Fig. 1 may receive a message containing the explicit rate for a stream of packets from one endpoint to another; the explicit rate is the rate at which the sender of the stream intends to send to the receiver. The received explicit rate may be sent from one endpoint of the stream, or from a component subsystem of the in-network control system. The explicit rate may be carried in an FC (flow control) packet or message. The FC packets may be sent over a separate data channel, or path, which may be different from the path of the flow. The FC packet may be sent from one endpoint of the flow or from a component subsystem of the in-network control system. The same path may also be used for the FC packets of a data stream and the regular packets of that data stream. The component subsystem is located near the data link through which the stream passes.
The component subsystem in Fig. 1 calculates and maintains the link rate (in data units per time unit) of a data link in the network system. The link rate of the data link is calculated (or updated) as follows: from the bandwidth available to the link, subtract the sum of the data rates (in data units per time unit) of all constrained flows on the link, then divide by the number of unconstrained flows on the link. A constrained flow is determined as follows: a data stream is constrained on a link if it passes through the link and the data rate measured at the link (or its explicit rate) is less than the link rate of the link maintained by the component subsystem; otherwise the data stream is unconstrained on the link.
For one data link in Fig. 1, the link rate is calculated with the following formula:
link rate = (a - b) / (c - d)
where a is the available bandwidth (capacity) of the link, b is the sum of the rates of all constrained data streams on the link, c is the total number of data streams flowing through the link, and d is the number of constrained data streams flowing through the link.
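As an illustrative numerical example (not taken from the patent itself): for a link with available bandwidth a = 100 Mb/s carrying c = 5 data streams, of which d = 2 are constrained at 10 Mb/s each (so b = 20 Mb/s), the link rate is (100 - 20) / (5 - 2) ≈ 26.7 Mb/s; each of the three unconstrained streams is then targeted at that rate.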
The link rate of the data link in Fig. 1 is used as a target (or reference) data rate for all unconstrained data streams on the link. The calculation of the link rate for the data link serves to maintain fairness among all unconstrained data streams on the link.
The in-network control system in Fig. 1 includes a plurality of controllers located near or connected to network boxes (e.g., network devices, switches, routers, switch-routers, etc.), wherein a controller is responsible for calculating the link rates of all data links connected to its network box. For one link, such a controller is referred to as a link controller. In one specific example, the in-network control system includes a controller box (a box implementing a link controller) connected to the network system. The controller box need not touch or process user-plane (i.e., regular) data packets.
The component subsystem in Fig. 1 may modify the explicit rate carried in FC packets or messages. In another specific example, a component subsystem or link controller may send a special FC packet or message to the sender of one data stream.
The link controller in Fig. 1 is placed near or connected to a link whose utilization exceeds 50% for at least 1 hour a day. The link controller may be placed in proximity to or connected to wireless transmission devices such as cellular base stations, DAS (distributed antenna system) devices, Wi-Fi devices, and the like. In another specific example, the link controller is located near an SDN (software-defined networking) or NFV (network function virtualization) box, or connected to these devices. In some embodiments, the link controller is implemented as software deployed in a network box (such as a network device, switch, router, switch-router, etc.). Such a network box may implement SDN/NFV functionality.
The sender of an endpoint pair sends an FC packet or message to the receiver; this FC packet/message enables the in-network control system to calculate the explicit rate for the endpoint pair. In one specific example, the FC packets/messages for an endpoint pair contain a calculated explicit rate for the endpoint pair.
The in-network control system of Fig. 1 maintains a minimum initial rate table for a set of endpoint pairs in the network system; an endpoint pair is allowed to start a new data session at such a minimum initial rate. In one particular example, once the in-network control system receives a request for a new connection between two endpoints, the control system calculates an explicit rate for the new connection, which may differ from the minimum initial rate maintained for the endpoint pair. The calculated explicit rate is then sent to the sender of the endpoint pair.

Claims (3)

1. An in-network control system for explicit rate computation, comprising a plurality of component subsystems distributed over a network system, the plurality of component subsystems comprising:
a first component subsystem for maintaining a set of measured data rate tables, the data rate tables being expressed in units of data/time, each rate corresponding to a stream of data packets flowing from one endpoint to another endpoint over a data link, both endpoints being connected to the network system;
a second component subsystem for maintaining and calculating a first link rate, the first link rate expressed in units of data/time, the first link rate being a target data rate for a data stream flowing through the link; wherein:
sending and receiving feedback flow control packets or messages between the first component subsystem and the second component subsystem, or between the two endpoints, or between the first component subsystem/the second component subsystem and the two endpoints;
the first link rate on one of the links is calculated as: the total bandwidth available on the link, minus the sum of the data rates of all data streams constrained on the link, divided by the number of all data streams unconstrained on the link, where a data stream is constrained if it passes through the link and its rate is less than the link rate of the link;
a subset of the first component subsystem maintains a minimum rate table for a plurality of endpoint pairs, each endpoint connected to a network system; the second component subsystem does not process data of any user plane in the network system.
2. An in-network control system for explicit rate computation, comprising a plurality of component subsystems distributed over a network system, the plurality of component subsystems comprising:
a third component subsystem for maintaining a set of explicit rate tables expressed in units of data/time, each rate corresponding to a packet flow from one endpoint to the other endpoint over a data link, both endpoints being connected to the network system;
a fourth component subsystem for maintaining and calculating a second link rate, the second link rate expressed in units of data/time, the second link rate being a target data rate for a data stream passing through the link; and
sending and receiving feedback flow control packets or messages between the third component subsystem and the fourth component subsystem, or between two endpoints, or between the third component subsystem/fourth component subsystem and an endpoint;
the second link rate on one of the links is calculated as: the total bandwidth available on the link, minus the sum of the data rates of all data streams constrained on the link, divided by the number of all data streams unconstrained on the link, where a data stream is constrained if it passes through the link and its rate is less than the link rate of the link;
wherein a subset of the third component subsystem maintains a minimum rate table of a plurality of endpoint pairs, each endpoint connected to the network system; the fourth component subsystem does not process data of any user plane in the network system.
3. An in-network control system for explicit rate computation comprising a plurality of component subsystems disposed near or connected to a data link, the data link having a utilization of greater than 50% for more than 1 hour a day, the plurality of component subsystems comprising:
a fifth component subsystem for maintaining a set of explicit rate tables expressed in units of data/time, each rate corresponding to a packet flow from one endpoint to the other endpoint over a data link, both endpoints being connected to the network system;
a sixth component subsystem for maintaining and calculating a third link rate, the third link rate expressed in units of data/time, the third link rate being a target data rate for a data stream passing through the link; and
sending and receiving feedback flow control packets or messages between the fifth component subsystem and the sixth component subsystem, or between two endpoints, or between the fifth component subsystem/sixth component subsystem and an endpoint;
the third link rate on one of the links is calculated as: the total bandwidth available on the link, minus the sum of the data rates of all data streams constrained on the link, divided by the number of all data streams unconstrained on the link, where a data stream is constrained if it passes through the link and its rate is less than the link rate of the link;
wherein a subset of the fifth component subsystem maintains a minimum rate table of a plurality of endpoint pairs, each endpoint connected to the network system; the sixth component subsystem does not process data of any user plane in the network system.
CN201910678786.5A 2019-07-25 2019-07-25 Intra-network control for explicit rate computation Active CN112152936B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910678786.5A CN112152936B (en) 2019-07-25 2019-07-25 Intra-network control for explicit rate computation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910678786.5A CN112152936B (en) 2019-07-25 2019-07-25 Intra-network control for explicit rate computation

Publications (2)

Publication Number Publication Date
CN112152936A CN112152936A (en) 2020-12-29
CN112152936B (en) 2023-09-12

Family

ID=73892087

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910678786.5A Active CN112152936B (en) 2019-07-25 2019-07-25 Intra-network control for explicit rate computation

Country Status (1)

Country Link
CN (1) CN112152936B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9584418B2 (en) * 2013-10-10 2017-02-28 International Business Machines Corporation Quantized congestion notification for computing environments

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1339209A (en) * 1999-10-02 2002-03-06 Samsung Electronics Co., Ltd. Fair flow controlling method in packet networks
CN104734987A (en) * 2013-12-19 2015-06-24 Shanghai Broadband Technology and Application Engineering Research Center System and method for managing flow in software defined network
CN104243240A (en) * 2014-09-23 2014-12-24 University of Electronic Science and Technology of China SDN (software-defined network) flow measurement method based on OpenFlow
CN105847151A (en) * 2016-05-25 2016-08-10 Anhui University Multi-constrained QoS (Quality of Service) routing strategy design method for software defined network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Explicit rate control algorithms in ATM networks; Yu Guangyan et al.; Digital Communication (《数字通信》); 1997-04-30; Overview, Section 1 *

Also Published As

Publication number Publication date
CN112152936A (en) 2020-12-29

Similar Documents

Publication Publication Date Title
EP3618372B1 (en) Congestion control method and network device
JP5010739B2 (en) Method and system for aggregate bandwidth control
Ramaboli et al. Bandwidth aggregation in heterogeneous wireless networks: A survey of current approaches and issues
EP3422646B1 (en) Method and device for multi-flow transmission in sdn network
CA2699325C (en) Method, system, and computer program product for adaptive congestion control on virtual lanes for data center ethernet architecture
CN109787921B (en) CDN bandwidth scheduling method, acquisition and scheduling server and storage medium
AU2010221770B2 (en) Method and system for I/O driven rate adaptation
US7889669B2 (en) Equalized network latency for multi-player gaming
CN103476062B (en) Data flow scheduling method, equipment and system
CN109787801B (en) Network service management method, device and system
US20130258847A1 (en) Congestion Control and Resource Allocation in Split Architecture Networks
CN109818881B (en) CDN bandwidth scheduling method, acquisition and scheduling server and storage medium
US10873529B2 (en) Method and apparatus for low latency data center network
KR20040023719A (en) Method for supporting non-linear, highly scalable increase-decrease congestion control scheme
CN108989237B (en) Method and device for data transmission
KR20160036878A (en) Apparatus and method for controlling data flow in a communication system
US7599399B1 (en) Jitter buffer management
CN104601488A (en) Flow control method and device in SDN (software defined network)
CN112152936B (en) Intra-network control for explicit rate computation
US9693282B2 (en) Control method, controller and packet processing method for software-defined network
Halepoto et al. Management of buffer space for the concurrent multipath transfer over dissimilar paths
WO2017145962A1 (en) Traffic optimization device and traffic optimization method
Wang et al. eMPTCP: Towards high performance multipath data transmission by leveraging SDN
US9882820B2 (en) Communication apparatus
Nithin et al. Efficient load balancing for multicast traffic in data center networks using SDN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant