EP3090528A1 - Network communication methods and apparatus - Google Patents
- Publication number
- EP3090528A1 (application EP14876520.9A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- network
- data transfer
- sdn
- application
- path
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/12—Discovery or management of network topologies
- H04L41/122—Discovery or management of network topologies of virtualised topologies, e.g. software-defined networks [SDN] or network function virtualisation [NFV]
- H04L41/14—Network analysis or design
- H04L41/147—Network analysis or design for predicting network behaviour
- H04L41/40—Arrangements for maintenance, administration or management of data switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/08—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
- H04L43/0852—Delays
- H04L43/0876—Network utilisation, e.g. volume of load or congestion level
- H04L43/0894—Packet rate
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/02—Topology update or discovery
- H04L45/38—Flow based routing
- H04L45/64—Routing or path finding of packets in data switching networks using an overlay routing layer
- H04L45/70—Routing based on monitoring results
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/12—Avoiding congestion; Recovering from congestion
- H04L47/127—Avoiding congestion; Recovering from congestion by using congestion prediction
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
- H04L67/61—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources taking into account QoS or priority requirements
Definitions
- the present invention relates generally to network optimization, and more particularly to non-disruptive optimization of SDN Flows across different data transfer technologies based on realtime monitoring of application-level metadata and infrastructure-level network metrics.
- SDN: Software-defined networking
- Control Plane: the layer of the routing architecture that determines the paths along which network traffic should be sent
- Forwarding Plane: the layer that forwards network traffic along those paths to its desired destinations
- distributed Control Plane logic can be physically removed from multiple routing and switching devices and implemented, for example, in software running on a centralized server.
- the Control Plane maintains the "network map” (also referred to as the network topology or system topology), which defines network nodes (including physical network devices) and their interconnections. In addition, the Control Plane maintains the rules for "routing" network traffic among these network nodes.
- This system topology encompasses not only physical connections among network nodes, but also manually configured static routes, status information from hardware devices and software-defined interfaces, as well as information derived from dynamic routing protocols.
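The network map described here is essentially a graph of nodes, links, and routes. A minimal sketch follows; all class and attribute names are invented for illustration and do not come from the patent:

```python
# Minimal sketch of a Control Plane "network map": nodes with status
# info plus latency/bandwidth-annotated links. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class NetworkMap:
    nodes: dict = field(default_factory=dict)   # node id -> status info
    links: dict = field(default_factory=dict)   # (a, b) -> link attributes
    static_routes: list = field(default_factory=list)

    def add_node(self, node_id, status="up"):
        self.nodes[node_id] = {"status": status}

    def add_link(self, a, b, latency_ms=1.0, bandwidth_gbps=10.0):
        # Store both directions so lookups work either way.
        self.links[(a, b)] = self.links[(b, a)] = {
            "latency_ms": latency_ms, "bandwidth_gbps": bandwidth_gbps}

    def neighbors(self, node_id):
        return [b for (a, b) in self.links if a == node_id]

topo = NetworkMap()
for n in ("switch-141", "router-146", "oxc-153"):
    topo.add_node(n)
topo.add_link("switch-141", "router-146", latency_ms=0.2)
topo.add_link("router-146", "oxc-153", latency_ms=0.1)
print(topo.neighbors("router-146"))  # ['switch-141', 'oxc-153']
```

Dynamic routing-protocol information and device status updates would be merged into the same structure as they arrive.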
- the Control Plane ultimately determines the "best" route for particular network traffic, optimizing for various desired goals, such as “quality of service” (QoS) or minimal network congestion.
- the Forwarding Plane implements the routes established by the Control Plane - e.g., determining how to forward packets arriving at a router's inbound interface to its appropriate outbound interfaces.
- the Forwarding Plane implements lower-level functionality, such as extracting and analyzing information from packet headers and other fields, traversing and caching portions of large routing tables, and employing various search and memory management algorithms to efficiently route packets to their intended destinations.
- In traditional network architectures, the Control Plane and Forwarding Plane both exist within the same physical device (the Control Plane being distributed across many or all of the devices that make up the network).
- SDN physically separates the Control Plane from the Forwarding Plane, effectively requiring a more robust protocol for communication between the two Planes.
- OpenFlow (from the Open Networking Foundation, or ONF) is one popular standard that has emerged to implement a communications interface between physically disparate Control Plane and Forwarding Plane layers of an SDN network architecture. While other communications protocols could be employed, OpenFlow has achieved such widespread acceptance that it is often used synonymously with SDN.
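An OpenFlow-style flow table pairs a match on packet header fields with a forwarding action. The sketch below is a simplification with invented field names; real OpenFlow rules carry many more match fields and action types:

```python
# Simplified sketch of an OpenFlow-style flow table. Each entry pairs a
# match on packet header fields with an action; the highest-priority
# matching entry wins. Field names are illustrative.
def match(entry, packet):
    # An empty match dict matches every packet (a table-miss entry).
    return all(packet.get(k) == v for k, v in entry["match"].items())

def forward(flow_table, packet):
    for entry in sorted(flow_table, key=lambda e: -e["priority"]):
        if match(entry, packet):
            return entry["action"]
    return {"type": "drop"}  # behaviour when no entry matches (illustrative)

flow_table = [
    {"priority": 10, "match": {"ip_dst": "10.0.0.5"},
     "action": {"type": "output", "port": 3}},
    {"priority": 1, "match": {},
     "action": {"type": "controller"}},  # punt unmatched traffic to the SDN Controller
]
print(forward(flow_table, {"ip_dst": "10.0.0.5", "tcp_dst": 80}))  # {'type': 'output', 'port': 3}
print(forward(flow_table, {"ip_dst": "10.0.0.9"}))                 # {'type': 'controller'}
```

The "send to controller" fallback is how a physically separate Control Plane learns about traffic it has not yet assigned to an SDN Flow.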
- The term "SDN Flow" (or simply "traffic path") is typically used to represent a path or route that particular network traffic can traverse between its source and destination network devices.
- an SDN network architecture facilitates virtualization not only of the lower-level hardware Infrastructure Layer, but also of the higher-level Application Layer.
- an SDN Controller can communicate, via well-defined Application Programming Interfaces (APIs), with applications at the Application Layer, facilitating both Application Optimization and System Optimization.
- Predictive Optimization of an application or entire network can be implemented over time, for example, based on historical performance over various time periods or during a particular time of day.
- Various other advantages of SDN network architectures can be found, for example, at http://www.opennetsummit.org/why-sdn.html.
- network nodes consist not only of individual hardware devices (e.g., routers, switches, servers, optical transponders, multiplexers, etc.) that communicate with one another at the Infrastructure Layer, but also of more abstract "Application Components" (e.g., web servers, storage arrays and databases, Corporate LANs, E-Commerce storefronts, etc.) that are utilized by software applications and communicate with one another at the Application Layer.
- An SDN Controller can generate optimized SDN Flows among Application Components at the Application Layer, which can then be translated into lower-level SDN Flows at the Infrastructure Layer (and implemented, for example, by the Forwarding Plane of various network devices interconnecting those Application Components).
- Whether optimizing performance among low-level hardware devices at the Infrastructure Layer, or Application Components at the Application Layer, or even the overall system, SDN Controllers rely on data extracted from the network, often in real-time while individual applications are running.
- Network Metrics (number of hops between nodes, bandwidth, latency, packet loss, etc.) provide performance data from low-level hardware devices, such as routers and switches, at the Infrastructure Layer.
- Application Metadata provides more abstract performance data at the higher-level Application Layer.
- Application Metadata includes, for example, various performance metrics such as overall throughput, transaction rate, elapsed transaction times, thread counts, number of concurrent users, number of running instances, uptime or downtime metrics, database size or other storage metrics, memory/disk/CPU utilization rates, etc. It also includes errors, warnings and related events maintained in application log file entries, as well as monitoring metrics of message queues (e.g., messages per second, failures, etc.).
- Application Metadata includes data generated by dedicated monitoring systems, including security monitors, firewalls (including deep packet inspectors), WAN accelerators, load balancers, etc.
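The metadata categories above can be sketched as a monitoring sample, paired with the kind of simple threshold check a controller's network analytics might apply. All field names and threshold values here are invented for illustration:

```python
# Sketch of an Application Metadata sample as a System Environment
# Monitor might collect it. Field names follow the metrics listed in
# the text; values and thresholds are invented.
import time

def collect_sample(component, metrics):
    return {"component": component, "timestamp": time.time(), **metrics}

sample = collect_sample("e-commerce-storefront", {
    "transactions_per_sec": 240,
    "concurrent_users": 1800,
    "cpu_utilization": 0.92,
    "log_errors_last_min": 7,
})

# A simple anomaly check (thresholds invented) that could trigger
# re-optimization of the relevant SDN Flows.
overloaded = sample["cpu_utilization"] > 0.9 or sample["log_errors_last_min"] > 5
print(overloaded)  # True
```

In practice such samples would be collected continuously and correlated with Infrastructure Layer Network Metrics before any re-routing decision is made.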
- an SDN Controller can detect such unusual conditions and other anomalies at the system level or within a particular application, and employ “network analytics” to address them specifically (e.g., by increasing bandwidth to relevant servers), or more generally by optimizing traffic flow among Application Components or the system as a whole to achieve a particular desired goal, such as minimizing network congestion, or providing a particular QoS (e.g., a maximum latency for a particular type of transaction), among other objectives.
- an SDN Controller typically generates a revised set of SDN Flows - i.e., a remapping of network traffic routes among network nodes, including Application Components (typically "endpoints" of an SDN Flow) and lower-level hardware devices such as routers and switches (typically intermediate nodes between the Application Components over which network traffic is routed).
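The remapping of traffic routes described above reduces, at its core, to a shortest-path computation over the system topology. The patent does not specify an algorithm; the sketch below uses Dijkstra's algorithm with latency as the edge weight, over an invented topology loosely modeled on FIG. 1B:

```python
# Dijkstra over a latency-weighted topology. `links` maps
# node -> list of (neighbor, latency_ms). Illustrative only; the
# patent's actual path computation is not specified here.
import heapq

def best_path(links, src, dst):
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, lat in links.get(node, []):
            nd = d + lat
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Invented latencies: an optical segment vs. a packet-switched detour.
links = {
    "storefront": [("eth-141", 0.2)],
    "eth-141": [("oxc-153", 0.3), ("eth-142", 0.1)],
    "oxc-153": [("eth-152", 0.5)],
    "eth-142": [("eth-152", 0.8)],
    "eth-152": [("database", 0.2)],
}
print(best_path(links, "storefront", "database"))
# ['storefront', 'eth-141', 'oxc-153', 'eth-152', 'database']
```

A cross-DTT controller would additionally weight edges by the characteristics of each Data Transfer Technology (reconfiguration speed, cost, etc.), not latency alone.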
- Data Transfer Technology (DTT) devices include wireless network devices, such as WiFi routers, cellular 3G, 4G and LTE routing devices, as well as various other types of routers, switches and other network devices.
- An SDN Controller with access to the entire system topology and application-to-application traffic paths is in a unique position to leverage this knowledge to better optimize SDN Flows - e.g., by generating one or more SDN Flows that traverse hardware components across multiple DTTs.
- a System Environment Monitor is employed to extract from the network both real-time and historical Network Metrics at the Infrastructure Layer, as well as Application Metadata at the Application Layer.
- Network analytics facilitate decisions based upon the differing characteristics of Application Components and lower-level hardware components across multiple DTTs.
- an SDN Controller generates modified sets of SDN Flows, and implements them in real time across a mixed technology (multi-DTT) network in a manner that avoids disrupting existing SDN Flows and other real-time network traffic.
- This real-time automated feedback loop facilitates optimizations of SDN Flows to minimize network congestion and achieve various desired QoS objectives that are relevant to particular applications (Application Optimization), as well as to the entire system (System Optimization).
- Predictive Optimization is also employed based on the historical performance of applications and lower-level network hardware over time, using simulation to determine whether proposed changes to a set of SDN Flows yield an overall improvement in network performance or efficiency.
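The simulation step might be sketched as scoring the current versus a proposed set of SDN Flows against historical traffic samples, adopting the proposal only when the aggregate metric improves. All values and names below are invented:

```python
# Sketch of Predictive Optimization's simulation step: score each
# candidate flow set against historical traffic volumes and keep the
# cheaper one. Latencies and volumes are invented for illustration.
def simulate(flow_latencies_ms, traffic_samples):
    # Cumulative latency-weighted cost across the historical samples.
    return sum(flow_latencies_ms[flow] * volume
               for flow, volume in traffic_samples)

current = {"flow-a": 1.2, "flow-c": 0.9}   # per-flow latency, ms
proposed = {"flow-a": 1.0, "flow-c": 1.1}
history = [("flow-a", 100), ("flow-c", 20)]  # (flow, traffic volume)

if simulate(proposed, history) < simulate(current, history):
    print("adopt proposed flows")  # prints: adopt proposed flows
else:
    print("keep current flows")
```

A fuller simulation would replay time-of-day traffic patterns rather than a single aggregate, but the accept/reject structure is the same.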
- FIG. 1A is a block diagram illustrating an embodiment of an overall system architecture of the System Environment of the present invention, including various respective Data Transfer Technologies (e.g., optical circuit-switched networks, packet-switched networks and various wireless cellular networks);
- FIG. 1B is a block diagram illustrating alternative Paths for Dynamic Path Optimization in an embodiment of an overall system architecture of the System Environment of the present invention;
- FIG. 2 is a block diagram illustrating embodiments of an SDN Controller and System Environment Monitor of the present invention;
- FIG. 3 is a flowchart illustrating one embodiment of an automated feedback process for Dynamic Path Optimization across Data Transfer Technologies, based on the monitoring and collection of Application Metadata and Network Metrics while Applications are running on the System Environment of the present invention.
- FIG. 4A is a flowchart illustrating one embodiment of the Dynamic Path Recomputation step of FIG. 3.
- FIG. 4B is a flowchart illustrating an alternative embodiment of the Dynamic Path Recomputation step of FIG. 3 involving multiple Data Transfer Technologies.
- FIG. 5 is a graph illustrating various embodiments of the Implement Updated Data Transfer Topology step of FIG. 3.
- FIG. 6A is a flowchart illustrating one embodiment of a non-disruptive algorithm utilized in the Implement Updated Data Transfer Topology step of FIG. 3 in the scenario of FIG. 5 in which no common intermediate nodes are present between existing and desired paths.
- FIG. 6B is a flowchart illustrating one embodiment of a non-disruptive algorithm utilized in the Implement Updated Data Transfer Topology step of FIG. 3 in the scenario of FIG. 5 in which one or more common intermediate nodes are present between existing and desired paths.
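One common way to realize a non-disruptive replacement in the no-shared-nodes scenario of FIG. 6A is make-before-break: configure the new path fully before redirecting traffic, and tear down the old path only afterwards. The sketch below assumes that strategy; the patent's actual algorithm may differ, and all function names are invented:

```python
# Make-before-break sketch for replacing an SDN Flow when the existing
# and desired paths share no intermediate nodes (the FIG. 6A scenario).
# `configure`, `redirect` and `teardown` stand in for device commands.
def replace_flow(configure, redirect, teardown, old_path, new_path):
    # 1. Configure the new path end-to-end while the old path still
    #    carries traffic, so no packets are lost during setup.
    for node in new_path[1:-1]:
        configure(node)
    # 2. Atomically redirect traffic at the source endpoint.
    redirect(old_path[0], new_path)
    # 3. Only then tear down the now-idle old path.
    for node in old_path[1:-1]:
        teardown(node)

log = []
replace_flow(lambda n: log.append(("cfg", n)),
             lambda src, p: log.append(("redirect", src)),
             lambda n: log.append(("rm", n)),
             old_path=["A", "X1", "X2", "B"],
             new_path=["A", "Y1", "Y2", "B"])
print(log)
```

When paths share intermediate nodes (FIG. 6B), the shared devices must carry both configurations simultaneously during the switchover, which is why the patent treats that case separately.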
- FIG. 1A is a block diagram illustrating one embodiment of an SDN Network 100a of the present invention, including a centralized SDN Controller 110 and a System Environment Monitor 115 for monitoring network nodes and extracting and analyzing real-time data reflecting the performance of SDN Network 100a at various levels of abstraction, including the Application Layer (containing various Application Components 120) and the lower-level Infrastructure Layer (containing various DTT Devices 140).
- SDN Network 100a includes multiple different Data Transfer Technologies, including IP-based packet-switched networks (with DTT Devices such as Ethernet Switch 141 and Ethernet Router 146), optical circuit-switched networks (with DTT Devices such as Optical Cross Connect (MEMS) 153, Optical Cross Connect (electrical) 155, and Optical Multiplexer 160), as well as various wireless networks (including cellular 3G network 180 with DTT Devices such as UMTS Wireless device 185).
- System Environment Monitor 115 monitors and extracts in real time Application Metadata 130 from Application Components 120 (utilized by various applications running on SDN Network 100a).
- SDN Controller 110 establishes communications with lower- level DTT Devices 140, both issuing control commands to and extracting lower-level Network Metrics 150 from such DTT Devices 140.
- extraction of these lower-level Network Metrics can be accomplished (in whole or in part) by System Environment Monitor 115.
- System Environment Monitor 115 and SDN Controller 110 extract and analyze real-time data from the network.
- SDN Controller 110 defines individual SDN Flows (not shown) that traverse multiple different DTTs. This "cross-DTT" functionality is of great significance, as will be discussed in greater detail below.
- Network 100a includes one embodiment of a wide variety of Application Components 120, utilized by one or more different companies or individuals.
- a company such as Amazon, for example, operates a website managed by Web Server 122, as well as an E-Commerce Storefront 124, which operates in conjunction with a variety of other Application Components 120.
- Internet 170 represents a particular Data Transfer Technology (DTT), in this case a packet-switched "network of networks.”
- Various DTT Devices 140 interconnect these Application Components 120 to the Internet 170, including Ethernet Switches 141-145 and Ethernet Router 146.
- various combinations of network switches, routers and other packet-switched devices can be employed.
- network 100a is a mixed technology network, including multiple different Data Transfer Technologies.
- packet-switched Internet 170 is interconnected, via Ethernet Routers 146 and 147 (among other devices), with an optical circuit-switched DTT, including optical DTT Devices 140 such as Optical Cross-Connect (electrical) 155-157 switches in which switching is performed by electrical cross connect circuit switches (no access to bits, packets, etc.) connected to optical-to-electrical and electrical-to-optical converters, Optical Cross-Connect (MEMS) 153-154 and 158-159 switches in which switching is performed by pure optical circuit reconfiguration, and Optical Multiplexers 160-163.
- Optical Cross Connect switches may be based on MEMS or other optical switching technology and may provide wavelength selective (Wavelength selective switches) or wavelength independent switching (Optical Cross Connects) or various combinations of the above.
- Ethernet Routers 146 and 147 interconnect packet-switched Internet 170 to multiple wireless DTTs, such as Cellular 3G Network 180 (including DTT Devices 140 such as UMTS Wireless 185 device) and Cellular LTE Network 190 (including DTT Devices 140 such as LTE Wireless 195 device).
- optical circuit-switched DTT is also connected to other packet-switched DTT Devices 140, such as Ethernet Switches 148-149 and 151-152, which in turn are connected to a variety of Application Components 120, including User Clients 131, 137 and 138, and enterprise-level components for Order Management 132, Stock Control 133, Finance Ledgers 134, Payment Handling 136 and a centralized Database 135. As noted above, these enterprise-level components could facilitate applications shared by a single company or multiple different companies.
- Application Metadata 130 is collected from Application Components 120 by a centralized System Environment Monitor 115, while Network Metrics 150 at the Infrastructure Layer are collected from DTT Devices 140 by a centralized SDN Controller 110.
- SDN Controller 110 configures DTT Devices 140 by issuing SDN Flow control commands via the same connections used to extract Network Metrics 150.
- Application Metadata 130 and Network Metrics 150 can be collected by various different devices (and software), whether in a centralized or more distributed manner.
- Network 100b of FIG. 1B illustrates alternative SDN Flows for "Dynamic Path Optimization" of the same system topology illustrated in FIG. 1A.
- SDN Controller 110 extracts and analyzes both Application Metadata 130 and Network Metrics 150 (as will be explained in greater detail below), the result of which is a revised set of SDN Flows designed to optimize for particular goals, such as a specific QoS or minimizing latency or network congestion.
- SDN Flow A 198a illustrates a communications path by which E-Commerce Storefront 124 communicates with Database 135, via various DTT Devices 140 and across multiple different Data Transfer Technologies (e.g., both an IP-based packet-switched network and an optical circuit-switched network).
- Messages initiated from E-Commerce Storefront 124 travel through Ethernet Switch 141 in the IP-based packet-switched network (first DTT), and then enter the optical circuit-switched network (second DTT) at Optical Cross-Connect (MEMS) 153. They continue in this second DTT through DTT Devices 154, 161, 163 and 159, after which they pass through Ethernet Switch 152 in the first DTT (IP-based packet-switched network) and are then received by Database 135.
- SDN Flow A 198a may experience unexpected problems (e.g., higher than normal latency, reduced throughput, etc.), which are described in application log files, extracted as Application Metadata 130 by System Environment Monitor 115, and reported to SDN Controller 110.
- Other problems may be detected directly by SDN Controller 110 (in one embodiment) by monitoring Network Metrics 150 at the Infrastructure Layer (e.g., slow ping times from a particular switch or router).
- SDN Controller 110, upon identifying problems from this real-time extraction of data, implements various algorithms to analyze alternative SDN Flows and select those that best achieve its desired optimization goals.
- SDN Controller 110 selects SDN Flow B 198b to replace SDN Flow A 198a.
- messages initiated from E-Commerce Storefront 124 will now travel through Ethernet Switch 141 and remain entirely in the IP-based packet-switched network (passing through Ethernet Switches 142, 143 and 145, Ethernet Routers 146 and 147, and then Ethernet Switches 149 and 152 before being received by Database 135).
- the non-disruptive manner in which SDN Controller 110 implements such revisions and replacements of SDN Flows will also be described in greater detail below.
- SDN Flow C 198c illustrates a communications path by which Web Server 122 communicates with Time Recording software 128, in this case entirely in an IP-based packet-switched network (passing through Ethernet Switches 141, 142, 143 and 145 before being received by Time Recording software 128).
- SDN Controller 110 may select an alternative SDN Flow D 198d (in a manner similar to that noted above) to address a problem identified from real-time monitoring and extraction of Application Metadata 130 and Network Metrics 150.
- a particular DTT Device 140 may be the source of a problem, while in other cases, multiple DTT Devices 140 or the transition from one DTT to another (e.g., IP to optical, or vice versa) may increase latency, reduce overall bandwidth, or fail entirely. Moreover, in other embodiments, multiple SDN Flows may be employed for communications among two Application Components 120 (e.g., one for each direction, or multiple alternative SDN Flows in the same direction - e.g., for different types of communications).
- It is important to emphasize the "cross-DTT" nature of both the analysis of extracted data and the selection and implementation of revised optimal SDN Flows which, as noted above, is facilitated by the ability of SDN Controller 110 to leverage its knowledge of the entire system topology and application-to-application traffic paths. Although the examples described above with reference to FIG. 1B illustrate individual SDN Flows, it should be noted that the complexity of the SDN Flows increases exponentially as multiple applications share Application Components 120 across multiple DTTs in a mixed network.
- block diagram 200 illustrates embodiments of an SDN Controller 210 and System Environment Monitor 215 of the present invention (also illustrated in FIGs. 1A and 1B as SDN Controller 110 and System Environment Monitor 115) in the context of a mixed technology (multi-DTT) network in which individual devices are physically connected, but not yet assigned SDN Flows.
- SDN Controller 210 implements logically centralized Control Plane functionality for the various Application Components and DTT Devices (such as Application Components 120 and DTT Devices 140 in FIGs. 1A and 1B), which in turn implement the Forwarding Plane functionality of the network.
- System Environment Monitor 215 discovers new devices and their connectivity, and monitors the operation of the various Application Components and DTT Devices (with the assistance of SDN Controller 210) that make up the network.
- System Environment Monitor 215 monitors certain real-time performance characteristics and other aspects of the system while applications are running, including Application Metadata derived from Application Components (number of concurrent users and running instances, database size, thread counts, transaction rates, message queue statistics, errors and problems reported in log files, etc.) as well as lower-level Network Metrics derived from DTT Devices (number of hops between nodes, bandwidth, latency, packet loss, etc.).
- Prior to accepting connectivity requests 211, SDN Controller 210 establishes policy definitions and stores them in a repository, such as Policy Definitions DB (database) 255. For example, one simple policy might establish a minimum latency for an SDN Flow between any two nodes. By defining the detailed characteristics of the various connectivity policies supported by the system, clients or users of the system can initiate connectivity requests 211 without having to explicitly detail every such characteristic. For example, a "Gold" policy might be defined with specific values for minimum bandwidth, reliability, latency, cost, etc.
- The precise characteristics for each DTT Device (or device type) are maintained in Data Transfer Technology Characteristics DB 265. These characteristics include, for example, latency, bandwidth, reconfiguration speed, cost, etc.
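The "Gold" policy example could be represented as a simple record in Policy Definitions DB 255, checked against a candidate path's metrics. All threshold values below are invented:

```python
# Illustrative policy records for a Policy Definitions DB, in the
# spirit of the "Gold" policy example. All threshold values are
# invented; the patent does not specify them.
POLICIES = {
    "Gold":   {"min_bandwidth_gbps": 10, "max_latency_ms": 5,  "min_reliability": 0.9999},
    "Silver": {"min_bandwidth_gbps": 1,  "max_latency_ms": 20, "min_reliability": 0.999},
}

def admits(policy_name, path_metrics):
    """Check whether a candidate SDN Flow satisfies a named policy."""
    p = POLICIES[policy_name]
    return (path_metrics["bandwidth_gbps"] >= p["min_bandwidth_gbps"]
            and path_metrics["latency_ms"] <= p["max_latency_ms"]
            and path_metrics["reliability"] >= p["min_reliability"])

print(admits("Gold", {"bandwidth_gbps": 40, "latency_ms": 2, "reliability": 0.99995}))  # True
print(admits("Gold", {"bandwidth_gbps": 40, "latency_ms": 9, "reliability": 0.99995}))  # False
```

Because the policy carries all the detailed characteristics, a connectivity request 211 need only name the policy, exactly as the text describes.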
- The physical connectivity among DTT Devices (physical topology) and their configured connectivity (logical topology - e.g., input to output port connections on an optical switch and other SDN Flow info) are maintained in Data Transfer Topology DB 270. As will be explained in greater detail below, this information is used when deriving and revising SDN Flows.
- Data Transfer Path Computation component 250 determines the effect of each DTT Device on the overall end-to-end SDN Flow.
- An Application Component (such as Web Server 122 in FIGs. 1A and 1B) might initiate a request for connectivity 211 in order to establish communications with another Application Component.
- These connectivity requests 211 may originate from provisioning software or other modules of an Application Component or DTT Device firmware, or from a user that directs an Application Component, for example, to initiate connectivity between two network nodes.
- Individual network ports are typically specified, whether for an Application Component (e.g., a TCP port utilized by a web server) or a lower-level DTT Device (e.g., an input or output port of an Ethernet or optical switch).
- Upon receiving a connectivity request 211, Connectivity Policy Manager 257 utilizes the policy info from Policy Definitions DB 255 to determine the precise characteristics of desired end-to-end connectivity paths (SDN Flows) across the network. It then issues requests to Data Transfer Path Computation component 250, which utilizes device info from Data Transfer Technology Characteristics DB 265 and network topology info from Data Transfer Topology DB 270 to derive optimal SDN Flows (as described in greater detail below with reference to FIGs. 4A and 4B). It also updates Data Transfer Topology DB 270 in accordance with the SDN Flow revisions.
- Data Transfer Path Computation component 250 implements the revised SDN Flows via DTT Device Mediation component 225.
- Individual DTT Devices 220 (such as Ethernet 222, Optical 224 and wireless LTE 226 physical and virtual devices) implement the network's Forwarding Plane by transferring data from device to device across the network.
- FIG. 2 illustrates only these three examples of the many possible different Data Transfer Technologies that can be employed to transfer data.
- Ethernet module 227 issues "Ethernet-specific" Configuration Commands 212 to Ethernet DTT Devices 222 to extract "Ethernet-specific" Network Metrics.
- Optical module 228 issues "Optical-specific" Configuration Commands 212 to Optical DTT Devices 224 to extract "Optical-specific" Network Metrics.
- LTE module 229 issues "LTE-specific" Configuration Commands 212 to LTE DTT Devices 226 to extract "LTE-specific" Network Metrics.
- Configuration Commands 212 configure DTT Devices 220 to control the transmission of data across the Forwarding Plane of the network, while enabling communication with the Control Plane.
- Upon extracting these real-time Network Metrics, DTT Device Mediation component 225 relies upon Data Transfer Metric Collection component 240 to collect and organize them for forwarding to the Data Collector component 235 of System Environment Monitor 215.
- Network Metrics are extracted by SDN Controller 210 while Application Metadata are extracted by System Environment Monitor 215.
- the extraction of network data can be performed by a single module or allocated among various modules or physical devices.
- DTT Device Mediation component 225 collects these Network Metrics
- Data Collector component 235 of System Environment Monitor 215 collects Application Metadata as well, and organizes this Application Metadata and the Network Metrics received from Data Transfer Metric Collection component 240 of SDN Controller 210.
- the data are translated to normalized forms where appropriate, indexed for faster searching and then stored (both processed data and indices) in Application Metadata & Network Metrics DB 275.
- real-time extraction of Network Metrics (by DTT Device Mediation component 225) and Application Metadata (by Data Collector 235) is performed on a periodic basis. In other embodiments, such data extraction can be performed on demand, on an ad hoc basis or in accordance with virtually any other algorithmic schedule over time. In one embodiment, historical performance data (e.g., performance during particular times of day over the past year) are extracted and maintained to facilitate Predictive Optimization, as explained in greater detail below.
- Data Collector 235 extracts Application Metadata from a wide variety of Application Components 230 including, for example, Database 232, Security Monitor 234 and E-Commerce Storefront 236. Whether embodied as software subsystems running on a physical or virtual computer platform, or as dedicated hardware appliances, Application Components 230 typically perform a portion of the overall functionality of a more complex application. For example, a web server might be the front-end component of a company's retail enterprise, which typically also includes databases and other back-end functionality. To fulfill the overall functionality of a complex application, Application Components 230 must exchange information across a network. The amount of data and frequency of data transfers can vary considerably over time among different Application Components 230, depending on a variety of factors.
- System Environment Monitor 215 continuously receives "on demand" external requests 216 to monitor an Application Component 230 - e.g., from an application delivery controller or provisioning system, or from a user of the system requesting discovery and monitoring of a new Application Component 230.
- requests 216 are scheduled on a periodic or ad hoc basis, or in accordance with a more complex algorithm.
- Component 280 utilizes both the existing logical topology stored in Data Transfer Topology DB 270 of SDN Controller 210 and the extracted data stored in Application Metadata & Network Metrics DB 275 to identify which Application Components 230 have exchanged information, and to build over time an Application Component Topology DB 285 which is independent of the underlying Data Transfer Topology 270.
- Application Component Topology DB 285 maintains the logical topology that identifies which Application Components 230 have communicated with one another.
- DB 285 can be used, for example, to determine when network performance issues affect the performance of the Application Components 230 utilized by a particular software application.
- Application Data Transfer Analysis component 295 utilizes network topology info from DB 270 and DB 285, along with extracted data from Application Metadata & Network Metrics DB 275, to implement various network analytics functionality.
- Network Metrics extracted by SDN Controller 210 and Application Metadata extracted by System Environment Monitor 215 are collected and integrated by Data Collector 235, analyzed by Application Data Transfer Analysis component 295, and then utilized by Data Transfer Path Computation component 250 to identify network operation and performance problems and to derive revised SDN Flows that are optimized for various desired goals (i.e., Dynamic Path Optimization).
- this automated feedback loop runs continuously in real time - extracting network data from running applications and utilizing such data to generate "optimal" revised SDN Flows, from which subsequent network data is generated and extracted (completing the loop).
- Dynamic Path Optimization can optimize the performance of an individual software application (Application Optimization) or the system as a whole, encompassing all software applications currently running on a mixed technology (multi-DTT) network (System Optimization).
- Application Data Transfer Analysis component 295 can also assist component 250 in this regard by testing proposed optimal SDN Flows against historical network traffic patterns (Predictive Optimization), including the following functionality:
- Dynamic Path Optimization (e.g., optimizing for a set of software applications)
- Application Component Topology Discovery module 280 may dynamically prioritize (i.e., assign rank order to) certain paths in Data Transfer Topology DB 270 directly, or indirectly via Application Component Topology DB 285 and Application Data Transfer Analysis module 295.
- SDN Controller 110 may automatically perform proactive actions before more extreme problems are detected, such as an optical device becoming unsuitable to transmit data.
- Such proactive actions include: (1) changing an active traffic path (e.g., optical) to an alternate path (e.g., optical or electrical) if it is impaired, but before it fails (e.g., pre-FEC BER monitoring as a switching decision trigger); (2) maintaining a protection traffic path (e.g., optical) on standby; and (3) switching to an alternate path (e.g., with optical power loss as a decision trigger).
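The pre-FEC BER trigger mentioned in item (1) can be sketched as a simple threshold test over recent samples. The threshold value, window length and function name below are assumptions for illustration only.

```python
# Illustrative sketch (not the patent's implementation): trigger a proactive
# switch away from an impaired optical path when the pre-FEC bit-error rate
# stays above a threshold, before post-FEC errors (i.e., actual data loss) occur.
PRE_FEC_BER_THRESHOLD = 1e-5   # assumed value for illustration

def should_switch(pre_fec_ber_samples, threshold=PRE_FEC_BER_THRESHOLD,
                  consecutive=3):
    """Return True if the last `consecutive` samples all exceed the threshold."""
    recent = pre_fec_ber_samples[-consecutive:]
    return len(recent) == consecutive and all(s > threshold for s in recent)
```

Requiring several consecutive samples above the threshold avoids switching on a transient spike.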
- flowchart 300 illustrates one embodiment of this automated feedback process for Dynamic Path Optimization across multiple Data Transfer Technologies, based on the monitoring and collection of Application Metadata and Network Metrics while software applications are running on a mixed technology (multi-DTT) network.
- an initial SETUP process 310 occurs after an SDN Controller and System Environment Monitor (such as those detailed in FIG. 2 above) are configured and Application Components and DTT Devices are physically interconnected, but before network topologies are discovered and application traffic is flowing across the network.
- Policy Definitions DB 255 is initialized in step 312 with connectivity policies supported by the system.
- DTT Device characteristics are identified and stored in Data Transfer Technology Characteristics DB 265 in step 314 (including, for example, information about which devices can interoperate, along with their data rates, latency, setup time and other relevant characteristics and behaviors).
- the physical topology and connectivity of the DTT Devices are discovered and stored in Data Transfer Topology DB 270 in step 316.
- the system begins to receive connectivity requests 320 (e.g., for a new DTT Device) and monitoring requests 330 (e.g., for a new Application Component or communication path between Application Components).
- Requests 320 and 330 may be initiated by system administrators or other users, as well as from DTT Device firmware, Application Component initialization modules, and other sources.
- the system also simultaneously begins to monitor the operation and performance of DTT Devices (to collect Network Metrics in step 340) and Application Components (to collect Application Metadata in step 350).
- System Environment Monitor 215 needs to discover the topology of Application Components in step 332 in order to begin to monitor them. Whether monitoring is initiated manually (e.g., by a user or system administrator) or systematically (e.g., by an initialization software module of an Application Component), each Application Component is identified by a computer system hostname, DTT Device port or some other distinguishing characteristic. Alternatively, System Environment Monitor 215 can utilize information from Data Transfer Topology DB 270 as a starting point to facilitate automatic discovery of Application Components (e.g., by monitoring Application Metadata from SDN Flow endpoints defined in DB 270).
- Application Component Topology Discovery component 280 uses the information from the Data Transfer Topology DB 270 to determine which Application Components are associated by virtue of a data transfer relationship, which it then records by updating Application Component Topology DB 285 in step 334.
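The data-transfer-relationship inference of step 334 might be sketched as follows, under a simplified data model in which SDN Flow endpoints are mapped to Application Components (all field and function names are assumptions):

```python
# Sketch of Application Component Topology discovery: infer which Application
# Components have a data transfer relationship from the endpoints of observed
# SDN Flows, building an adjacency map independent of the physical topology.
from collections import defaultdict

def build_component_topology(sdn_flows, port_to_component):
    """Map each component to the set of components it exchanges data with."""
    topology = defaultdict(set)
    for flow in sdn_flows:
        src = port_to_component.get(flow["src_port"])
        dst = port_to_component.get(flow["dst_port"])
        if src and dst and src != dst:
            # Record the relationship in both directions.
            topology[src].add(dst)
            topology[dst].add(src)
    return dict(topology)
```

The resulting map corresponds to the logical topology maintained in Application Component Topology DB 285.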
- step 332 is invoked not only in response to monitoring requests 330, but also in response to modifications of Data Transfer Topology DB 270, which in turn may result in changes to the logical topology of Application Components, and thus require an update to Application Component Topology DB 285.
- Data Collector 235 gathers all available Application Metadata in step 350 using a variety of well-known techniques including reading log files, collecting syslog events, or monitoring the content of traffic passing among Application Components.
- Application Metadata is normalized in step 352, for example by mapping input data into a set of common data fields (e.g., to distinguish among different application log file formats) and employing keyword recognition to parse the input data.
- Data Collector 235 builds and continually maintains an index of the Application Metadata to enable fast and efficient searching. It then updates Application Metadata & Network Metrics DB 275 in step 354, including all gathered Application Metadata and index data associated with the various Application Components.
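Steps 352-354 (normalization and indexing) can be sketched as below. The keyword table, field names and helpers are illustrative assumptions, not the patent's implementation.

```python
# Sketch of Application Metadata normalization: map heterogeneous log lines
# into common fields using keyword recognition, and build a simple inverted
# index for faster searching.
import re

SEVERITY_KEYWORDS = {"error": "ERROR", "fail": "ERROR", "warn": "WARNING"}

def normalize(line: str) -> dict:
    """Map a raw log line into common data fields (severity, message)."""
    severity = "INFO"
    for kw, level in SEVERITY_KEYWORDS.items():
        if kw in line.lower():
            severity = level
            break
    return {"severity": severity, "message": line.strip()}

def build_index(records):
    """Inverted index: word -> set of record positions, for fast searching."""
    idx = {}
    for i, rec in enumerate(records):
        for word in re.findall(r"\w+", rec["message"].lower()):
            idx.setdefault(word, set()).add(i)
    return idx
```

Both the normalized records and the index would then be stored in Application Metadata & Network Metrics DB 275.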
- SDN Controller 210 collects Network Metrics in step 340 from Data Transfer Technology Devices, in one embodiment relying upon DTT Device Mediation component 225 as described above. It then relies upon Data Collector 235 in System Environment Monitor 215 to normalize the Network Metrics in step 342, and update Application Metadata & Network Metrics DB 275 in step 344. In other embodiments, separate databases are maintained for Application Metadata and Network Metrics.
- After updating Application Component Topology DB 285 (in step 334) or Application Metadata & Network Metrics DB 275 (in step 354 or 344), SDN Controller 210 checks in step 375 to determine whether any software applications are still running. If no software applications are running, the process ends in step 380. In one embodiment, the system remains active indefinitely until a request 330 is received to monitor an Application Component.
- Connectivity Policy Manager 257 of SDN Controller 210 proceeds in step 325 to recompute "optimal" SDN Flows based on an analysis of Application Metadata and Network Metrics. Step 325 will be described in greater detail below with respect to FIGs. 4A and 4B.
- Step 325 is also performed in response to connectivity requests 320 regarding DTT Devices.
- Typical connectivity requests 320 include port identifiers on endpoint DTT Devices and an initial target policy.
- Connectivity Policy Manager 257 retrieves the requested policy definition from Policy Definitions DB 255 in step 322. For example, a "Low Latency Gold" policy might be defined in Policy Definitions DB 255 with specific values for latency, bandwidth, reliability and other factors.
- Connectivity Policy Manager 257 also stores request 320 in an "intended connectivity" database in step 324 before proceeding to step 325 to recompute "optimal" SDN Flows.
- step 325 is invoked to recompute optimal SDN Flows in response to connectivity requests 320 regarding DTT Devices and monitoring requests 330 regarding Application Components, as well as in response to updates to Application Metadata & Network Metrics DB 275 (based on collection of Network Metrics in step 340 and Application Metadata in step 350).
- Data Transfer Path Computation component 250 of SDN Controller 210 is triggered to recompute "optimal" SDN Flows in step 325 both by changes to the network topology and by problems it detects in the operation and performance of software applications running on the network. It then updates Data Transfer Topology DB 270 with the new SDN Flows in step 326, and proceeds to implement the updated Data Transfer Topology in step 328.
- Data Transfer Path Computation component 250 passes a description of the chosen "optimal" paths (in terms of explicit hop-by-hop topology) to DTT Device Mediation component 225, which then decomposes the paths into a set of configuration commands 212 to be sent to each of the DTT modules (227, 228 and 229).
- step 328 is implemented in a "non-disruptive" manner to avoid disrupting existing SDN Flows as well as other real-time network traffic generated by applications currently running on the network.
- flowchart 400 illustrates one embodiment of Dynamic Path Recomputation step 325. In this embodiment, the process of computing "optimal" SDN Flows is performed without regard to the differences among different Data Transfer Technologies.
- FIG. 4B discussed below, illustrates an alternative embodiment that takes certain of these differences into account.
- Data Transfer Path Computation component 250 of SDN Controller 210 initiates step 410 whenever step 325 of FIG. 3 is invoked, providing access to Data Transfer Topology DB 270 as well as current Application Metadata and Network Metrics extracted from respective Application Components and DTT Devices.
- Data Transfer Path Computation component 250 implements an iterative algorithm to reduce the number of individual traffic paths which experience problems (such as congestion, high latency and eventually failure), thereby improving the overall performance of the software applications running on the network by revising existing SDN Flows to alternate routes across the network.
- Other algorithms may of course be implemented to achieve other "optimizations" of various operational and performance factors (for individual or groups of software applications as well as the entire system) without departing from the spirit of the present invention.
- Data Transfer Path Computation component 250 populates a temporary copy of Data Transfer Topology DB 270, which can be adjusted iteratively (e.g., to test potential revised SDN Flows) without affecting the network's actual topology. It then creates a list of "problem" traffic paths, in step 420, based upon Application Metadata collected and analyzed over time - in one embodiment, by System Environment Monitor 215, which monitors software application response times, message queue timings, timeout errors in application logs, etc.
- Data Transfer Path Computation component 250 also relies on lower-level Network Metrics to identify potential sources of such problems. It derives, in step 414, a traffic profile for each SDN Flow, based on the Network Metrics collected over time (e.g., one-minute samples of average bandwidth). A similar traffic profile can be calculated, in step 416, for each physical link (e.g., by directly monitoring DTT Devices or deriving aggregate SDN Flow traffic profiles - since many SDN Flows can pass over the same physical link, sharing the available bandwidth).
- component 250 then implements an algorithm to derive a measure of congestion of the network as a whole. For example, it could simply count physical links where the traffic profile exceeds 90% of available bandwidth, or latency exceeds 50ms for more than five sample periods. As noted above, other algorithms could be employed to measure other desired optimization goals for one or more software applications or for the system as a whole, relying on "snapshot" or periodic real-time measurements and/or historical data compiled over time.
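The example congestion measure just described (links exceeding 90% of available bandwidth, or latency above 50 ms for more than five sample periods) can be sketched directly. Field names are assumptions.

```python
# Sketch of a network-wide congestion measure: count physical links whose
# traffic profile exceeds 90% of available bandwidth, or whose latency
# exceeds 50 ms for more than five sample periods.
def congestion_score(links):
    """links: iterable of dicts with 'capacity_mbps', 'bw_samples_mbps' and
    'latency_samples_ms' keys (field names are assumptions)."""
    congested = 0
    for link in links:
        over_bw = any(s > 0.9 * link["capacity_mbps"]
                      for s in link["bw_samples_mbps"])
        high_latency = sum(1 for s in link["latency_samples_ms"] if s > 50) > 5
        if over_bw or high_latency:
            congested += 1
    return congested
```

A lower score after a simulated rerouting indicates that the candidate SDN Flow revision reduces overall congestion.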
- In step 421, component 250 then uses the traffic path and physical link data from step 418, and the set of "problem" Application Layer traffic paths from step 420, to create a list of physical link candidates for optimization. For example, one approach could require that a physical link candidate exhibit one or more such threshold criteria.
- For each physical link candidate (processed in an "outer loop" in step 422), component 250 identifies the list of SDN Flows that traverse that physical link candidate. For each such SDN Flow, component 250 derives an alternative SDN Flow that does not include that physical link candidate. In one embodiment, a set of alternative SDN Flows can be derived, each being processed in turn in an "intermediate loop." Component 250 then processes (in an "inner loop") each alternative SDN Flow, in which each physical link along the alternative SDN Flow is assessed.
- the previously observed traffic profiles for all SDN Flows that include that physical link are summed, thereby generating a traffic profile for that physical link.
- These calculated traffic profiles for each physical link in the alternative SDN Flow are then summed to generate a simulated traffic profile for the alternative SDN Flow.
- Component 250 then reassesses the overall network congestion in step 424 - i.e., performing the calculation from step 418 with the new topology and simulated traffic profile for the alternative SDN Flow being processed. It then compares, in step 425, this simulated measure of network congestion with the value previously obtained in step 418. If the level of congestion has been reduced, Data Transfer Topology DB 270 is updated, in step 426, with the alternative SDN Flow replacing the current SDN Flow. Otherwise, the alternative SDN Flow is discarded in step 427.
- In either case, component 250 then determines, in step 435, whether additional SDN Flows remain to be processed. If so, it selects, in step 436, the next SDN Flow (from the list generated in step 421) and repeats step 422 for that SDN Flow (i.e., until the list of SDN Flows traversing the candidate physical link is exhausted).
- component 250 determines, in step 445, whether any candidate physical links remain to be processed. If so, it selects, in step 446, the next candidate physical link (from the list generated in step 421) and repeats step 422 for each SDN Flow that traverses that candidate physical link (until, eventually, the list of candidate physical links is exhausted, at which point the process terminates at step 447 - at least until step 325, and thus step 410, is again invoked).
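The nested loops of steps 421-447 can be condensed into a sketch under a deliberately simplified model: each SDN Flow contributes unit bandwidth to every link it traverses, and alternative paths are supplied rather than computed. All helper names are assumptions, not the patent's implementation.

```python
# Simplified sketch of the iterative optimization of flowchart 400.
def congestion(flows, capacity):
    """Count links whose summed flow load exceeds capacity (unit bandwidth)."""
    load = {}
    for path in flows.values():
        for link in path:
            load[link] = load.get(link, 0) + 1
    return sum(1 for link, l in load.items() if l > capacity.get(link, 1))

def optimize(flows, capacity, candidates, alternatives):
    """candidates: congested links to relieve (step 421).
    alternatives[flow_name] -> list of alternate paths for that flow."""
    best = congestion(flows, capacity)               # baseline (step 418)
    for link in candidates:                          # outer loop (step 422)
        for name in [n for n, p in flows.items() if link in p]:
            for alt in alternatives.get(name, []):   # intermediate loop
                if link in alt:
                    continue                         # must avoid the candidate
                trial = dict(flows, **{name: alt})   # simulate (steps 423-424)
                score = congestion(trial, capacity)
                if score < best:                     # compare (step 425)
                    flows, best = trial, score       # keep revision (step 426)
                    break                            # else discard (step 427)
    return flows, best
```

In the real system, the "unit bandwidth" would be replaced by the historical traffic profiles summed per physical link, and the alternatives would come from a path-computation routine over the temporary topology copy.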
- the process illustrated in flowchart 400 can be performed on a periodic basis (rather than "on demand") or in accordance with another algorithm. For example, after a predefined amount of time has passed, the measure of network congestion could be assessed as described above in step 418, and the process allowed to continue only if network congestion had not yet been reduced by a predefined threshold margin.
- optimization goals could include maximizing network throughput or bandwidth, as well as minimizing latency, overall number of ports or DTT Devices in use, power consumption, error rates, or combinations of these various factors.
- system could optimize for these factors across individual or multiple SDN Flows or software applications, or combinations thereof, all without departing from the spirit of the present invention.
- FIG. 4A does not illustrate the consequences resulting from the differences among different Data Transfer Technologies.
- Flowchart 450 in FIG. 4B illustrates an alternative embodiment that takes certain of these differences into account, beginning with step 460, analogous to step 410 of FIG. 4A.
- an IP-based packet-switched network is generally more flexible than an optical circuit-switched network, in that it can simultaneously support many different data streams with different endpoints.
- Each packet stream transiting through a given port of an IP-based network device can be sent to different destinations, whereas all packet streams transmitted on a given wavelength across optical network devices must follow the same network route between source and destination endpoints.
- optical circuit-switched networks typically have lower latency than IP-based packet-switched networks, provide connectivity across longer distances, and use less power per network port. These differing characteristics can be exploited during the optimization process.
- MEMS devices in an optical circuit-switched network can be employed as an adjunct to an IP-based packet-switched network to reduce congestion on the packet-switched network, thus reducing the number (and thus the cost) of packet devices as well as overall power consumption.
- the SDN Flows discussed above with respect to FIG. 1B illustrate this point.
- SDN Flow B 198b relies exclusively on the packet-switched network, while network traffic on SDN Flow A 198a starts in the packet-switched network, transits through the circuit-switched network and then returns to the packet-switched network.
- It should be noted that other SDN Flows besides SDN Flow A 198a (e.g., from Application Components 122, 123 and 124) may transit between Ethernet Switch 141 and Optical Cross-Connect (MEMS) 153, but with different destination endpoints.
- For example, if only a single optical port were available for network traffic between Ethernet Switch 141 and Optical Cross-Connect (MEMS) 153, then network traffic on SDN Flow D 198d could not utilize that portion of the path taken by SDN Flow A 198a because such network traffic is destined for a different endpoint (Time Recording module 128).
- Flowchart 450 in FIG. 4B illustrates an iterative process of simulating the traffic patterns that would result if a given traffic path was moved from a packet network to an optical network for the purpose of relieving congestion detected in the packet network.
- Data Transfer Path Computation component 250 populates, in step 462, a temporary copy of Data Transfer Topology DB 270, and creates, in step 470, a list of "problem" traffic paths based upon Application Metadata collected and analyzed over time.
- In step 464, it derives a traffic profile for each SDN Flow, based on the Network Metrics collected over time - in this case from both packet-switched and circuit-switched DTT Devices. It then calculates a similar traffic profile for each physical link in step 466, again taking into account the fact that many SDN Flows can pass over the same physical link, sharing the available bandwidth.
- In step 468, it implements an algorithm to derive a measure of congestion of (in this embodiment) the packet-switched network.
- Component 250 uses the traffic path and physical link data from step 468, and the set of "problem" Application Layer traffic paths from step 470, to create, in step 472, a list of physical link candidates for optimization, as well as a set of SDN Flows that traverse that physical link.
- In step 474, while processing (in an "outer loop") each physical link candidate, and (in an "inner loop") each SDN Flow that traverses that physical link, component 250 not only seeks to identify alternative SDN Flows that avoid traversing that physical link (as in FIG. 4A), but limits those alternative SDN Flows to those that cross from the packet-switched network into the optical circuit-switched network. In other words, it seeks an "optical alternative" to reduce the congestion in the packet network.
- If no such optical alternative exists (tested in step 475), it identifies, in step 492, the set of existing "optical paths" (i.e., SDN Flows that traverse that same "entry point" from the packet network into the optical network) that, if moved back to the packet network, would "free up" an optical alternative.
- component 250 selects, in step 494, the optical path with the lowest average bandwidth, and moves that optical path back to the packet network (for simulation purposes).
- different algorithms could be employed in other embodiments, such as simulating all optical alternatives, prioritizing the selection of an optical alternative to those that exhibit low latency, etc.
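Steps 492-494, as described above, select the existing optical path with the lowest average bandwidth to move back to the packet network. A minimal sketch (data model assumed):

```python
# Sketch of steps 492-494: when no optical alternative is free, pick the
# existing optical path with the lowest average bandwidth as the one to
# move back to the packet network (for simulation purposes).
def select_path_to_evict(optical_paths):
    """optical_paths: list of dicts with 'name' and 'bw_samples_mbps' keys."""
    def avg_bw(path):
        samples = path["bw_samples_mbps"]
        return sum(samples) / len(samples) if samples else 0.0
    return min(optical_paths, key=avg_bw)
```

Choosing the lowest-bandwidth path minimizes the load added back to the packet network, which is one plausible heuristic among the alternatives (e.g., lowest latency) mentioned above.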
- component 250 selects, in step 476, all SDN Flows in the packet network that could share that same path from the packet network to the optical network. It then simulates, in step 477, the historical traffic profile with the new optical alternatives replacing the original (existing) SDN Flows, and, in step 478, reassesses the overall congestion of the packet-switched network.
- In step 480, it compares the simulated level of congestion from step 478 with the existing level of congestion calculated in step 468 to determine whether the optical alternatives would reduce this level of congestion. If so, Data Transfer Topology DB 270 is updated, in step 482, with the optical alternatives replacing the current ("packet only") SDN Flows. Otherwise, the optical alternatives are discarded in step 484.
- Component 250 determines, in step 485, whether any additional SDN Flows that traverse the candidate physical link (identified in step 472) remain to be processed. If so, it selects, in step 496, the next SDN Flow (from the list generated in step 472) and repeats step 474 for that SDN Flow (i.e., until the list of SDN Flows traversing the candidate physical link is exhausted).
- component 250 determines, in step 490, whether any candidate physical links remain to be processed. If so, it selects, in step 498, the next candidate physical link (from the list generated in step 472) and repeats step 474 for each SDN Flow that traverses that candidate physical link (until, eventually, the list of candidate physical links is exhausted, at which point the process terminates at step 499 - at least until step 325, and thus step 460, is again invoked).
- The system thus monitors and collects Application Metadata and Network Metrics (also across one or more DTTs) while software applications are running on the system. Moreover, it can optimize for overall System Performance, individual Application Performance and/or Predictive Performance over time.
- Once "optimal" SDN Paths have been identified (and updated in Data Transfer Topology DB 270), such SDN Paths must be implemented across the network. As noted above, and illustrated in FIGs. 5 and 6A-6B, the replacement of SDN Paths is performed in real time, in accordance with the present invention, in a non-disruptive manner to avoid costly software application downtime or errors.
- graph 500 illustrates various embodiments of the process of updating Data Transfer Topologies.
- graph 500 illustrates interconnected network nodes A 510, B 515, C 520, D 525, E 530, F 535, G 540, H 545, J 550, K 555, L 560, M 565, and N 570.
- graph 500 illustrates four sets of "before and after" SDN Paths which are implemented in a non-disruptive manner by flowchart 600 in FIG. 6A (where no common intermediate nodes are present between existing and desired paths) and flowchart 650 in FIG. 6B (where one or more common intermediate nodes are present between existing and desired paths).
- SDN Path 1a represents an existing SDN Path between node A 510 and node D 525 (i.e., A-B-C-D), while SDN Path 1b represents its desired replacement SDN Path between node A 510 and node D 525 (i.e., A-K-L-D).
- Flowchart 600 of FIG. 6A illustrates one embodiment of a non-disruptive algorithm for replacing an SDN Path (e.g., SDN Path 1a) with another SDN Path (e.g., SDN Path 1b) in the scenario in which the "before and after" SDN Paths share no common intermediate nodes.
- flowchart 600 illustrates a non-disruptive algorithm utilized in the Implement Updated Data Transfer Topology step 328 of FIG. 3. In particular, it addresses the replacement of an SDN Path with one that shares no common intermediate nodes with the one being replaced.
- a primary goal of this process is to facilitate a "hitless update" - i.e., a reconfiguration of the logical traffic paths in the network without triggering a loss of connectivity.
- Other goals include avoiding reductions in network bandwidth or throughput, as well as other significant interruptions in the operation of any of the software applications running on the network at the time of the reconfiguration.
- SDN Controller 210 is in a unique position to implement this process by leveraging its knowledge of the network topology and application-to-application traffic paths. In addition to this knowledge, SDN Controller 210 has programmatic control over the Forwarding Plane functionality implemented by the various DTT Devices on the network (as described above with reference to FIG. 2). It defines forwarding rules on each DTT Device, in one embodiment, based on its ingress port, egress port and a bit-mask.
- The process of flowchart 600 begins when the Implement Updated Data Transfer Topology step 325 of FIG. 3 is invoked and it is determined that the "before and after" SDN Paths share no common intermediate nodes.
- SDN Controller 210 adds the appropriate forwarding rule concurrently to each intermediate node in the desired SDN Path. For example, with reference to "after" SDN Path 1b from FIG. 5 (A-K-L-D), a forwarding rule is added to node K 555 to implement the K-L portion of the path, and a forwarding rule is added to node L 560 to implement the L-D portion of the path. Note that, at this point, all relevant traffic will still be following existing "before" SDN Path 1a (A-B-C-D) because traffic from node A 510 is still being forwarded to node B 515.
- In step 610, the "after" forwarding rule (from node A 510 to node K 555) is added to the initial node A 510 in SDN Path 1b. Then, in step 615, the existing "before" forwarding rule (from node A 510 to node B 515) is removed from node A 510.
- SDN Controller 210 then waits, in step 620, for a predetermined and configurable amount of time for network traffic currently traversing SDN Path 1a to arrive at its intended destination (node D 525). Alternatively, SDN Controller 210 waits until receiving confirmation that the "before" path is no longer carrying traffic.
- SDN Controller 210 removes, in step 625, all of the existing forwarding rules from the intermediate nodes in "before" SDN Path 1a (i.e., node B 515 and node C 520). Note that, since the "A-B" forwarding rule was removed, a sufficient amount of time has elapsed for all network traffic forwarded from node A 510 to node B 515 to arrive at its intended destination (node D 525). All subsequent relevant network traffic will have been forwarded from node A 510 to node K 555, thereby implementing a "hitless update" to "after" SDN Path 1b.
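The sequence above can be sketched in code. The `controller` object and its `add_rule`/`remove_rule` methods below are hypothetical stand-ins for the SDN Controller's programmatic control over DTT Devices, not an API disclosed in the patent:

```python
import time

def hitless_replace(controller, before, after, drain_seconds=2.0):
    """Sketch of flowchart 600: replace path `before` with `after` when
    they share no common intermediate nodes. Paths are node lists,
    e.g. ['A', 'B', 'C', 'D'] replaced by ['A', 'K', 'L', 'D']."""
    assert before[0] == after[0] and before[-1] == after[-1]
    # Step 605 (analog): program every intermediate node of the new path
    # first; traffic still follows `before` because the initial node's
    # rule is untouched.
    for i in range(1, len(after) - 1):
        controller.add_rule(node=after[i], next_hop=after[i + 1])
    # Step 610: add the new rule at the initial node (e.g., A -> K).
    controller.add_rule(node=after[0], next_hop=after[1])
    # Step 615: remove the old rule at the initial node (e.g., A -> B).
    controller.remove_rule(node=before[0], next_hop=before[1])
    # Step 620: wait for in-flight traffic on the old path to drain
    # (a confirmation callback could replace this fixed delay).
    time.sleep(drain_seconds)
    # Step 625: tear down the old path's intermediate-node rules.
    for i in range(1, len(before) - 1):
        controller.remove_rule(node=before[i], next_hop=before[i + 1])
```

Because the new intermediate rules exist before the initial node is repointed, and the old rules are removed only after the drain interval, no packet ever encounters a node without a matching forwarding rule.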
- Flowchart 650 of FIG. 6B begins when the Implement Updated Data Transfer Topology step 325 of FIG. 3 is invoked (as in FIG. 6A). Then, in step 655, SDN Controller 210 compares the "before and after" SDN Paths to determine whether they share any common intermediate nodes. If not, then, at step 672, SDN Controller 210 implements the "no common intermediate nodes" algorithm described above in flowchart 600 of FIG. 6A, after which the process ends at step 680.
- If the paths do share common intermediate nodes, SDN Controller 210 compares them in step 665 to determine whether those nodes appear in the same order in both paths. If so, it splits the paths into segments in step 674 and implements, for each segment, in step 676, the "no common intermediate nodes" algorithm described above in flowchart 600 of FIG. 6A, after which the process ends at step 680.
- The segments generated in step 674 start at the source node and run to the first common intermediate node, then to the second common intermediate node (if any), and so on.
- For SDN Paths 2a-2b, for example, which share only one common intermediate node (node D 525), two path segments are generated. The first segment of SDN Path 2a would be A-B-C-D, while the first segment of SDN Path 2b would be A-K-L-D. The second segment of SDN Path 2a would be D-E-F, while the second segment of SDN Path 2b would be D-M-F. Note that neither pair of segments shares any common intermediate nodes.
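The segmentation just described can be sketched as follows; the function name and the representation of a path as a list of node labels are illustrative assumptions:

```python
def split_at_common_nodes(before, after):
    """Split two paths at their shared intermediate nodes (the step-674
    analog), yielding segment pairs that share no common intermediate
    nodes. Assumes the shared nodes already appear in the same order
    in both paths."""
    # Shared intermediate nodes, in path order (endpoints excluded).
    common = [n for n in before[1:-1] if n in after[1:-1]]
    # Cut points: source, each common node, destination.
    cuts = [before[0]] + common + [before[-1]]
    segments = []
    for a, b in zip(cuts, cuts[1:]):
        seg_before = before[before.index(a):before.index(b) + 1]
        seg_after = after[after.index(a):after.index(b) + 1]
        segments.append((seg_before, seg_after))
    return segments
```

For SDN Paths 2a (A-B-C-D-E-F) and 2b (A-K-L-D-M-F), this yields the two segment pairs described above: (A-B-C-D, A-K-L-D) and (D-E-F, D-M-F).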
- If, however, the "before and after" SDN Paths share two or more common intermediate nodes in a different order, then SDN Controller 210 creates, in step 670, temporary paths to swap any two of the common intermediate nodes - by removing (temporarily) one of the common intermediate nodes from the existing "before" SDN Path, and then re-adding it to effectuate the swap.
- After each swap in step 670, SDN Controller 210 repeats step 665 to determine whether the "before" SDN Path still shares two or more common intermediate nodes, in a different order, with the "after" SDN Path. If not, then it performs step 674 to split the SDN Paths into segments (as explained above with respect to SDN Paths 2a and 2b) and step 676 to implement the "no common intermediate nodes" algorithm for each segment.
- If, however, the "before and after" SDN Paths still share two or more common intermediate nodes in a different order, then SDN Controller 210 repeats step 670 (and step 665) until the SDN Paths no longer share two or more common intermediate nodes in a different order, at which point it then performs steps 674 and 676. Note that the ability of the system to identify temporary paths in step 670 is subject to physical device limitations, network congestion, available bandwidth, etc.
- For SDN Paths 3a-3b, step 670 (and step 665) would be repeated only once. Either node M 565 would be removed from SDN Path 3a and then re-added before node E 530 to effectuate the swap (resulting in D-M-E-F), or node E 530 would be removed and re-added after node M 565, also resulting in D-M-E-F.
- For SDN Paths 4a-4b, steps 670 and 665 would be repeated multiple times. For example, if nodes G 540 and H 545 are initially swapped, yielding F-H-G-N-J, then nodes H 545 and N 570 would still be "out of order" and require another swap in step 670, yielding F-N-G-H-J, which would require a final swap of G 540 and H 545 (again) to finally yield F-N-H-G-J, at which point the process would proceed to step 674.
- well-known algorithms are employed to minimize the required number of "swaps" performed in step 670 to yield the desired SDN Path.
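One well-known result that applies here: the minimum number of pairwise swaps needed to transform one ordering of nodes into another equals the number of nodes minus the number of cycles in the induced permutation. The sketch below is illustrative, not the algorithm claimed in the patent:

```python
def min_swaps(current, target):
    """Minimum number of pairwise swaps turning `current` into `target`
    (a reordering of the same nodes), via cycle decomposition of the
    permutation that maps current positions to target positions."""
    pos = {node: i for i, node in enumerate(current)}
    perm = [pos[node] for node in target]
    seen = [False] * len(perm)
    swaps = 0
    for i in range(len(perm)):
        if seen[i]:
            continue
        # Walk one cycle; a cycle of length L needs L - 1 swaps.
        j = i
        length = 0
        while not seen[j]:
            seen[j] = True
            j = perm[j]
            length += 1
        swaps += length - 1
    return swaps
```

For the F-G-H-N-J example above, this computes a minimum of one swap rather than the three performed by the greedy sequence: swapping G 540 and N 570 directly yields F-N-H-G-J in a single step (subject, of course, to the availability of the corresponding temporary paths in step 670).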
- SDN Controller 210 is invoked to reoptimize SDN Paths under various conditions while software applications are running on the system, and while Application Metadata and Network Metrics are being collected in real time. For example, a user may initiate a request for such a reoptimization upon detecting a "problem" with one or more software applications or a particular DTT Device.
- SDN Controller 210 may itself automatically detect a problem and initiate such a request (e.g., based upon the detection of a new Application Component or DTT Device, the extraction and analysis of Application Metadata and Network Metrics, or other "real-time" or historical change in the operation of the network).
- this reoptimization of SDN Paths involves an iterative algorithm to reduce the number of individual SDN Paths that experience congestion by generating alternate routes across the network.
- SDN Controller 210 derives network traffic graphs based on the Network Metrics collected over time (e.g., average bandwidth measured in half-hour samples). It then overlays the time periods in which Application Metadata monitoring indicates a problem or poor performance of one or more software applications using existing SDN Paths. If software application performance problems are indicated in a sampling period in which network utilization is relatively high, for example, then the applicable SDN Paths are deemed candidates for optimization.
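The candidate-selection logic just described might be sketched as follows; the data structures, the utilization threshold, and the function name are assumptions for illustration, not part of the claimed method:

```python
# Assumed threshold for "relatively high" utilization, as a fraction
# of link capacity; the patent leaves this unspecified.
HIGH_UTILIZATION = 0.8

def optimization_candidates(bandwidth_samples, problem_periods):
    """bandwidth_samples: {path_id: [utilization per half-hour sample]}.
    problem_periods: {path_id: set of sample indices during which
    Application Metadata monitoring reported a problem}.
    Returns the path_ids deemed candidates for optimization: those with
    an application problem during a high-utilization sampling period."""
    candidates = set()
    for path_id, samples in bandwidth_samples.items():
        for t in problem_periods.get(path_id, ()):
            if samples[t] >= HIGH_UTILIZATION:
                candidates.add(path_id)
                break
    return candidates
```

A problem reported during a lightly loaded period does not, by itself, make a path a candidate: under this heuristic the cause is presumed to lie elsewhere than in path congestion.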
- SDN Controller 210 derives a set of possible alternative SDN Paths across the network (e.g., between Application Components).
- each candidate SDN Path is assigned a rank order based on the network characteristics desired to be optimized for particular network services (e.g., latency, throughput, etc.), which in turn is based on the original request as well as the observed requirements of the Application Components and DTT Devices.
- the system simulates network performance (overall, or of one or more software applications) in one embodiment by selecting "top-ranked" SDN Paths, overlaying them across the network topology and combining previously generated traffic graphs or profiles.
- SDN Controller 210 determines whether the cumulative sum of all the traffic paths that share a particular physical link has relieved (or exacerbated) network congestion. This simulation is repeated iteratively (selecting next-ranked alternative SDN Paths) until a set of SDN Paths is found which, for example, decreases the number of SDN Paths that would have experienced congestion using the historical data.
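The core of this simulation, summing per physical link the historical demand of every SDN Path that crosses it and counting the paths that would experience congestion, can be sketched as follows (names and data structures are illustrative assumptions):

```python
from collections import defaultdict

def simulate_link_loads(paths, demand):
    """Overlay a candidate set of SDN Paths on the topology and sum the
    historical demand of every path crossing each physical link.
    paths: {path_id: ['A', 'K', 'L', 'D']}; demand: {path_id: Gb/s}."""
    load = defaultdict(float)
    for path_id, nodes in paths.items():
        for link in zip(nodes, nodes[1:]):
            load[link] += demand[path_id]
    return dict(load)

def congested_paths(paths, demand, capacity):
    """Count paths traversing at least one link whose cumulative load
    exceeds its capacity -- the quantity the iteration tries to
    decrease when choosing among ranked alternative SDN Paths."""
    load = simulate_link_loads(paths, demand)
    return sum(
        any(load[link] > capacity.get(link, float('inf'))
            for link in zip(nodes, nodes[1:]))
        for nodes in paths.values())
```

Re-running `congested_paths` with next-ranked alternatives substituted into `paths` gives the comparison the iterative loop needs: accept a candidate set when its congested-path count drops below that of the existing topology.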
- SDN Controller 210 then implements these SDN Paths in the network in a non-disruptive manner as described above.
- Reducing the number of congested (or otherwise non-optimal) physical links is but one form of network optimization. Such optimization may be applied to the network as a whole (System Optimization) or limited to one or more software applications (Application Optimization), and may be based on historical as well as real-time Application Metadata and Network Metrics extracted while software applications are running on the network (Predictive Optimization).
- Other measures of "optimal" performance could be employed, such as latency, throughput, pre-FEC bit error rate degradation, or even power consumption of DTT Devices or components thereof, individually or in the aggregate.
- a performance metric extracted from an optical amplifier in a particular optical DTT Device might contribute to a higher-level problem detected in a software application that communicates over one or more SDN Paths that cross both packet and optical DTTs.
- the "optimal solution" may be an alternative set of re-routed SDN Paths that utilize a different Optical Switch and/or require dynamic reconfiguration of one or more optical cross connects or dynamic tuning of one or more transmitting lasers to implement the revised SDN Paths.
- the system of the present invention is able to continually reoptimize the SDN Paths utilized by software applications running on the network in a non-disruptive manner.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/143,726 US20140193154A1 (en) | 2010-02-22 | 2013-12-30 | Subchannel security at the optical layer |
PCT/US2014/072807 WO2015103297A1 (en) | 2013-12-30 | 2014-12-30 | Network communication methods and apparatus |
Publications (2)
Publication Number | Publication Date |
---|---|
EP3090528A1 true EP3090528A1 (en) | 2016-11-09 |
EP3090528A4 EP3090528A4 (en) | 2017-09-20 |
Family
ID=53493992
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP14876520.9A Withdrawn EP3090528A4 (en) | 2013-12-30 | 2014-12-30 | Network communication methods and apparatus |
Country Status (2)
Country | Link |
---|---|
EP (1) | EP3090528A4 (en) |
WO (1) | WO2015103297A1 (en) |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9923779B2 (en) * | 2015-07-20 | 2018-03-20 | Schweitzer Engineering Laboratories, Inc. | Configuration of a software defined network |
US10149193B2 (en) | 2016-06-15 | 2018-12-04 | At&T Intellectual Property I, L.P. | Method and apparatus for dynamically managing network resources |
US10505870B2 (en) | 2016-11-07 | 2019-12-10 | At&T Intellectual Property I, L.P. | Method and apparatus for a responsive software defined network |
US10673751B2 (en) | 2017-04-27 | 2020-06-02 | At&T Intellectual Property I, L.P. | Method and apparatus for enhancing services in a software defined network |
US10749796B2 (en) | 2017-04-27 | 2020-08-18 | At&T Intellectual Property I, L.P. | Method and apparatus for selecting processing paths in a software defined network |
US10819606B2 (en) | 2017-04-27 | 2020-10-27 | At&T Intellectual Property I, L.P. | Method and apparatus for selecting processing paths in a converged network |
US10382903B2 (en) | 2017-05-09 | 2019-08-13 | At&T Intellectual Property I, L.P. | Multi-slicing orchestration system and method for service and/or content delivery |
US10257668B2 (en) | 2017-05-09 | 2019-04-09 | At&T Intellectual Property I, L.P. | Dynamic network slice-switching and handover system and method |
CN111095882B (en) * | 2017-06-29 | 2021-06-08 | 华为技术有限公司 | System and method for predicting flows in a network |
US10070344B1 (en) | 2017-07-25 | 2018-09-04 | At&T Intellectual Property I, L.P. | Method and system for managing utilization of slices in a virtual network function environment |
US10104548B1 (en) | 2017-12-18 | 2018-10-16 | At&T Intellectual Property I, L.P. | Method and apparatus for dynamic instantiation of virtual service slices for autonomous machines |
US11005777B2 (en) | 2018-07-10 | 2021-05-11 | At&T Intellectual Property I, L.P. | Software defined prober |
CN109039612B (en) * | 2018-09-11 | 2021-03-12 | 北京智芯微电子科技有限公司 | Secure interaction method and system for software defined optical network |
CN110213129B (en) * | 2019-05-29 | 2021-07-06 | 新华三技术有限公司合肥分公司 | Forwarding path time delay detection method, controller and forwarding equipment |
CN111726255B (en) * | 2020-06-23 | 2022-10-18 | 中国工商银行股份有限公司 | Processing method and device for network change |
CN113079038B (en) * | 2021-03-24 | 2023-04-25 | 广州市百果园信息技术有限公司 | Network quality evaluation method, device, server and storage medium |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6505254B1 (en) * | 1999-04-19 | 2003-01-07 | Cisco Technology, Inc. | Methods and apparatus for routing requests in a network |
JP2004260671A (en) * | 2003-02-27 | 2004-09-16 | Nippon Telegr & Teleph Corp <Ntt> | Path extraction device, path extraction method, and path extraction program of network, and recording medium |
US7627671B1 (en) * | 2004-05-22 | 2009-12-01 | ClearApp, Inc. | Monitoring and performance management of component-based applications |
US8442030B2 (en) * | 2007-03-01 | 2013-05-14 | Extreme Networks, Inc. | Software control plane for switches and routers |
US7636789B2 (en) * | 2007-11-27 | 2009-12-22 | Microsoft Corporation | Rate-controllable peer-to-peer data stream routing |
US7978632B2 (en) * | 2008-05-13 | 2011-07-12 | Nortel Networks Limited | Wireless mesh network transit link topology optimization method and system |
US9350671B2 (en) * | 2012-03-22 | 2016-05-24 | Futurewei Technologies, Inc. | Supporting software defined networking with application layer traffic optimization |
- 2014-12-30 EP EP14876520.9A patent/EP3090528A4/en not_active Withdrawn
- 2014-12-30 WO PCT/US2014/072807 patent/WO2015103297A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2015103297A1 (en) | 2015-07-09 |
EP3090528A4 (en) | 2017-09-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10715414B2 (en) | Network communication methods and apparatus | |
EP3090528A1 (en) | Network communication methods and apparatus | |
US11700196B2 (en) | High performance software-defined core network | |
US10542076B2 (en) | Cloud service control and management architecture expanded to interface the network stratum | |
US10616074B2 (en) | System, apparatus, procedure, and computer program product for planning and simulating an internet protocol network | |
US20190280962A1 (en) | High performance software-defined core network | |
US20200021514A1 (en) | High performance software-defined core network | |
US20190372889A1 (en) | High performance software-defined core network | |
US20190280963A1 (en) | High performance software-defined core network | |
US20200021515A1 (en) | High performance software-defined core network | |
US20190238450A1 (en) | High performance software-defined core network | |
US20200014615A1 (en) | High performance software-defined core network | |
US20190238449A1 (en) | High performance software-defined core network | |
WO2020018704A1 (en) | High performance software-defined core network | |
US7734175B2 (en) | Network configuring apparatus | |
US20180069780A1 (en) | Network routing using dynamic virtual paths in an overlay network | |
Kumar et al. | A programmable and managed software defined network | |
Ouamri et al. | Request delay and survivability optimization for software defined‐wide area networking (SD‐WAN) using multi‐agent deep reinforcement learning | |
US20170104635A1 (en) | Physical adjacency detection systems and methods | |
US9521066B2 (en) | vStack enhancements for path calculations | |
Kong et al. | Network nervous system: When multilayer telemetry meets AI-assisted service provisioning | |
Chen et al. | A dynamic security traversal mechanism for providing deterministic delay guarantee in SDN | |
Orzen et al. | Routing tables as big data searchable structures for achieving real-time session fault tolerant rerouting | |
EP3824603A1 (en) | High performance software-defined core network |
Legal Events
Code | Title | Description
---|---|---
PUAI | Public reference made under article 153(3) EPC to a published international application that has entered the European phase | Free format text: ORIGINAL CODE: 0009012
17P | Request for examination filed | Effective date: 20160721
AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
AX | Request for extension of the European patent | Extension state: BA ME
DAX | Request for extension of the European patent (deleted) |
A4 | Supplementary search report drawn up and despatched | Effective date: 20170823
RIC1 | Information provided on IPC code assigned before grant | Ipc: H04L 12/725 20130101ALI20170817BHEP; Ipc: H04L 12/721 20130101ALI20170817BHEP; Ipc: H04L 29/08 20060101AFI20170817BHEP
STAA | Information on the status of an EP patent application or granted EP patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE
STAA | Information on the status of an EP patent application or granted EP patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN
18D | Application deemed to be withdrawn | Effective date: 20180320