WO2015162619A1 - Managing link failures in software defined networks - Google Patents

Managing link failures in software defined networks

Info

Publication number
WO2015162619A1
Authority
WO
WIPO (PCT)
Prior art keywords
data communication
network
network controller
sdn
communication path
Application number
PCT/IN2014/000271
Other languages
French (fr)
Inventor
Mohammed Javed PADINHAKARA
Pramod Shanbhag
Original Assignee
Hewlett-Packard Development Company, L.P.
Application filed by Hewlett-Packard Development Company, L.P. filed Critical Hewlett-Packard Development Company, L.P.
Priority to PCT/IN2014/000271 priority Critical patent/WO2015162619A1/en
Publication of WO2015162619A1 publication Critical patent/WO2015162619A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L 43/0805 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters, by checking availability
    • H04L 43/0811 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters, by checking connectivity
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/06 Management of faults, events, alarms or notifications
    • H04L 41/0654 Management of faults, events, alarms or notifications using network fault recovery
    • H04L 41/0663 Performing the actions predefined by failover planning, e.g. switching to standby network elements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/12 Discovery or management of network topologies
    • H04L 41/122 Discovery or management of network topologies of virtualised topologies, e.g. software-defined networks [SDN] or network function virtualisation [NFV]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/34 Signalling channels for network management communication
    • H04L 41/342 Signalling channels for network management communication between virtual entities, e.g. orchestrators, SDN or NFV entities
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/40 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks, using virtualisation of network functions or resources, e.g. SDN or NFV entities
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/20 Arrangements for monitoring or testing data switching networks, the monitoring system or the monitored elements being virtualised, abstracted or software-defined entities, e.g. SDN or NFV

Definitions

  • a communication network includes one or more network devices, such as network switches and network routers, apart from other components, for the purpose of transferring information amongst the end users.
  • the information is transferred over the communication network in the form of digitized data packets.
  • data packets are received at one or more input ports of the network device and are forwarded through one or more output ports of the network device.
  • the forwarding is based on a communication path or a route to be used to send the data packet to a destination device.
  • the communication path or route to be used may in turn be based on the configuration of the communication network.
  • FIG. 1 schematically illustrates a network system implemented for managing link failures in a software defined network (SDN), according to an example of the present subject matter
  • Figures 2A and 2B illustrate a master network controller of the SDN, according to an example of the present subject matter
  • Figure 3 illustrates a slave network controller of the SDN, according to an example of the present subject matter
  • Figure 4 illustrates a signal flow in the SDN for managing link failures in the SDN, according to an example of the present subject matter
  • Figure 5 illustrates another signal flow in the SDN for managing link failures in the SDN, according to an example of the present subject matter
  • Figure 6 illustrates a method for managing link failures in the SDN, in accordance with an example of the present subject matter.
  • Figure 7 illustrates a network environment for managing link failures in the SDN, in accordance with an example of the present subject matter.
  • the control logic which determines forwarding rules or conditions that allow network devices to control the flow of data packets in communication paths of the SDN, is decoupled from network devices and resides on an external device, such as a network controller of the SDN.
  • the SDNs may be implemented, for example, based on the OpenFlow technology that simplifies the functioning, configuration and troubleshooting of the network devices.
  • the network controller provides the control logic to the network devices, such as switches coupled to the network controller, based on which data communication paths for the data packets in the communication network are decided for transferring the data packets to another network device or a destination device.
  • Data communication paths are also referred to as communication paths herein for simplicity.
  • a network controller to which a network device refers for its control logic may be referred to as a master network controller of the network device.
  • in case a network device observes a link failure in a data communication path indicated by the master network controller, the network device sends an indication of failure to the master network controller. Upon receiving such an indication, the master network controller may compute an alternate data communication path, also referred to as alternate path, and may provide the same to the network device for transferring the data packets.
  • the latency involved in this approach is substantially equivalent to the time taken by a network device to notify a link failure in a communication path to the master network controller and by the master network controller to compute and send the alternate path back to the network device.
  • during the latency period, the network device could be regarded as non-functional. This can affect the flow of data packets at the network device as data packets may get dropped during this period. The latency and the consequent packet loss may affect the performance of the communication network.
  • OpenFlow provides for the master network controller to pre-compute an alternate path for each of the communication paths of the SDN and provide the alternate path to the network device on receiving the indication of a link failure.
  • the latency is at least one round-trip-time (RTT).
  • RTT herein, may be understood as the time taken by a message to flow from the network device to the master network controller and back.
  • the master network controller performs numerous functions in the communication network. Also, the load on the master network controller changes dynamically based on the traffic in the communication network and is generally high. If link failures occur at instances when the master network controller is already handling a high amount of traffic, the master network controller may consume more time to provide the alternate path. Furthermore, the additional task of computing an alternate path for each of the communication paths, may add to the load of the master network controller resulting in further delays or dropping of data packets.
  • a set of data communication paths from amongst the communication paths of the SDN are obtained.
  • the set of data communication paths may comprise data communication paths for which alternate paths may be computed.
  • the set of data communication paths may be based on Quality-of-Service (QoS), network health parameters or inputs from an administrator of the SDN.
  • alternate paths may be computed for communication paths in the set of data communication paths by a processing resource other than the master network controller, for example, a slave network controller.
  • the slave network controller may be a redundant network controller implemented in the SDN for managing link failures or excess traffic.
  • FIG. 1 schematically illustrates a network environment 100 implemented to manage link failures in a SDN 102, according to an example of the present subject matter.
  • the data may be communicated between a plurality of host devices 104-1, 104-2, 104-3, 104-N, collectively referred to as host devices 104.
  • the host devices 104 may include devices that allow transmission and reception of data to and from other host devices.
  • the host devices 104 may include, but are not limited to, mobile phones, smart phones, PDAs, tablets, desktop computers, laptops, servers, mainframe computers, and the like, belonging to an end user, such as an individual, a service provider, an organization or an enterprise.
  • the network environment 100 may be understood as a public or a private network system implementing the SDN 102 over which the host devices 104 may communicate with each other.
  • the SDN 102 may be configured to function based on OpenFlow communication methodology for communication of data.
  • the SDN 102 may be implemented as a wireless network or a wired network, or a combination thereof.
  • the SDN 102 can be an individual network or a collection of many such individual networks, interconnected with each other and functioning as a single large network, e.g., the Internet or an intranet.
  • the SDN 102 can be implemented as one of the different types of networks, such as local area network (LAN), wide area network (WAN), and such.
  • the SDN 102 may either be a dedicated network or a shared network, which represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), etc., to communicate with each other.
  • the SDN 102 may also include individual networks, such as, but not limited to, GSM network, UMTS network, LTE network, PCS network, TDMA network, CDMA network, NGN, PSTN, and ISDN.
  • the host devices 104 work on communication protocols that are compatible with the SDN 102 to which the host devices 104 are coupled.
  • the SDN 102 implements at least one master network controller 106 and one or more network devices 108-1, 108-2, 108-3, 108-N for establishing communication paths between the host devices 104 for transferring data in the form of data packets between the host devices 104.
  • the network devices 108-1, 108-2, 108-3, 108-N are hereinafter referred to as network devices 108.
  • the communication paths between the host devices 104 may be enabled through the network devices 108 in a desired form of communication, for example, via dial-up connections, cable links, and digital subscriber lines (DSL), wireless or satellite links, or any other suitable form of communication.
  • the network devices 108 may include, but are not limited to network switches and network routers. Apart from such network devices 108, the network environment 100 may include network hubs, Host Bus Adaptors (HBAs) and other network entities.
  • the master network controller 106 can communicate with the network devices 108 for the purpose of providing the control logic to the network device 108, based on which the communication path for the data packets in the SDN 102 may be decided for transferring the data packets to another network device or an end host device.
  • control logic may be understood as the logic for controlling the forwarding behavior of the network devices 108 or the flow of data packets through the network devices 108.
  • although Figure 1 shows one master network controller 106, the SDN 102 may include more than one master network controller 106.
  • Each master network controller 106 can communicate with a group of network devices for controlling the flow of data packets through the respective group of network devices.
  • the master network controller 106 provides a control logic in the form of one or more flow entries, to each of the network devices 108.
  • Each flow entry includes fields related to a flow matching condition and an action to be performed at a network device 108.
  • the flow matching condition field may include sub-fields, such as source and destination Internet Protocol (IP) addresses, source and destination Media Access Control (MAC) addresses, source and destination port numbers, a type value, IP number, Virtual Local Area Network (VLAN) number, and so on. Some of the mentioned sub-fields correspond to sub-fields of a packet header of the data packet received by a network device 108.
  • the action field includes a predefined action rule, which is executed at the network device 108 for the purpose of forwarding the data packet.
  • the flow entries sent by the master network controller 106 are saved in a flow table at the network device 108.
  • a data packet received at an input port of a network device 108 may be from another network device 108 or a host device 104.
  • the flow table is looked up and the packet header of the data packet is compared with the flow matching conditions of the flow entries in the flow table.
  • the action associated with the matched flow entry is executed to forward the data packet to an output port of the network device 108, or the data packet is dropped or forwarded to the master network controller 106, in a generally known manner.
  • for a network device 108, such as a switch 108-1, a flow entry may, for example, indicate that the data packets received by the switch 108-1 from input port 1 and having the destination IP attribute 10.0.0.1 in the packet header shall be forwarded to output port 2 for further transferring the data packet to another switch or a host device 104.
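  • As an illustration of this match-and-forward behaviour, the following Python sketch models a simplified flow table holding the example entry for switch 108-1 (input port 1, destination IP 10.0.0.1, forward to output port 2). It is a minimal, hypothetical model written for clarity; the class and field names (FlowEntry, in_port, dst_ip) are illustrative and are not the OpenFlow data structures themselves.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlowEntry:
    """Simplified flow entry: a match condition, an action and a priority."""
    match: dict          # e.g. {"in_port": 1, "dst_ip": "10.0.0.1"}
    action: str          # e.g. "output:2", "drop" or "to_controller"
    priority: int = 100  # higher value wins when several entries match

class FlowTable:
    """Illustrative flow table kept on a network device, e.g. switch 108-1."""
    def __init__(self):
        self.entries = []

    def add(self, entry: FlowEntry) -> None:
        self.entries.append(entry)

    def lookup(self, packet: dict) -> Optional[FlowEntry]:
        # Compare the packet header against each entry's match fields and
        # return the highest-priority entry whose fields all match.
        candidates = [e for e in self.entries
                      if all(packet.get(k) == v for k, v in e.match.items())]
        return max(candidates, key=lambda e: e.priority, default=None)

# Example entry for switch 108-1: packets arriving on input port 1 and
# destined to 10.0.0.1 are forwarded out of output port 2.
table = FlowTable()
table.add(FlowEntry(match={"in_port": 1, "dst_ip": "10.0.0.1"}, action="output:2"))

packet = {"in_port": 1, "dst_ip": "10.0.0.1", "src_ip": "10.0.0.7"}
entry = table.lookup(packet)
print(entry.action if entry else "no match: drop or forward to the controller")
```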
  • the flow of data packets in the SDN 102 occurs along communication paths of the SDN 102 as determined by the master network controller 106.
  • Link failures may occur in any of the communication paths of the SDN 102 due to various reasons.
  • the output port to which the data packets are to be forwarded based on the flow entry may be down, for example, the output port may have been rendered non-functional.
  • in such a case, a link failure may be said to have occurred in the communication path determined by the flow entry provided by the master network controller 106 for the switch 108-1.
  • the communication path determined by the flow entry provided by the master network controller 106 that experiences a link failure may be referred to as a failed communication path.
  • the master network controller 106 is notified of such link failures by a network device 108 associated with a failed communication path, i.e., the network device 108 to which a flow entry corresponding to the failed communication path is provided. Generally, based on such a notification, the master network controller 106 computes another communication path as an alternate to the failed communication path and provides the same to the network device 108. In an example, the master network controller 106 may send an updated flow entry with a revised action field according to the availability of another output port, such that the data packet can be re-forwarded to that output port.
  • the latency involved in this methodology of addressing link failures may result in packet loss, delay in packet forwarding and may in turn adversely affect the performance and the QoS of the SDN.
  • the master network controller 106 may pre-compute alternate paths for each of the communication paths of the SDN 102 such that the latency is limited to one RTT. However, pre-computation of alternate paths for each of the communication paths often results in overutilization of the master network controller 106.
  • systems and methods for managing link failures in the SDN 102 based on computing alternate paths for a set of predetermined communication paths 110, referred to as the set of paths 110, from amongst the communication paths of the SDN are described. Further, the systems and methods also describe computing the alternate paths by a computing resource other than the master network controller 106 to ensure that the utilization of the master network controller 106 does not exceed a predetermined threshold, which may otherwise result in a drop in the QoS of the SDN 102.
  • the computing resource may be a slave network controller 112.
  • the slave network controller 112, in one example, may be a redundant network controller in the SDN 102.
  • the slave network controller 112 may have the same or lower computational capabilities as compared to those of the master network controller 106.
  • the slave network controller 112 may be coupled to the network devices 108 associated with the set of paths 110.
  • the set of paths 110 may comprise communication paths of the SDN 102 that may be determined based on factors, such as QoS parameters, network health parameters and inputs from an administrator of the SDN 102. The process of determining the set of paths 110 from amongst the communication paths of the SDN 102 has been explained later in the specification.
  • the slave network controller 112 may detect a link failure in any of the communication paths from amongst the set of paths 110, for example, based on an indication of failure received from a network device 108 associated with the failed communication path.
  • the slave network controller 112 may compute and provide an alternate to the failed communication path to the master network controller 106.
  • the master network controller 106 is coupled to the network devices 108 associated with a failed communication path and receives the indication of failure.
  • the master network controller 106 may activate the alternate path in the network devices 108 upon receiving the indication of failure of the communication path.
  • the interaction of the master network controller 106, the slave network controller 112 and the network devices 108 associated with the failed communication path has been elaborated subsequently with reference to the signal flow diagrams depicted in figure 4 and figure 5.
  • a network device 108 may be coupled to more than one slave network controller 112. Also, the network devices 108 may be coupled to different slave network controllers of the SDN 102.
  • the slave network controller 112 may be a master network controller for one or more network devices (not shown in figures). Further, in case of failure of the master network controller 106 of a network device 108, any one of the slave network controllers 112 may replace the master network controller 106 of that network device 108.
  • a network device 108 of the SDN may be coupled to a network controller in a slave mode or a master mode.
  • a network device 108 may be said to be coupled to the master network controller 106 in a master mode, while the network device 108 may be said to be coupled to the slave network controller 112 in a slave mode.
  • the function of providing the control logic to a network device 108 may be performed by the master network controller 106 and not the slave network controller 112 of the network device 108.
  • a network controller may write flow entries to the flow table of a network device 108, or, in other words, configure communication paths for those network devices 108 that are coupled to the network controller in a master mode.
  • the slave network controller 112 may compute alternate paths for the network devices 108 coupled to the slave network controller 112 in a slave mode; however, the alternate paths computed by the slave network controller 112 are configured on the network devices 108 by the master network controller 106.
  • FIG. 2A and 2B illustrate a master network controller of the SDN, according to an example of the present subject matter while figure 3 illustrates a slave network controller of the SDN, according to an example of the present subject matter.
  • the master network controller 106 includes a processor 202, and a network monitoring module 204 and a topology configuration module 206, both coupled to the processor 202.
  • the processor 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions.
  • the processor(s) 202 is configured to fetch and execute computer-readable instructions stored in the memory.
  • processors may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software.
  • the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared.
  • explicit use of the term "processor" should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read only memory (ROM) for storing software, random access memory (RAM), and non-volatile storage.
  • the network monitoring module 204 communicates with network devices 108 to identify a failed communication path of the SDN 102.
  • the network device 108 may send an indication of failure to the network monitoring module 204 in case a communication path associated with any one of the network devices 108 is rendered non-functional.
  • the network monitoring module 204 may then determine whether the communication path is from amongst the set of paths 110 that have been pre-identified in the SDN 102. In one example, to ascertain whether the communication path is from amongst the set of paths 110, the network monitoring module 204 may maintain a list comprising the set of paths 110 in the master network controller 106. In one example, the set of paths 110 may be periodically updated. The periodic updating of the set of paths 110 has been discussed subsequently.
  • if the network monitoring module 204 determines that the communication path is from amongst the set of paths 110, the network monitoring module 204 invokes the topology configuration module 206 to provide an alternate to the failed communication path to the network devices 108.
  • the topology configuration module 206 receives a network topology from a slave network controller, such as the slave network controller 112 of the network device 108.
  • the network topology may be understood as a map of a communication network, such as the SDN 102, indicating how the various network devices of the communication network may be coupled to each other and to the one or more network controllers of the communication network to form different communication paths for forwarding the data packets from a source to a destination.
  • the network topology provided by the slave network controller is indicative of the alternate to the failed communication path. Accordingly, the topology configuration module 206 configures the network device 108 based on the network topology to provide the alternate path to the network device 108.
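  • To make the use of the topology concrete, the Python sketch below computes an alternate to a failed communication path over a simple adjacency-list view of the network, using a breadth-first search that avoids the failed link. This is only one possible way a module such as the topology generation module 314 might derive an alternate path; the graph representation, function name and example node names are assumptions rather than the patent's implementation.

```python
from collections import deque

def alternate_path(topology, src, dst, failed_link):
    """Breadth-first search for a path from src to dst that avoids failed_link.

    topology   : dict mapping a node to the set of its neighbours,
                 e.g. {"s1": {"s2", "s3"}, ...}
    failed_link: tuple (u, v) identifying the link that went down
    Returns a list of nodes, or None if no alternate path exists.
    """
    bad = {failed_link, (failed_link[1], failed_link[0])}  # block both directions
    queue = deque([[src]])
    visited = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for neighbour in topology.get(node, ()):
            if (node, neighbour) in bad or neighbour in visited:
                continue
            visited.add(neighbour)
            queue.append(path + [neighbour])
    return None

# Example: the link s1-s2 has failed; the traffic is routed around it via s3.
topology = {"s1": {"s2", "s3"}, "s2": {"s1", "s4"}, "s3": {"s1", "s4"}, "s4": {"s2", "s3"}}
print(alternate_path(topology, "s1", "s4", ("s1", "s2")))  # ['s1', 's3', 's4']
```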
  • the master network controller 106 performs the task of identifying the set of paths 110. This may be explained in reference to figure 2B that depicts interface(s) 208 coupled to the processor 202.
  • the interfaces 208 may include a variety of software and hardware interfaces that allow the master network controller 106 to interact with the network devices 108 and with other network controllers, such as the slave network controller 112 of the SDN 102. Further, the interfaces 208 may enable the master network controller 106 to communicate with other communication and computing devices, such as host devices 104 and other network entities on the SDN 102.
  • the interfaces 208 may facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, for example, WLAN, cellular, satellite-based networks, etc.
  • the interface(s) 208 allow the network monitoring module 204 to monitor the network devices 108, coupled to the master network controller 106, to determine if any of the communication paths associated with any of the network devices 108 may be included in the set of paths 110. For instance, the network monitoring module 204 may identify certain communication paths to be prone to frequent link failures. For this purpose, the network monitoring module 204 may use historic data relating to the status of such communication paths that may be stored as network data 210 in the master network controller 106. Such frequent failures may occur due to various reasons and affect the QoS of the SDN 102.
  • such communication paths may be included in the set of data communication paths so that a service level agreement (SLA), which defines a minimum QoS for the SDN 102, is delivered.
  • the SLA and the QoS parameters may be stored as QoS data 212 in the master network controller 106.
  • the network monitoring module 204 may be coupled to other network entities, such as a network monitoring system (not shown in figures) associated with the SDN 102, to identify the set of paths 110.
  • network monitoring systems are associated with a communication system to monitor the overall health and performance of the communication system. Such systems operate in a generally known manner to identify components that may have failed or whose performance may be below a predefined threshold.
  • the network monitoring system may provide inputs relating to the overall health and performance of the SDN 102, referred to as network health parameters, to the network monitoring module 204.
  • network health parameters may also be stored as the network data 210 in the master network controller 106. Based on the network health parameters received from the network monitoring system, the network monitoring module 204 may determine one or more communication paths that may be included in the set of paths 110.
  • the network monitoring module 204 may implement functionalities of a network monitoring system. In such an example implementation, the network monitoring module 204 may determine network health parameters without relying on an external network monitoring system and may identify the set of paths 110 based on the network health parameters it determines.
  • the set of paths 110 may be identified based on inputs from an administrator of the SDN 102.
  • the administrator may define the set of paths 110 to comprise communication paths that handle communication between a group of privileged end users, for example, a group of financial institutions or a group of inter-governmental institutions.
  • the administrator may define short-lived flows in the master network controller 106. Short-lived flows are transient flows that are defined for a predefined time duration between network devices 108 of the SDN 102. Based on the purpose for which a short-lived flow is defined, for example, in case a short-lived flow has been defined to provide a hotline between two end users, the communication paths associated with the short-lived flows may be included in the set of paths 110.
  • communication paths that handle high priority data may form the set of paths 110.
  • the network monitoring module 204 may identify one or more communication paths to carry high priority data based on an end application associated with such data.
  • communication paths that carry VoIP (voice over Internet Protocol) data may be included in the set of paths 110.
  • data packets encrypted according to a predetermined data security protocol may be identified as high priority data, such as data pertaining to financial transactions, and communication paths that transfer such data may be included in the set of paths 110.
  • the list comprising the set of paths 110 in the data 218 of the master network controller 106 is periodically updated by the network monitoring module 204 based on the set of paths 110 that have been identified by the network monitoring module 204 in the above described manner.
  • a periodicity of updating the list comprising the set of paths 110 and a number of communication paths that may be included in the set of paths 110 may be defined by the administrator of the SDN 102.
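  • The selection criteria described above can be summarised in a short sketch. The Python helper below is a hypothetical illustration of how a network monitoring module might build the list comprising the set of paths 110 from failure history, traffic volume, application priority, traffic type and administrator input; the thresholds and field names are assumptions, not values taken from the patent.

```python
def identify_set_of_paths(paths, admin_selected, failure_threshold=3,
                          traffic_threshold_mbps=500, priority_threshold=5):
    """Return identifiers of the communication paths to include in the set of
    paths 110, i.e. the paths for which alternate paths will be pre-computed.

    paths          : iterable of dicts describing communication paths, e.g.
                     {"id": "p7", "recent_failures": 4, "traffic_mbps": 120,
                      "app_priority": 8, "application": "voip"}
    admin_selected : path identifiers explicitly chosen by the administrator
    """
    selected = set(admin_selected)
    for p in paths:
        if p.get("recent_failures", 0) >= failure_threshold:       # failure-prone paths
            selected.add(p["id"])
        elif p.get("traffic_mbps", 0) >= traffic_threshold_mbps:   # heavily loaded paths
            selected.add(p["id"])
        elif p.get("app_priority", 0) >= priority_threshold:       # high-priority applications
            selected.add(p["id"])
        elif p.get("application") == "voip":                       # e.g. VoIP traffic
            selected.add(p["id"])
    return selected

# The resulting list would be refreshed periodically, at an interval and up to
# a number of paths defined by the administrator of the SDN.
```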
  • the master network controller 106 performs numerous other functions of the SDN 102 for which the master network controller 106 may comprise additional modules and data.
  • the master network controller 106 comprises a memory 214 coupled to the processor 202, and modules 216 and data 218 that may reside in the memory 214.
  • the modules 216 may include a control module 220 and other modules 222 in addition to the aforementioned network monitoring module 204 and topology configuration module 206.
  • the data 218 may include other data 224 in addition to the set of paths 110, network data 210 and QoS data 212.
  • the memory 214 may include any computer-readable medium known in the art including, for example, volatile memory (e.g., RAM), and/or non-volatile memory (e.g., EPROM, flash memory, etc.).
  • the modules 216 include routines, programs, objects, components, data structures, and the like, which perform particular tasks or implement particular abstract data types.
  • the modules 216 further include modules that supplement applications on the master network controller 106, for example, modules of an operating system.
  • the data 218 serves, amongst other things, as a repository for storing data that may be fetched, processed, received, or generated by one or more of the modules 216.
  • control module 220 may compute control logic for all network devices 108 coupled to the master network controller 106.
  • control logic is in the form of flow entries that may be written onto a flow table of a network device 108 by the topology configuration module 206.
  • the control module 220 may compute alternate paths in case of failure of a communication path that is not included in the set of paths 110 and provide the same to the topology configuration module 206 for configuration of the alternate path in an associated network device 108.
  • the other module(s) 222 may include programs or coded instructions that supplement applications and functions, for example, programs in the operating system of the master network controller 106, and the other data 224 comprises data corresponding to one or more other module(s) 222.
  • FIG. 3 illustrates a slave network controller, such as the aforementioned slave network controller 112, according to an example of the present subject matter.
  • the slave network controller 112 may be a redundant network controller that may be implemented in the SDN 102 as a fallback to the master network controller 106 and may be called upon to perform functions of the master network controller 106 in case of failure of the master network controller 106.
  • the slave network controller 112 may be a redundant network controller that may be implemented in the SDN 102 for managing link failures or excess traffic.
  • the slave network controller 112 comprises a processor 302, interface(s) 304, and memory 306 coupled to the processor 302.
  • the processor 302, interface(s) 304, and memory 306 of the slave network controller 112 may be implemented in a manner similar to the processor 202, interface(s) 208, and memory 214, respectively, of the master network controller 106, and may operate likewise.
  • the slave network controller 112 comprises modules 308 and data 310.
  • the modules 308 include a network monitoring module 312, a topology generation module 314, and other module(s) 316, while the data 310 may include the set of paths 110, network data 318, QoS data 320 and other data 322.
  • the slave network controller 112 may be coupled to the network devices 108 to monitor the network devices 108 to determine a link failure in a communication path.
  • the network monitoring module 312 of the slave network controller 112 monitors the network devices 108 to identify a failed communication path in a manner similar to the network monitoring module 204 of the master network controller 106.
  • the network monitoring module 312 of the slave network controller 112 may identify the set of paths 110 from amongst the communication paths of the SDN 102.
  • the network monitoring module 312 may monitor the various network devices 108, for example, in interaction with the network monitoring system.
  • upon determining the failed communication path to be from amongst the set of paths 110, the network monitoring module 312 calls upon the topology generation module 314 of the slave network controller 112.
  • the topology generation module 314 computes a network topology that includes an alternate to the failed communication path and provides the same to the topology configuration module 206 of the master network controller 106, which in turn configures the network topology in the SDN 102.
  • the slave network controller 112, coupled to the network device 108 in a slave mode, may be coupled to another set of network devices (not shown in the figures) in a master mode.
  • the slave network controller 112 may include a control module (not shown in the figures), similar to the control module 220 of the master network controller 106, to compute control logic for the set of network devices, and a topology configuration module (not shown in the figures), similar to the topology configuration module 206 of the master network controller 106, to configure the set of network devices based on the control logic so computed.
  • the interaction between the master network controller 106 and the slave network controller 112 for managing link failures in the SDN 102 is further described in conjunction with the signal flow diagrams illustrated in figure 4 and figure 5, in accordance with one example of the present subject matter.
  • the various arrow indicators used in the signal flow diagrams depict the transfer of information between the network device 108, the master network controller 106 of the network device 108, and one or more slave network controllers 112-1, 112-2, 112-n of the network device 108.
  • multiple network entities besides those shown may lie between the network device 108 and the master network controller 106 or the slave network controllers 112-1, 112-2, 112-n.
  • a first path from amongst the set of paths 110 may be considered to have failed, and a second and a third path from amongst the plurality of paths of the SDN 102 may be identified for being provided as alternates to the first path, in accordance with the present subject matter. It will be appreciated by one skilled in the art that the concepts explained in the context of the first and second paths may be extended to any communication path of the SDN 102 in a similar manner.
  • a first slave network controller 112-1 provides the second path to the master network controller 106 of the network device 108.
  • the master network controller 106 may deploy the second path on the network device 108, at step 404.
  • the topology configuration module 206 of the master network controller 106 may write a flow entry for the second path in a flow table of the network device 108 without activating the same.
  • the flow entry for the second path may have the same matching condition as that of the first path but a lower priority than that of the first path.
  • upon receiving a first path failure indication at step 406, the master network controller 106 may activate the second path in the network device 108.
  • the topology configuration module 206 may assign the priority of the failed path, i.e., the first path in the present case, to the second path.
  • the topology configuration module 206 may delete the flow entry corresponding to the failed path, i.e., the first path, from the flow table of the network device 108. As evident, since the second path has the same matching condition as that of the first path and the flow entry corresponding to the first path, which was assigned a higher priority, is deleted, the second path gets activated and acts as an alternate to the first path.
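  • A minimal Python sketch of this deploy-then-activate sequence is given below, assuming the same simplified flow-entry model used in the earlier sketch. The backup entry for the second path carries the same match fields as the primary entry for the first path but a lower priority; on a failure indication, the primary entry is deleted and the backup entry takes over. The helper names (deploy_backup, activate_backup) are hypothetical and introduced only for illustration.

```python
from dataclasses import dataclass

@dataclass
class FlowEntry:          # same illustrative structure as in the earlier sketch
    match: dict
    action: str
    priority: int = 100

def deploy_backup(flow_table, primary, backup_action):
    """Pre-install the alternate path without activating it: the backup entry
    has the same match condition as the primary entry but a lower priority."""
    backup = FlowEntry(match=dict(primary.match), action=backup_action,
                       priority=primary.priority - 10)
    flow_table.append(backup)
    return backup

def activate_backup(flow_table, primary, backup):
    """On a failure indication for the primary path, delete the primary entry
    and give the backup the primary's priority, so that traffic immediately
    matches the backup entry, i.e. the alternate path."""
    flow_table.remove(primary)
    backup.priority = primary.priority

# First path: forward to output port 2; second (backup) path: output port 3.
flow_table = []
primary = FlowEntry(match={"in_port": 1, "dst_ip": "10.0.0.1"}, action="output:2")
flow_table.append(primary)
backup = deploy_backup(flow_table, primary, backup_action="output:3")  # deploy (step 404)
# ... first path failure indication received (step 406) ...
activate_backup(flow_table, primary, backup)                           # activate
```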
  • at step 410, the first slave network controller 112-1 provides an indication to compute the third path to the second slave network controller 112-2.
  • the first slave network controller 112-1 is coupled to the network device 108 and may accordingly provide the indication to the second slave network controller 112-2 upon detecting that the second path has been activated.
  • the second slave network controller 112-2 computes and provides the third path to the master network controller 106, which in turn deploys the third path on the network device 108, at step 414.
  • thus, a fallback to the second path, i.e., the third path in the present case, is available for the network device 108 to avoid any latency in case of a failure of the second path.
  • the master network controller 106, at step 418, activates the third path.
  • an indication to compute yet another alternate path may be received by the Nth slave network controller 112-n, at step 420, in response to which the slave network controller 112-n may provide the Nth path to the master network controller 106 at step 422 for deployment and activation in a manner explained above.
  • Figure 5 depicts a signal flow similar to that shown in figure 4.
  • the deployment and activation of an alternate path may occur upon failure of the first path.
  • the deployment and activation may take place simultaneously or in quick successions.
  • once an alternate path is deployed as well as activated on a network device, the network device may be said to have been configured with the alternate path, wherein the deployment and activation may occur either at the same instance of time, as shown in figure 5, or at different instances, as shown in figure 4.
  • the first path failure indication is received at step 502, in response to which the first slave network controller 112-1 provides the second path to the master network controller 106 at step 504.
  • the master network controller 106 may then, at step 506, configure the second path on the network device 108.
  • the topology configuration module 206 of the master network controller 106 may write a flow entry for the second path in the flow table of the network device 108 and activate the same, for example, by deleting the flow entry corresponding to the first path.
  • the first slave network controller 112-1 provides an indication to compute the third path to the second slave network controller 112-2, as discussed earlier with respect to step 410 of figure 4.
  • the master network controller 106 may receive the third path from the second slave network controller 112-2, at step 512, and configure the third path on the network device 108 at step 514.
  • Steps 516 to 522 may be further carried out in a manner as explained above to manage multiple link failures efficiently in the SDN 102.
  • Figure 6 illustrates method 600 for managing link failures in a SDN 102, in accordance with an example of the present subject matter.
  • the order in which the method 600 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 600, or an alternative method. Additionally, individual blocks may be deleted from the method 600 without departing from the spirit and scope of the subject matter described herein.
  • the method 600 can be implemented in any suitable hardware, software, firmware, or combination thereof.
  • steps of the method 600 can be performed by programmed computing devices.
  • the method 600 may also be implemented through program storage devices, for example, digital data storage media, which are machine or computer readable and encode machine-executable or computer-executable programs of instructions, wherein said instructions perform some or all of the steps of the described method.
  • the program storage devices may be, for example, digital memories, magnetic storage media, such as magnetic disks and magnetic tapes, hard drives, or optically readable digital data storage media.
  • the examples are also intended to cover both a communication network and a communication system configured to perform said steps of the example method.
  • the method 600 for managing link failures may be implemented in a variety of communication systems working in different communication network environments. In the examples described in figure 6, the method 600 is explained in the context of the aforementioned SDN 102 for ease of understanding.
  • a first slave network controller of a network device of the SDN obtains a set of paths from amongst a plurality of data communication paths in the network topology of the SDN.
  • in one example, the set of paths may be identified by the first slave network controller, while, in another example, the first slave network controller may obtain the set of paths from a master network controller of the network device.
  • the set of paths comprises those communication paths of the SDN that may be given priority over other communication paths of the SDN for being provided with an alternate path in case of a link failure in any of the communication paths included in the set of paths.
  • for a first data communication path from amongst the set of paths, the first slave network controller determines a second data communication path from amongst the plurality of data communication paths of the SDN, wherein the second data communication path is an alternate to the first data communication path.
  • the first slave network controller provides the second data communication path to a master network controller of the network device.
  • the master network controller configures the network device to transfer data packets through the second data communication path in case of a failure of the first data communication path to manage link failures in the SDN.
  • the first slave network controller may provide an indication to a second slave network controller of the network device to determine a third communication path from amongst the plurality of data communication paths of the SDN.
  • the third communication path is another alternate to the first communication path and may also be provided to the master network controller of the network device.
  • the master network controller may configure the network device to transfer data packets through the third communication path in case of a failure of the second communication path.
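  • The sequence of operations described above can be summarised in the following Python sketch, written from the point of view of an orchestrating routine; the three callables stand in for the first slave network controller, the second slave network controller and the master network controller, and their interfaces are assumptions, since the patent leaves the concrete APIs open.

```python
def manage_link_failures(set_of_paths, compute_alternate_first_slave,
                         compute_alternate_second_slave, configure_on_master):
    """Illustrative orchestration of the method for managing link failures.

    compute_alternate_first_slave(path)             -> alternate path (second path)
    compute_alternate_second_slave(path, avoid=...) -> further alternate (third path)
    configure_on_master(primary=..., alternate=...) -> configures the network device
    """
    for first_path in set_of_paths:
        # Determine a second path as an alternate to the first path
        # (computed on the first slave network controller).
        second_path = compute_alternate_first_slave(first_path)

        # Provide the alternate to the master network controller, which
        # configures the network device to use it if the first path fails.
        configure_on_master(primary=first_path, alternate=second_path)

        # Indicate to the second slave controller that a third path should be
        # computed, to be used should the second path fail in turn.
        third_path = compute_alternate_second_slave(first_path, avoid=second_path)
        configure_on_master(primary=second_path, alternate=third_path)
```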
  • Figure 7 illustrates an example network environment 700 implementing a non-transitory computer readable medium for managing link failures in a SDN, in accordance with an example of the present subject matter.
  • the network environment 700 may be a public networking environment or a private networking environment.
  • the network environment 700 includes a processing resource 702 communicatively coupled to a non-transitory computer readable medium 704 through a communication link 706.
  • the processing resource 702 can be a processor of a slave network controller, such as the slave network controller 112.
  • the non-transitory computer readable medium 704 can be, for example, an internal memory device or an external memory device.
  • the communication link 706 may be a direct communication link, such as one formed through a memory read/write interface.
  • the communication link 706 may be an indirect communication link, such as one formed through a network interface.
  • the processing resource 702 can access the non-transitory computer readable medium 704 through a network 708.
  • the network 708 may be a single network or a combination of multiple networks and may use a variety of different communication protocols.
  • the processing resource 702 and the non-transitory computer readable medium 704 may also be communicatively coupled to data sources 710 over the network 708.
  • the data sources 710 can include, for example, databases and computing devices.
  • the data sources 710 may be used by an administrator of the SDN and other users to communicate with the processing resource 702.
  • the non-transitory computer readable medium 704 includes a set of computer readable instructions, such as instructions for implementing the network monitoring module 312 and the topology generation module 314.
  • the set of computer readable instructions can be accessed by the processing resource 702 through the communication link 706 and subsequently executed to perform acts for managing link failures in the SDN.
  • the instructions can cause the processing resource 702 to obtain a set of paths 110 that have been pre-identified in the SDN 102 for being provided with an alternate path upon their failure.
  • the processing resource 702 may identify the set of paths 110 from amongst a plurality of communication paths of the SDN.
  • the processing resource 702 may determine transient flows existing in the SDN to identify the set of paths 110.
  • the processing resource 702 may monitor the SDN to identify communication paths that experience a volume of traffic greater than a predefined threshold.
  • the processing resource 702 may also monitor the SDN to identify communication paths associated with applications that have a priority greater than a predefined threshold.
  • the threshold for the volume of traffic and the priority of end applications supported by the SDN may be defined by an administrator of the SDN.
  • the processing resource 702 may identify such communication paths based on the volume of traffic and the priority of end applications and include them in the set of paths 110.
  • the instructions can cause the processing resource 702 to compute a second communication path as an alternate to the first communication path.
  • the alternate path may be provided to a network controller coupled to the network device 108 in a master mode for configuring the alternate path onto the network device 108.
  • the instructions can also cause the processing resource 702 to trigger a slave network controller of the network device to identify another alternate to the first communication path, i.e., a third communication path.
  • a third communication path may be utilized for data transfer upon a failure of the second communication path.
  • the methods and systems of the present subject matter provide for minimizing the latency and computational resources involved in managing link failures.
  • although implementations for the network environment 100 and the SDN 102 have been described in language specific to structural features and/or methods, it is to be understood that the appended claims are not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as example implementations for managing link failures in SDNs.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Environmental & Geological Engineering (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Examples of techniques to manage link failures in a Software Defined Network (SDN) are described. A set of communication paths of the SDN is obtained. For a network device associated with a first data communication path, from amongst the set of communication paths, a second data communication path is provided to a master network controller of the network device. The second data communication path is an alternate to the first data communication path. The computation of the second data communication path is carried out in a processing resource that is connected to the network device in a slave mode.

Description

MANAGING LINK FAILURES IN SOFTWARE DEFINED NETWORKS
BACKGROUND
[0001] Communication networks are vastly utilized and relied upon across the globe to share information between two or more end users. A communication network includes one or more network devices, such as network switches and network routers, apart from other components, for the purpose of transferring information amongst the end users.
[0002] The information is transferred over the communication network in the form of digitized data packets. At a network device, data packets are received at one or more input ports of the network device and are forwarded through one or more output ports of the network device. The forwarding is based on a communication path or a route to be used to send the data packet to a destination device. The communication path or route to be used may in turn be based on the configuration of the communication network.
BRIEF DESCRIPTION OF DRAWINGS
[0003] The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the figures to reference like features and components:
[0004] Figure 1 schematically illustrates a network system implemented for managing link failures in a software defined network (SDN), according to an example of the present subject matter;
[0005] Figures 2A and 2B illustrate a master network controller of the SDN, according to an example of the present subject matter;
[0006] Figure 3 illustrates a slave network controller of the SDN, according to an example of the present subject matter;
[0007] Figure 4 illustrates a signal flow in the SDN for managing link failures in the SDN, according to an example of the present subject matter;
[0008] Figure 5 illustrates another signal flow in the SDN for managing link failures in the SDN, according to an example of the present subject matter;
[0009] Figure 6 illustrates a method for managing link failures in the SDN, in accordance with an example of the present subject matter; and
[0010] Figure 7 illustrates a network environment for managing link failures in the SDN, in accordance with an example of the present subject matter.
DETAILED DESCRIPTION
[0011] In software defined networks (SDN), the control logic, which determines forwarding rules or conditions that allow network devices to control the flow of data packets in communication paths of the SDN, is decoupled from network devices and resides on an external device, such as a network controller of the SDN. The SDNs may be implemented, for example, based on the OpenFlow technology that simplifies the functioning, configuration and troubleshooting of the network devices.
[0012] Thus, in a SDN network, the network controller provides the control logic to the network devices, such as switches coupled to the network controller, based on which data communication paths for the data packets in the communication network are decided for transferring the data packets to another network device or a destination device. Data communication paths are also referred to as communication paths herein for simplicity. A network controller to which a network device refers for its control logic may be referred to as a master network controller of the network device.
[0013] In case a network device observes a link failure in a data communication path indicated by the master network controller, the network device sends an indication of failure to the master network controller. Upon receiving such an indication, the master network controller may compute an alternate data communication path, also referred to as alternate path, and may provide the same to the network device for transferring the data packets. The latency involved in this approach is substantially equivalent to the time taken by a network device to notify a link failure in a communication path to the master network controller and by the master network controller to compute and send the alternate path back to the network device. During the latency period, the network device could be regarded as non-functional. This can affect the flow of data packets at the network device as data packets may get dropped during this period. The latency and the consequent packet loss may affect the performance of the communication network.
[0014] To avoid such failures in the communication network, OpenFlow provides for the master network controller to pre-compute an alternate path for each of the communication paths of the SDN and provide the alternate path to the network device on receiving the indication of a link failure. However, even in situations where the master network controller pre-computes the alternate path, the latency is at least one round-trip-time (RTT). The RTT, herein, may be understood as the time taken by a message to flow from the network device to the master network controller and back.
[0015] Further, in general, the master network controller performs numerous functions in the communication network. Also, the load on the master network controller changes dynamically based on the traffic in the communication network and is generally high. If link failures occur at instances when the master network controller is already handling a high amount of traffic, the master network controller may consume more time to provide the alternate path. Furthermore, the additional task of computing an alternate path for each of the communication paths, may add to the load of the master network controller resulting in further delays or dropping of data packets.
[0016] Aspects of systems and methods relating to managing link failures in a SDN are described herein. The aspects of the systems and methods assist in reducing the latency in a SDN. In an example, to manage link failures in the SDN, a set of data communication paths from amongst the communication paths of the SDN is obtained. For example, the set of data communication paths may comprise data communication paths for which alternate paths may be computed. In one example, the set of data communication paths may be based on Quality-of-Service (QoS), network health parameters or inputs from an administrator of the SDN.
[0017] In an example implementation, alternate paths may be computed for communication paths in the set of data communication paths by a processing resource other than the master network controller, for example, a slave network controller. In one example, the slave network controller may be a redundant network controller implemented in the SDN for managing link failures or excess traffic.
[0018] Computing alternate paths for a set of identified communication paths as opposed to for each of the communication paths of the SDN avoids overutilization of the computation resources of network controllers and allows a set of paths to be prioritized over other communication paths of the SDN in case of link failures. Also, since the set of communication paths is identified based on parameters, such as QoS, link failures can be addressed effectively to meet the desired QoS without utilizing additional computation resources. Further, the computation of the alternate paths is performed by the slave network controller, thereby reducing computational load on the master network controller and reducing the time involved in computing the alternate paths.
[0019] The manner in which the systems and methods for managing link failures in a SDN can be implemented shall be explained in detail with respect to Figures 1 to 7. While aspects of the described systems and methods for managing link failures in a SDN can be implemented in any number of different computing systems, environments, and/or configurations, the examples are described in the context of the following example system(s).
[0020] Figure 1 schematically illustrates a network environment 100 implemented to manage link failures in a SDN 102, according to an example of the present subject matter. In the SDN 102, data may be communicated between a plurality of host devices 104-1, 104-2, 104-3, 104-N, collectively referred to as host devices 104. The host devices 104 may include devices that allow transmission and reception of data to and from other host devices. The host devices 104 may include, but are not limited to, mobile phones, smart phones, PDAs, tablets, desktop computers, laptops, servers, mainframe computers, and the like, belonging to an end user, such as an individual, a service provider, an organization, or an enterprise. The network environment 100 may be understood as a public or a private network system implementing the SDN 102 over which the host devices 104 may communicate with each other. In an example implementation, the SDN 102 may be configured to function based on the OpenFlow communication methodology for communication of data.
[0021] The SDN 102 may be implemented as a wireless network or a wired network, or a combination thereof. The SDN 102 can be an individual network or a collection of many such individual networks, interconnected with each other and functioning as a single large network, e.g., the Internet or an intranet. The SDN 102 can be implemented as one of the different types of networks, such as a local area network (LAN), a wide area network (WAN), and such. The SDN 102 may either be a dedicated network or a shared network, which represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), etc., to communicate with each other. The SDN 102 may also include individual networks, such as, but not limited to, GSM network, UMTS network, LTE network, PCS network, TDMA network, CDMA network, NGN, PSTN, and ISDN. The host devices 104 work on communication protocols that are compatible with the SDN 102 to which the host devices 104 are coupled.
[0022] The SDN 102 implements at least one master network controller 106 and one or more network devices 108-1, 108-2, 108-3, 108-N for establishing communication paths between the host devices 104 for transferring data in the form of data packets between the host devices 104. For the sake of simplicity, the network devices 108-1, 108-2, 108-3, 108-N are hereinafter referred to as network devices 108. Further, the communication paths between the host devices 104 may be enabled through the network devices 108 in a desired form of communication, for example, via dial-up connections, cable links, digital subscriber lines (DSL), wireless or satellite links, or any other suitable form of communication.
[0023] The network devices 108 may include, but are not limited to, network switches and network routers. Apart from such network devices 108, the network environment 100 may include network hubs, Host Bus Adaptors (HBAs), and other network entities. The master network controller 106 can communicate with the network devices 108 for the purpose of providing the control logic to the network devices 108, based on which the communication path for the data packets in the SDN 102 may be decided for transferring the data packets to another network device or an end host device. Thus, control logic may be understood as the logic for controlling the forwarding behavior of the network devices 108, or the flow of data packets through the network devices 108. Although Figure 1 shows one master network controller 106, the SDN 102 may include more than one master network controller 106. Each master network controller 106 can communicate with a group of network devices for controlling the flow of data packets through the respective group of network devices.
[0024] The master network controller 106 provides the control logic, in the form of one or more flow entries, to each of the network devices 108. Each flow entry includes fields related to a flow matching condition and an action to be performed at a network device 108. The flow matching condition field may include sub-fields, such as source and destination Internet Protocol (IP) addresses, source and destination Media Access Control (MAC) addresses, source and destination port numbers, a type value, an IP number, a Virtual Local Area Network (VLAN) number, and so on. Some of the mentioned sub-fields correspond to sub-fields of a packet header of the data packet received by a network device 108. Further, the action field includes a predefined action rule, which is executed at the network device 108 for the purpose of forwarding the data packet. The flow entries sent by the master network controller 106 are saved in a flow table at the network device 108.
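For illustration, the flow entry structure described above may be modeled as in the following minimal Python sketch. This is not the OpenFlow wire format; the class and field names (FlowMatch, FlowEntry), the "output:2"-style action strings, and the numeric priority are assumptions introduced here for clarity.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlowMatch:
    """Flow matching condition; a sub-field left as None acts as a wildcard."""
    in_port: Optional[int] = None
    src_ip: Optional[str] = None
    dst_ip: Optional[str] = None
    src_mac: Optional[str] = None
    dst_mac: Optional[str] = None
    vlan: Optional[int] = None

@dataclass
class FlowEntry:
    """One entry of a network device's flow table: a match plus an action."""
    match: FlowMatch
    action: str          # e.g. "output:2", "drop", or "to-controller"
    priority: int = 100  # higher value wins when several entries match
```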
[0025] Generally, in the SDN 102, when a data packet is received at an input port of a network device 108, whether from another network device 108 or from a host device 104, the flow table is looked up and the packet header of the data packet is compared with the flow matching conditions of the flow entries in the flow table. Depending upon the result of the comparison, either the action associated with the matched flow entry is executed to forward the data packet to an output port of the network device 108, or the data packet is dropped or forwarded to the master network controller 106, in a generally known manner.
[0026] To illustrate with an example, a network device 108, such as a switch 108-1, may include three input and output ports. The master network controller 106 may send, to the switch 108-1, a flow entry comprising the flow matching condition field with sub-fields: 'Input Port = 1' and 'Destination IP = 10.0.0.1'; and the action field with an action rule: 'Output Port = 2'. With such a flow entry, the data packets received by the switch 108-1 from input port 1 and having the destination IP attribute in the packet header as 10.0.0.1 shall be forwarded to the output port 2 for further transferring the data packet to another switch or a host device 104. Thus, the flow of data packets in the SDN 102 occurs along communication paths of the SDN 102 as determined by the master network controller 106.
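Continuing the illustrative FlowMatch and FlowEntry classes from the sketch after paragraph [0024], the table look-up behavior described in paragraphs [0025] and [0026] might be approximated as below; the wildcard handling and the "to-controller" fallback are simplifying assumptions, not the exact OpenFlow pipeline.

```python
def matches(match: FlowMatch, pkt: dict) -> bool:
    # A sub-field set to None is a wildcard; otherwise it must equal the
    # corresponding field of the packet header.
    fields = ("in_port", "src_ip", "dst_ip", "src_mac", "dst_mac", "vlan")
    return all(getattr(match, f) is None or getattr(match, f) == pkt.get(f)
               for f in fields)

def lookup(flow_table: list, pkt: dict) -> str:
    # Execute the highest-priority matching entry; otherwise hand the packet
    # to the master network controller.
    candidates = [e for e in flow_table if matches(e.match, pkt)]
    if not candidates:
        return "to-controller"
    return max(candidates, key=lambda e: e.priority).action

# The flow entry of paragraph [0026]: input port 1, destination IP 10.0.0.1.
flow_table = [FlowEntry(FlowMatch(in_port=1, dst_ip="10.0.0.1"), "output:2")]
print(lookup(flow_table, {"in_port": 1, "dst_ip": "10.0.0.1"}))  # output:2
print(lookup(flow_table, {"in_port": 3, "dst_ip": "10.0.0.1"}))  # to-controller
```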
[0027] Link failures may occur in any of the communication paths of the SDN 102 due to various reasons. For example, the output port to which the data packets are to be forwarded based on the flow entry may be down, i.e., the output port may have been rendered non-functional. Referring to the above example, if the output port 2 is down, a link failure may be said to have occurred in the communication path determined by the flow entry provided by the master network controller 106 for the switch 108-1. The communication path determined by the flow entry provided by the master network controller 106 that experiences a link failure may be referred to as a failed communication path.
[0028] The master network controller 106 is notified of such link failures by a network device 108 associated with a failed communication path, i.e., the network device 108 to which a flow entry corresponding to the failed communication path is provided. Generally, based on such a notification, the master network controller 106 computes another communication path as an alternate to the failed communication path and provides the same to the network device 108. In an example, the master network controller 106 may send an updated flow entry with a revised action field according to the availability of another output port, such that the data packet can be re-forwarded to that output port. The latency involved in this methodology of addressing link failures may result in packet loss and delay in packet forwarding, and may in turn adversely affect the performance and the QoS of the SDN. In some cases, the master network controller 106 may pre-compute alternate paths for each of the communication paths of the SDN 102 such that the latency is limited to one RTT. However, pre-computation of alternate paths for each of the communication paths often results in overutilization of the master network controller 106.
[0029] In accordance with one example implementation of the present disclosure, systems and methods for managing link failures in the SDN 102 based on computing alternate paths for a set of predetermined communication paths 110, referred to as the set of paths 110, from amongst the communication paths of the SDN are described. Further, the systems and methods also describe computing the alternate paths by a computing resource other than the master network controller 106 to ensure that the utilization of the master network controller 106 does not exceed a predetermined threshold, which may otherwise result in a drop in the QoS of the SDN 102.
[0030] In an example implementation of the present subject matter, the computing resource may be a slave network controller 112. The slave network controller 112, in one example, may be a redundant network controller in the SDN 102. In an example, the slave network controller 112 may have the same or lower computational capabilities as compared to those of the master network controller 106. The slave network controller 112 may be coupled to the network devices 108 associated with the set of paths 110. In one example, the set of paths 110 may comprise communication paths of the SDN 102 that may be determined based on factors, such as QoS parameters, network health parameters, and inputs from an administrator of the SDN 102. The process of determining the set of paths 110 from amongst the communication paths of the SDN 102 has been explained later in the specification.
[0031] The slave network controller 112 may detect a link failure in any of the communication paths from amongst the set of paths 110, for example, based on an indication of failure received from a network device 108 associated with the failed communication path. The slave network controller 112 may compute an alternate to the failed communication path and provide the same to the master network controller 106. As mentioned previously, the master network controller 106, too, is coupled to the network devices 108 associated with a failed communication path and receives the indication of failure. The master network controller 106 may activate the alternate path in the network devices 108 upon receiving the indication of failure of the communication path. The interaction of the master network controller 106, the slave network controller 112, and the network devices 108 associated with the failed communication path has been elaborated subsequently with reference to the signal flow diagrams depicted in figure 4 and figure 5.
[0032] Although the implementation illustrated in figure 1 depicts only one slave network controller 112, it would be appreciated that a network device 108 may be coupled to more than one slave network controller 112. Also, the network devices 108 may be coupled to different slave network controllers of the SDN 102. In an example implementation, the slave network controller 112 may be a master network controller for one or more network devices (not shown in the figures). Further, in case of failure of the master network controller 106 of a network device 108, any one of the slave network controllers 112 may replace the master network controller 106 of that network device 108.
[0033] Further, a network device 108 of the SDN may be coupled to a network controller in a slave mode or a master mode. For example, a network device 108 may be said to be coupled to the master network controller 106 in a master mode, while the network device 108 may be said to be coupled to the slave network controller 112 in a slave mode. In one example, the function of providing the control logic to a network device 108 may be performed by the master network controller 106 and not by the slave network controller 112 of the network device 108. Accordingly, a network controller may write flow entries to the flow table of a network device 108, or, in other words, configure communication paths, for those network devices 108 that are coupled to the network controller in a master mode. On the other hand, the slave network controller 112 may compute alternate paths for the network devices 108 coupled to the slave network controller 112 in a slave mode; however, the alternate paths computed by the slave network controller 112 are configured on the network devices 108 by the master network controller 106.
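The division of roles described in paragraph [0033] could be sketched as follows, assuming the minimal NetworkDevice and Controller classes introduced here purely for illustration: only a controller coupled in master mode writes flow entries, while a slave-mode controller hands its computed alternates to the master.

```python
class NetworkDevice:
    def __init__(self, name):
        self.name = name
        self.flow_table = []

class Controller:
    def __init__(self, name, role):
        self.name = name
        self.role = role  # "master" or "slave"

    def write_flow_entry(self, device, entry):
        # Only a controller coupled in master mode configures communication
        # paths, i.e., writes flow entries to the device's flow table.
        if self.role != "master":
            raise PermissionError(f"{self.name} is in slave mode")
        device.flow_table.append(entry)

    def propose_alternate(self, master, device, entry):
        # A slave-mode controller computes an alternate path but delegates
        # the actual configuration to the master network controller.
        if self.role != "slave":
            raise RuntimeError("only slave-mode controllers propose alternates")
        master.write_flow_entry(device, entry)

switch = NetworkDevice("switch-108-1")
master = Controller("controller-106", "master")
slave = Controller("controller-112", "slave")
slave.propose_alternate(master, switch, "alternate-path-flow-entry")
print(switch.flow_table)  # ['alternate-path-flow-entry']
```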
[0034] Further operations of the master network controller 106 and the slave network controller 112 are discussed in reference to Figures 2A, 2B, and 3. Figures 2A and 2B illustrate a master network controller of the SDN, according to an example of the present subject matter, while figure 3 illustrates a slave network controller of the SDN, according to an example of the present subject matter.
[0035] In accordance with an example implementation illustrated in Figure 2A, the master network controller 106 includes a processor 202, and a network monitoring module 204 and a topology configuration module 206, both coupled to the processor 202.
[0036] The processor 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) 202 is configured to fetch and execute computer-readable instructions stored in the memory.
[0037] The functions of the various elements shown in the figure, including any functional blocks labeled as "processor(s)", may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term "processor" should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read only memory (ROM) for storing software, random access memory (RAM), and non-volatile storage. Other hardware, conventional and/or custom, may also be included.

[0038] In operation, the network monitoring module 204 communicates with the network devices 108 to identify a failed communication path of the SDN 102. For example, a network device 108 may send an indication of failure to the network monitoring module 204 in case a communication path associated with any one of the network devices 108 is rendered non-functional.
[0039] The network monitoring module 204 may then determine whether the communication path is from amongst the set of paths 110 that have been pre-identified in the SDN 102. In one example, to ascertain whether the communication path is from amongst the set of paths 110, the network monitoring module 204 may maintain a list comprising the set of paths 110 in the master network controller 106. In one example, the set of paths 110 may be periodically updated. The periodic updating of the set of paths 110 has been discussed subsequently.
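One way the list comprising the set of paths 110 could be kept and consulted is sketched below; the PathRegistry class, its refresh interval, and the callable that re-derives the set are assumptions made for illustration only.

```python
import time

class PathRegistry:
    """Holds the pre-identified set of paths and refreshes it periodically."""

    def __init__(self, refresh_seconds=300):
        self.paths = set()                  # identifiers of the set of paths
        self.refresh_seconds = refresh_seconds
        self._last_refresh = 0.0

    def refresh(self, identify_paths):
        # identify_paths() re-derives the set from QoS parameters, network
        # health parameters, or administrator input.
        if time.time() - self._last_refresh >= self.refresh_seconds:
            self.paths = set(identify_paths())
            self._last_refresh = time.time()

    def is_monitored(self, path_id) -> bool:
        # Consulted when a failure indication arrives, to decide whether a
        # pre-computed alternate should be configured for this path.
        return path_id in self.paths

registry = PathRegistry(refresh_seconds=0)
registry.refresh(lambda: ["path-7", "path-12"])
print(registry.is_monitored("path-7"))   # True
print(registry.is_monitored("path-3"))   # False
```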
[0040] In case the network monitoring module 204 determines that the communication path is from amongst the set of paths 110, the network monitoring module 204 invokes the topology configuration module 206 to provide an alternate to the failed communication path to the network device 108. In one example, to provide the alternate to the failed communication path to the network device 108, the topology configuration module 206 receives a network topology from a slave network controller, such as the slave network controller 112 of the network device 108. The network topology may be understood as a map of a communication network, such as the SDN 102, indicating how the various network devices of the communication network may be coupled to each other and to the one or more network controllers of the communication network to form different communication paths for forwarding the data packets from a source to a destination. In an example, the network topology provided by the slave network controller is indicative of the alternate to the failed communication path. Accordingly, the topology configuration module 206 configures the network device 108 based on the network topology to provide the alternate path to the network device 108.
[0041] As explained above, alternate paths are computed for the set of paths 110 that have been pre-identified in the SDN 102. Accordingly, amongst other functionalities, the master network controller 106 performs the task of identifying the set of paths 110. This may be explained in reference to figure 2B that depicts interface(s) 208 coupled to the processor 202. The interfaces 208 may include a variety of software and hardware interfaces that allow the master network controller 106 to interact with the network devices 108 and with other network controllers, such as the slave network controller 112 of the SDN 102. Further, the interfaces 208 may enable the master network controller 106 to communicate with other communication and computing devices, such as the host devices 104 and other network entities on the SDN 102. The interfaces 208 may facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, for example, WLAN, cellular, satellite-based networks, etc.
[0042] The interface(s) 208 allow the network monitoring module 204 to monitor the network devices 108, coupled to the master network controller 106, to determine if any of the communication paths associated with any of the network devices 108 may be included in the set of paths 110. For instance, the network monitoring module 204 may identify certain communication paths to be prone to frequent link failures. For the purpose, the network monitoring module 204 may use historic data relating to the status of such communication paths that may be stored as network data 210 in the master network controller 106. Such frequent failures may occur due to various reasons and affect the QoS of the SDN 102. In one example, such communication paths may be included in the set of data communication paths so that a service level agreement (SLA), which defines a minimum QoS for the SDN 102, is delivered. In one example, the SLA and the QoS parameters may be stored as QoS data 212 in the master network controller 106.
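A failure-prone-path selection of the kind described above might look like the following sketch; the shape of the historic data (a flat sequence of failed-path identifiers) and the threshold value are assumptions, not the actual format of the network data 210.

```python
from collections import Counter

def failure_prone_paths(failure_log, threshold=3):
    """Return identifiers of paths that failed more than `threshold` times."""
    counts = Counter(failure_log)
    return {path for path, n in counts.items() if n > threshold}

# Each element records one observed link failure on the named path.
log = ["path-A", "path-B", "path-A", "path-C", "path-A", "path-C", "path-C"]
print(failure_prone_paths(log, threshold=2))  # {'path-A', 'path-C'}
```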
[0043] In one example, the network monitoring module 204 may be coupled to other network entities, such as a network monitoring system (not shown in the figures) associated with the SDN 102, to identify the set of paths 110. Generally, network monitoring systems are associated with a communication system to monitor the overall health and performance of the communication system. Such systems operate in a generally known manner to identify components that may have failed or whose performance may be below a predefined threshold. The network monitoring system may provide inputs relating to the overall health and performance of the SDN 102, referred to as network health parameters, to the network monitoring module 204. In one example, the network health parameters may also be stored as the network data 210 in the master network controller 106. Based on the network health parameters received from the network monitoring system, the network monitoring module 204 may determine one or more communication paths that may be included in the set of paths 110.
[0044] In an example, the network monitoring module 204 may implement the functionalities of a network monitoring system. In such an example implementation, the network monitoring module 204 may determine the network health parameters without relying on an external network monitoring system and may identify the set of paths 110 based on the network health parameters it determines.
[0045] In an example implementation, the set of paths 110 may be identified based on inputs from an administrator of the SDN 102. For example, the administrator may define the set of paths 110 to comprise communication paths that handle communication between a group of privileged end users, for example, a group of financial institutions or a group of inter-governmental institutions. In some examples, the administrator may define short-lived flows in the master network controller 106. Short-lived flows are transient flows that are defined for a predefined time duration between certain network devices 108 of the SDN 102. Based on the purpose for which a short-lived flow is defined, for example, in case a short-lived flow has been defined to provide a hotline between two end users, the communication paths associated with the short-lived flows may be included in the set of paths 110.
[0046] In one example, communication paths that handle high priority data may form the set of paths 110. For example, the network monitoring module 204 may identify one or more communication paths to carry high priority data based on an end application associated with such data. In one example, communication paths that carry VoIP (voice over Internet Protocol) data may be included in the set of paths 110. In other examples, data packets encrypted according to a predetermined data security protocol, such as data pertaining to financial transactions, may be identified as high priority data, and communication paths that transfer such data may be included in the set of paths 110.
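A simple classification along these lines is sketched below; the path attributes (an application label and an encryption flag) and the set of high-priority applications are assumed for illustration.

```python
HIGH_PRIORITY_APPS = {"voip", "financial-transactions"}

def is_high_priority(path: dict) -> bool:
    # A path is treated as high priority if it serves a privileged end
    # application or carries traffic encrypted under a predetermined
    # data security protocol.
    return path["application"] in HIGH_PRIORITY_APPS or path["encrypted"]

paths = [
    {"id": "p1", "application": "voip", "encrypted": False},
    {"id": "p2", "application": "bulk-backup", "encrypted": False},
    {"id": "p3", "application": "financial-transactions", "encrypted": True},
]
set_of_paths = {p["id"] for p in paths if is_high_priority(p)}
print(sorted(set_of_paths))  # ['p1', 'p3']
```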
[0047] The list comprising the set of paths 110, in the data 218 of the master network controller 106, is periodically updated by the network monitoring module 204 based on the set of paths 110 that have been identified by the network monitoring module 204 in the above described manner. In one example, the periodicity of updating the list comprising the set of paths 110 and the number of communication paths that may be included in the set of paths 110 may be defined by the administrator of the SDN 102.
[0048] In addition to identifying the set of paths 110 and configuring the alternate paths for the set of paths 110, the master network controller 106 performs numerous other functions of the SDN 102, for which the master network controller 106 may comprise additional modules and data. In the illustrated example implementation, the master network controller 106 comprises a memory 214 coupled to the processor 202, and modules 216 and data 218 that may reside in the memory 214. The modules 216 may include a control module 220 and other modules 222 in addition to the aforementioned network monitoring module 204 and topology configuration module 206. Likewise, the data 218 may include other data 224 in addition to the set of paths 110, the network data 210, and the QoS data 212.
[0049] The memory 214 may include any computer-readable medium known in the art including, for example, volatile memory (e.g., RAM), and/or non-volatile memory (e.g., EPROM, flash memory, etc.). The modules 216 include routines, programs, objects, components, data structures, and the like, which perform particular tasks or implement particular abstract data types. The modules 216 further include modules that supplement applications on the master network controller 106, for example, modules of an operating system. The data 218 serves, amongst other things, as a repository for storing data that may be fetched, processed, received, or generated by one or more of the modules 216.
[0050] In an implementation, the control module 220 may compute control logic for all the network devices 108 coupled to the master network controller 106. In one example, the control logic is in the form of flow entries that may be written onto a flow table of a network device 108 by the topology configuration module 206. Additionally, the control module 220 may compute alternate paths in case of failure of a communication path that is not included in the set of paths 110 and provide the same to the topology configuration module 206 for configuration of the alternate path in an associated network device 108. The other module(s) 222 may include programs or coded instructions that supplement applications and functions, for example, programs in the operating system of the master network controller 106, and the other data 224 comprises data corresponding to the one or more other module(s) 222.
[0051] Figure 3 illustrates a slave network controller, such as the aforementioned slave network controller 112, according to an example of the present subject matter. In one example, the slave network controller 112 may be a redundant network controller that may be implemented in the SDN 102 as a fallback to the master network controller 106 and may be called upon to perform functions of the master network controller 106 in case of failure of the master network controller 106. In another example, the slave network controller 112 may be a redundant network controller that may be implemented in the SDN 102 for managing link failures or excess traffic.
[0052] In one example, the slave network controller 112 comprises a processor 302, interface(s) 304, and memory 306 coupled to the processor 302. The processor 302, interface(s) 304, and memory 306 of the slave network controller 112 may be implemented in a manner similar to the processor 202, interface(s) 208, and memory 214, respectively, of the master network controller 106, and may operate likewise. In the illustrated example implementation, the slave network controller 112 comprises modules 308 and data 310. The modules 308 include a network monitoring module 312, a topology generation module 314, and other module(s) 316, while the data 310 may include the set of paths 110, network data 318, QoS data 320, and other data 322.
[0053] As mentioned previously, the slave network controller 112 may be coupled to the network devices 108 to monitor the network devices 108 to determine a link failure in a communication path. In operation, the network monitoring module 312 of the slave network controller 112 monitors the network devices 108 to identify a failed communication path in a manner similar to the network monitoring module 204 of the master network controller 106.
[0054] Further, in one example, the network monitoring module 312 of the slave network controller 112 may identify the set of paths 110 from amongst the communication paths of the SDN 102. For the purpose, the network monitoring module 312 may monitor the various network devices 108, for example, in interaction with the network monitoring system.
[0055] Upon determining the failed communication path to be from amongst the set of paths 110, the network monitoring module 312 calls upon the topology generation module 314 of the slave network controller 112. The topology generation module 314 computes a network topology that includes an alternate to the failed communication path and provides the same to the topology configuration module 206 of the master network controller 106, which in turn configures the network topology in the SDN 102.
[0056] In some example implementations, the slave network controller 112, coupled to the network device 108 in a slave mode, may be coupled to another set of network devices (not shown in the figures) in a master mode. In such example implementations, the slave network controller 112 may include a control module (not shown in the figures), similar to the control module 220 of the master network controller 106, to compute control logic for the set of network devices, and a topology configuration module (not shown in the figures), similar to the topology configuration module 206 of the master network controller 106, to configure the set of network devices based on the control logic so computed.
[0057] The interaction between the master network controller 106 and the slave network controller 112 for managing link failures in the SDN 102 is further described in conjunction with the signal flow diagrams illustrated in figure 4 and figure 5, in accordance with one example of the present subject matter. The various arrow indicators used in the signal flow diagrams depict the transfer of information between the network device 108, the master network controller 106 of the network device 108, and one or more slave network controllers 112-1, 112-2, 112-n of the network device 108. In many cases, multiple network entities besides those shown may lie between the network device 108 and the master network controller 106 or the slave network controllers 112-1, 112-2, 112-n, although those have been omitted for clarity. Similarly, various acknowledgement and confirmation network responses may also have been omitted for the sake of brevity of explanation. Further, in figures 4 and 5, the term 'network controller' has been replaced by the term 'controller' and the term 'communication path' has been replaced by 'path' for the sake of brevity.
[0058] The following description is explained with reference to the failure of a first and a second path of the SDN 102. In the illustrated example implementation, a first path from amongst the set of paths 110 may be considered to have failed, and a second and a third path from amongst the plurality of paths of the SDN 102 may be identified for being provided as alternates to the first path, in accordance with the present subject matter. It will be appreciated by one skilled in the art that the concepts explained in the context of the first and second paths may be extended to any communication path of the SDN 102 in a similar manner.
[0059] At step 402, a first slave network controller 112-1 provides the second path to the master network controller 106 of the network device 108. In one example, the master network controller 106 may deploy the second path on the network device 108, at step 404. To deploy the second path, the topology configuration module 206 of the master network controller 106 may write a flow entry for the second path in a flow table of the network device 108 without activating the same. For example, the flow entry for the second path may have the same matching condition as that of the first path but a lower priority than that of the first path. At step 408, upon receiving a first path failure indication at step 406, the master network controller 106 may activate the second path in the network device 108. In an example, to activate the second path in the network device 108, the topology configuration module 206 may assign the priority of the failed path, i.e., the first path in the present case, to the second path. In another example, to activate the second path in the network device 108, the topology configuration module 206 may delete the flow entry corresponding to the failed path, i.e., the first path, from the flow table of the network device 108. As evident, since the second path has the same matching condition as that of the first path and the flow entry corresponding to the first path, which was assigned a higher priority, is deleted, the second path gets activated and acts as an alternate to the first path.
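The deploy-then-activate sequence of steps 404 and 408 could be sketched as follows, reusing the illustrative FlowEntry class from the sketch after paragraph [0024] and an assumed device object that exposes a flow_table list; the priority offset of 10 is arbitrary.

```python
def deploy_backup(device, primary, backup_action):
    # Step 404: pre-install the alternate with the same matching condition as
    # the primary entry but a lower priority, so it stays dormant while the
    # primary entry is present.
    backup = FlowEntry(primary.match, backup_action,
                       priority=primary.priority - 10)
    device.flow_table.append(backup)
    return backup

def activate_backup(device, primary, backup, by_priority=True):
    # Step 408: either promote the backup to the failed entry's priority, or
    # delete the failed entry so the backup becomes the best remaining match.
    if by_priority:
        backup.priority = primary.priority
    else:
        device.flow_table.remove(primary)
```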
[0060] At step 410, the first slave network controller 112-1 provides an indication to compute the third path to the second slave network controller 112-2. As noted, the first slave network controller 112-1 is coupled to the network device 108 and may accordingly provide the indication to the second slave network controller 112-2 upon detecting that the second path has been activated. At step 412, the second slave network controller 112-2 computes and provides the third path to the master network controller 106, which in turn deploys the third path on the network device 108, at step 414. Thus, a fallback to the second path, i.e., the third path in the present case, is available for the network device 108 to avoid any latency in case of a failure of the second path. Accordingly, at step 416, when a second path failure indication is received, the master network controller 106, at step 418, activates the third path.
[0061] It will be understood that the signal flow as discussed above may continue to manage multiple link failures in a path. Accordingly, an indication to compute yet another alternate path may be received by the Nth slave network controller 112-n, at step 420, in response to which the slave network controller 112-n may provide the Nth path to the master network controller 106 at step 422 for deployment and activation in a manner explained above.
[0062] Figure 5 depicts a signal flow similar to that shown in figure 4. However, in accordance with the example implementation shown in figure 5, the deployment and activation of an alternate path may occur upon failure of the first path. In accordance with the example shown in figure 5, the deployment and activation may take place simultaneously or in quick succession. Once an alternate path is deployed as well as activated on a network device, the network device may be said to have been configured with the alternate path, wherein the deployment and activation may occur either at the same instance of time, as shown in figure 5, or at different instances, as shown in figure 4.
[0063] As shown, the first path failure indication is received at step 502, in response to which the first slave network controller 112-1 provides the second path to the master network controller 106 at step 504. The master network controller 106 may then, at step 506, configure the second path on the network device 108. To configure the second path, the topology configuration module 206 of the master network controller 106 may write a flow entry for the second path in the flow table of the network device 108 and activate the same, for example, by deleting the flow entry corresponding to the first path.
[0064] Further, at step 508, the first slave network controller 112-1 provides an indication to compute the third path to the second slave network controller 112-2, as discussed earlier with respect to step 410 of figure 4. Thereupon, at step 510, when a second path failure indication is received, the master network controller 106 may receive the third path from the second slave network controller 112-2, at step 512, and configure the third path on the network device 108 at step 514. Steps 516 to 522 may be further carried out in a manner as explained above to manage multiple link failures efficiently in the SDN 102.
[0065] The words 'during', 'while', 'when', and 'upon' as used herein are not exact terms that mean an action takes place instantly upon an initiating action; rather, there may be some small but reasonable delay, such as propagation delay, between the initial action and the reaction that is initiated by the initial action. Additionally, the word 'coupled' is used throughout for clarity of the description and can include either a direct coupling or an indirect coupling.
[0066] Figure 6 illustrates method 600 for managing link failures in a SDN 102, in accordance with an example of the present subject matter. The order in which the method 600 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 600, or an alternative method. Additionally, individual blocks may be deleted from the method 600 without departing from the spirit and scope of the subject matter described herein. Furthermore, the method 600 can be implemented in any suitable hardware, software, firmware, or combination thereof.
[0067] A person skilled in the art will readily recognize that steps of the method 600 can be performed by programmed computing devices. Herein, some examples are also intended to cover program storage devices, for example, digital data storage media, which are machine or computer readable and encode machine-executable or computer-executable programs of instructions, wherein said instructions perform some or all of the steps of the described method. The program storage devices may be, for example, digital memories, magnetic storage media, such as magnetic disks and magnetic tapes, hard drives, or optically readable digital data storage media. The examples are also intended to cover both communication networks and communication systems configured to perform said steps of the example method.
[0068] Further, although the method 600 for managing link failures may be implemented in a variety of communication systems working in different communication network environments, in examples described in figure 6, the method 600 is explained in context of the aforementioned SDN 102 for the ease of understanding.
[0069] At block 602, a first slave network controller of a network device of the SDN obtains a set of paths from amongst a plurality of data communication paths in the network topology of the SDN. As explained previously, in one example, the set of paths may be identified by the first slave network controller, while, in another example, the first slave network controller may obtain the set of paths from a master network controller of the network device. In an example, the set of paths comprises those communication paths of the SDN that may be given priority over other communication paths of the SDN for being provided with an alternate path in case of a link failure in any of the communication paths included in the set of paths.

[0070] At block 604, the first slave network controller determines, for a first data communication path from amongst the set of paths, a second path from amongst the plurality of data communication paths of the SDN, wherein the second path is an alternate to the first data communication path.
[0071] At block 606, the first slave network controller provides the second data communication path to a master network controller of the network device. The master network controller configures the network device to transfer data packets through the second data communication path in case of a failure of the first data communication path to manage link failures in the SDN.
[0072] Further, the first slave network controller may provide an indication to a second slave network controller of the network device to determine a third communication path from amongst the plurality of data communication paths of the SDN. The third communication path is another alternate to the first communication path and may also be provided to the master network controller of the network device. The master network controller may configure the network device to transfer data packets through the third communication path in case of a failure of the second communication path. This approach provides for continuity in managing link failures in the SDN, since a fallback for the alternate path is also computed for use in the event of failure of the alternate path.
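The overall flow of method 600, together with the fallback of paragraph [0072], is sketched below under assumed controller interfaces; the Master and Slave classes, the topology dictionary, and the configure/deploy distinction are illustrative only.

```python
class Master:
    def __init__(self, device):
        self.device = device
    def configure(self, path):
        print(f"master: configuring {path} on {self.device}")
    def deploy(self, path):
        print(f"master: deploying (not yet activating) {path} on {self.device}")

class Slave:
    def __init__(self, name, topology):
        self.name = name
        self.topology = topology        # maps a path to its known alternates
    def compute_alternate(self, path):
        return self.topology[path][0]   # pick the first available alternate

def manage_link_failure(first_slave, second_slave, master,
                        set_of_paths, failed_path):
    # Blocks 602-606: act only for paths in the pre-identified set, compute
    # an alternate, and hand it to the master for configuration.
    if failed_path not in set_of_paths:
        return
    second_path = first_slave.compute_alternate(failed_path)
    master.configure(second_path)
    # Paragraph [0072]: a second slave prepares a fallback for the alternate.
    third_path = second_slave.compute_alternate(second_path)
    master.deploy(third_path)

topology = {"path-1": ["path-2"], "path-2": ["path-3"]}
manage_link_failure(Slave("slave-112-1", topology), Slave("slave-112-2", topology),
                    Master("switch-108-1"), {"path-1"}, "path-1")
```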
[0073] Figure 7 illustrates an example network environment 700 implementing a non-transitory computer readable medium for managing link failures in a SDN, in accordance with an example of the present subject matter. The network environment 700 may be a public networking environment or a private networking environment. In one implementation, the network environment 700 includes a processing resource 702 communicatively coupled to a non-transitory computer readable medium 704 through a communication link 706.
[0074] For example, the processing resource 702 can be a processor of a slave network controller, such as the slave network controller 112. The non-transitory computer readable medium 704 can be, for example, an internal memory device or an external memory device. In one implementation, the communication link 706 may be a direct communication link, such as one formed through a memory read/write interface. In another implementation, the communication link 706 may be an indirect communication link, such as one formed through a network interface. In such a case, the processing resource 702 can access the non-transitory computer readable medium 704 through a network 708. The network 708 may be a single network or a combination of multiple networks and may use a variety of different communication protocols.
[0075] The processing resource 702 and the non-transitory computer readable medium 704 may also be communicatively coupled to data sources 710 over the network 708. The data sources 710 can include, for example, databases and computing devices. The data sources 710 may be used by an administrator of the SDN and other users to communicate with the processing resource 702.
[0076] In one implementation, the non-transitory computer readable medium 704 includes a set of computer readable instructions, such as instructions for implementing the network monitoring module 312 and the topology generation module 314. The set of computer readable instructions, referred to as instructions hereinafter, can be accessed by the processing resource 702 through the communication link 706 and subsequently executed to perform acts for managing link failures in the SDN.
[0077] For discussion purposes, the execution of the instructions by the processing resource 702 has been described with reference to various components introduced earlier with reference to the description of figures 2A, 2B, and 3.
[0078] In an example, the instructions can cause the processing resource 702 to obtain a set of paths 110 that have been pre-identified in the SDN 102 for being provided with an alternate path upon their failure. In an example, the processing resource 702 may identify the set of paths 110 from amongst a plurality of communication paths of the SDN. For example, the processing resource 702 may determine transient flows existing in the SDN to identify the set of paths 110. In yet another example, the processing resource 702 may monitor the SDN to identify communication paths that experience a volume of traffic greater than a predefined threshold. The processing resource 702 may also monitor the SDN to identify communication paths associated with applications that have a priority greater than a predefined threshold. In one example, the thresholds for the volume of traffic and the priority of end applications supported by the SDN may be defined by an administrator of the SDN. The processing resource 702 may identify such communication paths based on the volume of traffic and the priority of end applications and include them in the set of paths 110.
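Selecting paths against administrator-defined thresholds, as described above, might be sketched as follows; the statistics layout and the threshold values are assumptions.

```python
def identify_set_of_paths(path_stats, traffic_threshold, priority_threshold):
    # Include a path if its traffic volume or the priority of its end
    # application exceeds the administrator-defined threshold.
    selected = set()
    for path_id, stats in path_stats.items():
        if (stats["traffic_mbps"] > traffic_threshold
                or stats["app_priority"] > priority_threshold):
            selected.add(path_id)
    return selected

stats = {
    "path-1": {"traffic_mbps": 850, "app_priority": 2},
    "path-2": {"traffic_mbps": 40,  "app_priority": 9},
    "path-3": {"traffic_mbps": 15,  "app_priority": 1},
}
print(sorted(identify_set_of_paths(stats, traffic_threshold=500,
                                   priority_threshold=5)))
# ['path-1', 'path-2']
```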
[0079] Further, for a network device 108 coupled to the processing resource 702 in a slave mode and associated with a first communication path from amongst the set of paths 110, the instructions can cause the processing resource 702 to compute a second communication path as an alternate to the first communication path. As explained previously, the alternate path may be provided to a network controller coupled to the network device 108 in a master mode for configuring the alternate path onto the network device 108.
[0080] The instructions can also cause the processing resource 702 to trigger a slave network controller of the network device to identify another alternate to the first communication path, i.e., a third communication path. As explained previously, the third communication path may be utilized for data transfer upon a failure of the second communication path.
[0081] Thus, the methods and systems of the present subject matter provide for minimizing the latency and the computational resources involved in managing link failures. Although implementations for the network environment 100 and the SDN 102 have been described in language specific to structural features and/or methods, it is to be understood that the appended claims are not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as example implementations for managing link failures in SDNs.

Claims

I/We claim:
1. A method for managing a Software Defined Network (SDN), the method comprising:
obtaining, by a first slave network controller of a network device of the SDN, a set of data communication paths from amongst a plurality of data communication paths in the SDN;
determining, for a first data communication path from amongst the set of data communication paths, a second data communication path from amongst the plurality of data communication paths as an alternate to the first data communication path, by the first slave network controller; and
providing, by the first slave network controller, the second data communication path to a master network controller of the network device to configure the network device to transfer data packets through the second data communication path upon a failure of the first data communication path.
2. The method as claimed in claim 1 further comprising providing, by the first slave network controller, an indication to determine a third data communication path from amongst the plurality of data communication paths, to a second slave network controller of the network device, wherein the third data communication path is another alternate to the first data communication path.
3. The method as claimed in claim 1, wherein the providing is based on receiving, by the first slave network controller, an indication of a failure of the first data communication path.
4. The method as claimed in claim 1, wherein obtaining the set of data communication paths comprises identifying the set of data communication paths from amongst the plurality of data communication paths based on at least one of network health parameters and QoS parameters.
5. The method as claimed in claim 1, wherein obtaining the set of data communication paths comprises receiving the set of data communication paths from one of the master network controller, an administrator of the SDN, and a network health monitoring system.
6. A master network controller for a Software Defined Network (SDN), the master network controller comprising:
a processor;
a network monitoring module, coupled to the processor, to:
communicate with a network device to identify a failed data communication path; and
determine whether the failed data communication path is from amongst a set of predetermined data communication paths of the SDN;
a topology configuration module, coupled to the processor, to:
receive, upon determining the failed data communication path to be from amongst the set of predetermined data communication paths, a network topology from a slave network controller of the network device, wherein the network topology provides an alternate to the failed data communication path; and
configure the network device based on the network topology.
7. The master network controller as claimed in claim 6, wherein to configure the network device based on the network topology, the topology configuration module assigns a priority of the failed data communication path to an alternate path provided by the network topology.
8. The master network controller as claimed in claim 6, wherein to configure the network device based on the network topology, the topology configuration module deletes a flow entry corresponding to the failed data communication path from a flow table of the network device.
9. The master network controller as claimed in claim 6, wherein the network monitoring module triggers the topology configuration module to configure the network device upon receiving an indication of failure of the failed data communication path.
10. The master network controller as claimed in claim 6, wherein the set of predetermined data communication paths is identified from amongst a plurality of data communication paths in a network topology of the SDN based on at least one of network health parameters and QoS parameters.
11. The master network controller as claimed in claim 6, wherein the set of predetermined data communication paths is periodically updated.
12. A non-transitory computer-readable medium comprising instructions for managing link failures in a Software Defined Network (SDN), executable by a processing resource to:
obtain a set of data communication paths of the SDN;
provide, for a network device associated with a first data communication path from amongst the set of communication paths, a second data communication path to a master network controller of the network device, the second data communication path being an alternate to the first data communication path,
wherein the network device is connected to the processing resource in a slave mode.
13. The non-transitory computer-readable medium as claimed in claim 12 further comprising instructions executable to:
trigger a network controller coupled to the network device in the slave mode to identify and provide a third data communication path to the master network controller of the network device, wherein the third data communication path is an alternate to the first data communication path.
14. The non-transitory computer-readable medium as claimed in claim 12 further comprising instructions executable to: identify the set of data communication paths from amongst a plurality of data communication paths of the SDN based on one or more transient flows existing in the SDN.
15. The non-transitory computer-readable medium as claimed in claim 12 further comprising instructions executable to:
identify the set of data communication paths from amongst a plurality of data communication paths of the SDN based on one or more of a priority of end application and a volume of traffic supported by the plurality of data communication paths.
PCT/IN2014/000271 2014-04-25 2014-04-25 Managing link failures in software defined networks WO2015162619A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IN2014/000271 WO2015162619A1 (en) 2014-04-25 2014-04-25 Managing link failures in software defined networks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IN2014/000271 WO2015162619A1 (en) 2014-04-25 2014-04-25 Managing link failures in software defined networks

Publications (1)

Publication Number Publication Date
WO2015162619A1 true WO2015162619A1 (en) 2015-10-29

Family

ID=54331829

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IN2014/000271 WO2015162619A1 (en) 2014-04-25 2014-04-25 Managing link failures in software defined networks

Country Status (1)

Country Link
WO (1) WO2015162619A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106899505A (en) * 2017-04-19 2017-06-27 天津微梦无界科技有限公司 A kind of controllable software self-replication transmission method
CN108270669A (en) * 2016-12-30 2018-07-10 中兴通讯股份有限公司 Business recovery device, master controller, the system and method for SDN network
WO2019006708A1 (en) * 2017-07-05 2019-01-10 全球能源互联网研究院有限公司 Sdn multi-domain network backup method and system based on dual-port switch
US20220060966A1 (en) * 2018-12-18 2022-02-24 Telefonaktiebolaget Lm Ericsson (Publ) Method and Controller for Managing a Microwave Network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020152320A1 (en) * 2001-02-14 2002-10-17 Lau Pui Lun System and method for rapidly switching between redundant networks
US20030110409A1 (en) * 2001-12-07 2003-06-12 Alan Gale Method and apparatus for network fault correction via adaptive fault router
US20050138476A1 (en) * 2003-12-23 2005-06-23 Bellsouth Intellectual Property Corporation Method and system for prioritized rerouting of logical circuit data in a data network
US20100220736A1 (en) * 2009-02-27 2010-09-02 Cisco Technology, Inc Advertising alternate paths at border gateway protocol route reflectors

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020152320A1 (en) * 2001-02-14 2002-10-17 Lau Pui Lun System and method for rapidly switching between redundant networks
US20030110409A1 (en) * 2001-12-07 2003-06-12 Alan Gale Method and apparatus for network fault correction via adaptive fault router
US20050138476A1 (en) * 2003-12-23 2005-06-23 Bellsouth Intellectual Property Corporation Method and system for prioritized rerouting of logical circuit data in a data network
US20100220736A1 (en) * 2009-02-27 2010-09-02 Cisco Technology, Inc Advertising alternate paths at border gateway protocol route reflectors

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108270669A (en) * 2016-12-30 2018-07-10 中兴通讯股份有限公司 Business recovery device, master controller, the system and method for SDN network
CN106899505A (en) * 2017-04-19 2017-06-27 天津微梦无界科技有限公司 A kind of controllable software self-replication transmission method
WO2019006708A1 (en) * 2017-07-05 2019-01-10 全球能源互联网研究院有限公司 Sdn multi-domain network backup method and system based on dual-port switch
US20220060966A1 (en) * 2018-12-18 2022-02-24 Telefonaktiebolaget Lm Ericsson (Publ) Method and Controller for Managing a Microwave Network

Similar Documents

Publication Publication Date Title
EP3275125B1 (en) Flow-specific failure detection in sdn networks
US8537720B2 (en) Aggregating data traffic from access domains
EP3588865B1 (en) Event ingestion management
EP3075101B1 (en) Dynamically optimized many tree multicast networks
EP2781062A1 (en) System and method for using dynamic allocation of virtual lanes to alleviate congestion in a fat-tree topology
US10164845B2 (en) Network service aware routers, and applications thereof
WO2015106729A1 (en) A load balancing method, device, system and computer storage medium
US9866436B2 (en) Smart migration of monitoring constructs and data
WO2014202026A1 (en) Method and system for virtual network mapping protection and computer storage medium
US20130286824A1 (en) Data communication in openflow networks
US10250528B2 (en) Packet prediction in a multi-protocol label switching network using operation, administration, and maintenance (OAM) messaging
WO2015162619A1 (en) Managing link failures in software defined networks
EP3399424B1 (en) Using unified api to program both servers and fabric for forwarding for fine-grained network optimizations
US9912592B2 (en) Troubleshooting openflow networks
JP2013206112A (en) Computer system and sub-system management method
Chen et al. Enterprise visor: A Software-Defined enterprise network resource management engine
US8964596B1 (en) Network service aware routers, and applications thereof
Chang et al. Using sdn technology to mitigate congestion in the openstack data center network
US20140233394A1 (en) Packet prediction in a multi-protocol label switching network using openflow messaging
Nguyen et al. An openflow-based scheme for service Chaining’s high availability in cloud network
GB2578453A (en) Software defined networks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14890169

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14890169

Country of ref document: EP

Kind code of ref document: A1