GB2542632A - Multicast network system and method - Google Patents


Info

Publication number
GB2542632A
Authority
GB
United Kingdom
Prior art keywords
stream
source
network
address
streams
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1517118.4A
Other versions
GB2542632B (en)
GB201517118D0 (en)
Inventor
Butler David
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
British Broadcasting Corp
Original Assignee
British Broadcasting Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by British Broadcasting Corp
Priority to GB1517118.4A
Publication of GB201517118D0
Publication of GB2542632A
Application granted
Publication of GB2542632B
Legal status: Active (anticipated expiration tracked)


Classifications

    All classifications fall under section H (Electricity), class H04 (Electric communication technique), subclass H04L (Transmission of digital information, e.g. telegraphic communication):
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/16 Multipoint routing
    • H04L 45/42 Centralised routing
    • H04L 45/64 Routing or path finding of packets using an overlay routing layer
    • H04L 12/18 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L 12/185 Multicast arrangements with management of multicast group membership
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/12 Avoiding congestion; Recovering from congestion
    • H04L 47/125 Avoiding congestion by balancing the load, e.g. traffic engineering
    • H04L 47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 47/245 Traffic characterised by specific attributes, using preemption


Abstract

Programmable switches are controlled so as to avoid duplicating streams when delivering multicast audio-video streams in a network (fig. 9). A request for a multicast stream is received from a destination, the request including a source IP address, multicast group IP address and destination IP address. One or more paths from the source to the destination are determined and existing streams are analysed to determine whether a stream having the same source IP address and multicast group IP address already exists on any of the links of the one or more paths. Rules are programmed into switches in the one or more paths such that for each switch only one combination of a given source IP address and multicast group IP address exists on each output port of the switch. The stream analysis may be performed at a proxy that is implemented on a server or within a network controller. The network may be an OpenFlow (RTM) Software Defined Network (SDN) and the requests may be Internet Group Management Protocol (IGMP) messages. The switches may be further programmed to include rules regarding priority and network congestion to supplement the rules regarding duplication of streams.

Description

MULTICAST NETWORK SYSTEM AND METHOD

BACKGROUND OF THE INVENTION
This invention relates to the delivery of content over a network of the type comprising switches controllable from a controller, such as a software defined network (SDN). A particular use case is the delivery of multi-cast audio-video content over such a network.
In a computer network, video or data can be delivered from a source to a destination as singlecast or multicast streams, as shown in Figures 1 and 2. If a source is sending the same video or data to multiple destinations as singlecast streams, the source must send multiple copies of the stream, one per destination, and there are multiple copies of the same stream on links between network devices. This increases congestion and uses up network capacity. With multicast streams, the source sends out a single stream to a multicast address, which can be received by multiple destinations.
In older networks, multiple paths through the network were disabled using the Spanning Tree Protocol or Rapid Spanning Tree Protocol, because loops in the network could result in packet storms, with packets re-circulating around the network. In modern networks, protocols such as ECMP, SPB and TRILL allow multiple network paths and the creation of mesh networks. In a mesh network consisting of N interconnected network devices, there are N-1 paths available between any two points on the network: the direct path plus N-2 two-hop paths, one via each of the other devices. For example, in a network with 4 network devices, as shown in Figure 3, there are 3 paths: a direct path between the source and destination network devices and 2 paths via alternate network devices. In effect, this triples the network capacity available between the sources and destinations. In a traditional network, the network control is distributed. There is load balancing but, depending on network device behaviour, individual streams could be split across multiple paths.
If more streams are added, beyond the available link capacity, one or more of the existing streams will become congested, as shown in Figure 4.
In a traditional network, QoS and Differentiated Services can be enabled. However, these are packet based rather than stream based. Packet loss, out-of-order packets, latency and delay jitter will increase. The situation is more complicated with multicast streams, as each stream could have multiple destinations. With multiple sources and multiple destinations joining and leaving the streams, it is difficult to predict where congestion will occur and which streams will be affected.
In the context of audio-video production, production users joining a stream for monitoring purposes could disrupt a live production stream being used for broadcast.
In an OpenFlow SDN mesh network, there is centralised network control, which configures match, action and meter rules in the network devices. The network can be controlled using the concept of streams. This allows stream based load balancing and stream based prioritisation, where an individual stream (Source:Group) can have a priority value. Based on the priority value, lower priority streams can be dropped from the network without impacting other streams, as shown in Figure 5.
As shown, if the stream for Source1:Group1 has a lower priority than the other streams, the Source1:Group1 stream will be removed and the Source4:Group4 stream added. Network capacity will be maintained and the other streams will not be affected. It is also possible to pre-allocate bandwidth and implement redundant (back-up copy) streams for any multicast stream.
SUMMARY OF THE INVENTION
We have appreciated the need for appropriate control, in networks such as software defined networks, to ensure that multi-cast audio-video streams are appropriately delivered. We have particularly appreciated the need to avoid inefficient duplication of streams.
In broad terms, the invention provides for control of switches in a network for multicast audio-video streams such that duplication of streams between two switches in the network may be avoided. In a sense, the streams are arranged such that flows are unique between any two switches in the network.
In one aspect, the invention provides a method for controlling multi-cast audio-video streams in a network of the type comprising switches controllable from a controller, comprising receiving a request for a stream from a destination, the request including a source IP address, group IP address and destination IP address. The method determines one or more paths from the source to the destination and then analyses existing streams to determine whether a stream having the same source IP address and multi-cast group IP address already exists on any of the links of the determined one or more paths. Rules are then programmed into the switches in the one or more paths such that, for each switch, only one combination of a given source IP address and multi-cast group IP address exists on each output port of the switch.
In an embodiment of the invention, switches may be programmed on receipt of each request for multi-cast content so as to avoid duplication of streams. In addition, the switches may be further programmed to include rules regarding priority and network congestion that may supplement the rule regarding duplication of streams.
Preferably, the programming of switches is achieved by routing requests for multi-cast content to a proxy. The proxy may be provided as a separate program on a server or as part of a network controller. The proxy may track the source, multi-cast group and destination. The tracking may use tables that are monitored so as to ensure that only unique combinations of source IP address and multi-cast group IP address exist on each output port of each switch.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention will now be described in more detail by way of example with reference to the drawings, in which:
Figure 1: shows a logical diagram of singlecast traffic streams;
Figure 2: shows a logical diagram of multi-cast traffic streams;
Figure 3: shows a logical diagram of a mesh network;
Figure 4: shows a logical diagram of a mesh network with overloaded network links;
Figure 5: shows a logical diagram of a mesh network with stream load balancing and prioritisation;
Figure 6: shows IGMPv3 message exchange;
Figure 7: shows an external IGMP proxy in server that may embody the invention;
Figure 8: shows internal IGMP application in controller that may embody the invention;
Figure 9: shows a network embodying the invention;
Figure 10: shows the main routines operated by proxy software;
Figure 11: shows a top level flow diagram of proxy software;
Figure 12: shows the enforcement of unique flows per link;
Figure 13: shows unique flows per link with 2 unique flows;
Figure 14: shows multicast flows with load balance & priority;
Figure 15: shows multicast flows with priority rule applied;
Figure 16: shows stream tracking with priority rules;
Figure 17: shows a process to add stream flow with alternate path, capacity check and priority check;
Figure 18: shows a Check Stream Capacity Algorithm;
Figure 19: shows a Check Stream Priority Algorithm; and
Figure 20: shows a Remove Stream Algorithm with Direct and Diverted Streams.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The invention may be embodied in a variety of methods, systems, programmes, routines or modules. The main embodiment of the functionality is referred to as a “proxy” that implements various monitoring routines to provide instructions to a controller and, in turn, to network switches. The proxy may be implemented on separate hardware, such as a server attached to the network, or within a controller within the network.
Multi-Cast
The system embodying the invention is a multi-cast system and so some features of multi-cast systems will first be described.
In a Singlecast stream, the packet header contains the Ethernet MAC Address and IP Address of the source and the Ethernet MAC Address and IP Address of the destination. In a multicast stream, the packet header contains the source addresses, but a multicast specific destination MAC Address and IP Address are employed. The multicast destination IP Address is referred to as the Multicast Group Address.
Specific IP ranges are allocated for Multicast Group Addresses. For IPv4, the address ranges are defined in the IANA Guidelines for IPv4 Multicast Address Assignments. Different multicast ranges are employed for different uses.
The multicast IP Address ranges most relevant to multicast on an SDN are:
224.0.0.0 - 224.0.0.255 (/24) Local Network Control Block
232.0.0.0 - 232.255.255.255 (/8) Source-Specific Multicast Block
The multicast Ethernet MAC Address range is:
01:00:5E:00:00:00 - 01:00:5E:7F:FF:FF
The lower 23 Bits of the multicast IPv4 Address are inserted into the Ethernet MAC Address.
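By way of illustration, this bit-level mapping can be sketched as follows (Python; the function name is illustrative and not part of the described system). Because only 23 of the 28 variable multicast address bits survive, 32 IPv4 group addresses share each Ethernet MAC address.

    import ipaddress

    def multicast_mac(group_ip: str) -> str:
        # Insert the lower 23 bits of the IPv4 group address into the
        # fixed 01:00:5E multicast MAC prefix.
        low23 = int(ipaddress.IPv4Address(group_ip)) & 0x7FFFFF
        return "01:00:5E:{:02X}:{:02X}:{:02X}".format(
            low23 >> 16, (low23 >> 8) & 0xFF, low23 & 0xFF)

    print(multicast_mac("232.1.0.1"))  # -> 01:00:5E:01:00:01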
In a traditional network, multicast streams are joined using the Internet Group Management Protocol (IGMP). IGMP messages are sent to the following multicast destination IP addresses: 224.0.0.1 for Query messages from the multicast local network control, and 224.0.0.22 for Membership Reports from the joining destination.
For a destination to receive a video stream, the destination must join the multicast group, using the IGMP protocol. The IGMPv3 message exchange, shown in Figure 6, is quite simple.
The message exchange is initiated either by the network sending a Membership Query message to the Membership Query multicast group, or by a destination sending a Membership Report to the Membership Report multicast address. Destinations join multicast streams by setting “Include” or “Allow Sources” in the multicast group records. Destinations leave multicast streams by setting “Exclude” or “Block Sources” in the multicast group records.
The multicast IP address of each group is provided by another mechanism, without using multicast protocols. The destination network stack and network interface card are configured for multicast support. Destinations are able to receive the multicast stream as soon as the Membership Report is sent, even before receiving the response.
In a traditional network, IGMP Membership Report messages from destinations are processed by multicast enabled network routers or multicast aware network devices.
Controllable Network
An embodiment implements multi-cast techniques using an OpenFlow SDN.
In an OpenFlow SDN with centralised control, IGMP support can be added using an external IGMP proxy in a server, as shown in Figure 7 or by creating an internal IGMP application in the controller, as shown in Figure 8. The embodiment of the invention may be implemented by an IGMP proxy in a server or in a controller as in Figures 7 and 8.
In the implementation using an IGMP proxy, shown in Figure 7, IGMP Membership Reports from destinations are directed to the external server, which processes the messages and programmes flows through the SDN controller REST API. IGMP Membership Query messages are then sent in response to the destinations.
Multicast group rules and priorities may be configured through the IGMP Proxy API. The API allows capacity monitoring, flow visualisation and direct manual control of multicast groups. The applications on the source and destinations could also provide stream information such as stream multicast group addresses, stream average and peak bit rates, priority rules, redundant path requirements and other stream metadata through an IGMP Proxy or SDN Controller API. As destinations join and leave multicast groups, the IGMP proxy adds, modifies and deletes flows based on IGMP Membership Query messages and available network capacity.
In the implementation using an internal IGMP SDN application, shown in Figure 8, IGMP Membership Reports from destinations are forwarded directly to the SDN controller. The application in the controller then programs flows and creates IGMP Membership Query messages, which are forwarded to the switches and on to the destinations. This requires an SDN controller that supports internal third party applications that can process, create and send packets to OpenFlow switches. Some SDN controllers do not allow packet responses over the southbound interface (from the controller to the switches). Responses over the southbound interface will increase the OpenFlow control traffic, but IGMP messages are less frequent than most protocols, so the OpenFlow management connection should not become congested.
Network Overview
Figure 9 shows an overview of a network embodying the invention, implementing the variation in which the proxy is provided as a separate IGMP proxy server attached to the network. As shown, a variety of sources/destinations, which include devices such as television cameras, audio-video players, monitors, studio equipment, or generally any apparatus capable of delivering or using multicast audio-video content, are connected to the network. The network itself comprises network switches, shown here as SDN switches, which are connected together by connections or links that may be electric, optical or indeed wireless. It is noted at this point, for future reference, that each switch has input and output ports. The route through a network from a given source to a given destination may therefore be defined by each switch and the output port of each switch. The output port defines the next switch in the mesh to which a message would be sent. The mesh network also includes a network controller which has connections to the switches and which is arranged to provide instructions to the switches. The instructions may be referred to as “programming” the switches.
The aspects of control of the network switches which provide the advantages of this disclosure are provided by a combination of the IGMP proxy server and the network controller. The proxy stores a “map” of the network in the sense of knowing the switches in the network, the output ports of the switches and the subsequent switches to which those output ports are connected.
An overview of the operation on receiving a request for multicast content within the network is as follows.
In a first step, a request is received from one of the devices attached to the network, the request specifying the multicast stream. The multicast stream may be identified by a source IP address and a group IP address. Preferably, the request is an IGMP message requesting the multicast stream. The IGMP message is sent to the proxy.
Second, the proxy receives the request. The request includes the multicast group IP address, the source IP address and also the destination IP address. As previously noted, the proxy holds data that defines the connections between switches in the network, and this data is used by the proxy to determine at least one path from the source to the destination. For example, the proxy may determine a single shortest path from source to destination. Alternatively, as described later, a choice of paths may be determined based on priority and congestion rules. In the example of determining a single path, the switches required for that path and the output port of each switch are determined.
Third, the proxy sends a command to the controller specifying the source IP address, group IP address, the switches in the path and the output port of each switch. The controller sends instructions to the relevant switches, referred to as match action instructions, to program each switch with this information. A given switch in the path receives the instruction and then stores a match action rule which specifies the source IP address, group IP address and the output port on which the switch should forward the stream. In this way, each switch in the path is provided with the appropriate action to take on receipt of the stream. The source IP address and group IP address together uniquely specify each multicast stream, and so the path through the network for a given stream is thereby defined.
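The shape of such a command can be sketched as follows (Python). The REST endpoint, field names and switch identifier are assumptions for illustration only: each SDN controller exposes its own flow-programming schema, and this description does not prescribe one.

    import json
    import urllib.request

    def program_match_action(controller, switch_id, source_ip, group_ip, out_ports):
        # One rule per switch on the path: match on (source IP, group IP)
        # and forward on the listed output ports.
        rule = {
            "switch": switch_id,
            "match": {"ipv4_src": source_ip, "ipv4_dst": group_ip},
            "actions": [{"output": port} for port in out_ports],
            "priority": 100,  # above any default low-priority drop rule
        }
        request = urllib.request.Request(
            "http://%s/flows" % controller,  # hypothetical endpoint
            data=json.dumps(rule).encode(),
            headers={"Content-Type": "application/json"},
            method="POST")
        urllib.request.urlopen(request)

    # e.g. one call per switch on the chosen path:
    # program_match_action("10.0.0.2:8080", "switch1", "10.0.0.100", "232.1.0.1", [25])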
The process described above considers the situation of a multicast stream being requested for the first time within the network. As set out in the following sections of this disclosure, a variety of further techniques may be used, alone or in combination, to ensure network efficiency. The main process is controlling the switches in such a way that there are no duplicate streams of the same multicast content on any given link between two switches on the network. To enforce this, the proxy maintains data, such as in the form of a table, specifying the source IP address, multicast group IP address, switch and output port. When a request for multicast content is received via the IGMP message described above, the proxy refers to this table as part of the process of determining the path through the network. If a given combination of source IP, group IP, switch and output port already exists in the table, then an extra entry is not added. In this way, only unique combinations of source IP, group IP, network switch and port are held in the table and provided to the controller. This ensures the uniqueness of each stream on each link within the network.
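A minimal sketch of that uniqueness check, assuming the table is held as a set of (source IP, group IP, switch, output port) tuples:

    # Flows already programmed into the network, keyed as the proxy tracks them.
    programmed = set()

    def flows_to_program(source_ip, group_ip, path):
        # 'path' is a list of (switch, output_port) pairs for the new request.
        # Only combinations not already in the table are returned, so each
        # (source, group) stream appears at most once per output port.
        new_entries = []
        for switch, port in path:
            key = (source_ip, group_ip, switch, port)
            if key not in programmed:
                programmed.add(key)
                new_entries.append(key)
        return new_entries

A second destination joining behind the same links then yields no new entries for the shared hops, so no duplicate stream is placed on those links.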
When determining the path for a given stream, the proxy will usually determine the most direct path. However, routines may be included to determine paths based on priority of a given flow and congestion on each link. For example, those involved in the production of audio-video content to be delivered over the network (a production team) can define priorities of content. For example, content may be defined as live or non-live, outside broadcast and so on. A respective priority may thereby be defined by a hierarchy using a graphical user interface. The network may also keep a measurement of the bit rate on each link within the network and, if greater than a particular threshold, the path selected may be chosen so as to avoid congestion. In this way, a combination of congestion and priority balancing may be provided within the context of never putting the same flow twice on a given link.
The IGMP messages themselves are low bit rate and so may be given a high priority within the network. They use well-known group addresses and so, unlike the packets of actual audio-video content, do not require matching on source IP address.
With the above background, the detailed modules and processes that implement the features described above will now be described with reference to Figures 10 to 20.
Processes
The processes implemented by the module referred to as a proxy will now be described with reference to Figures 10 to 20. The functionality may be considered to be an IGMP proxy algorithm.
The IGMP Proxy software is a programme that creates, modifies and deletes Source Specific Multicast flows using the REST interface on an OpenFlow SDN controller.
The IGMP Proxy Software is arranged to investigate flow creation and management for Source Specific Multicast on an OpenFlow Software Defined Network. As an example the IGMP Proxy software compiles and runs on an Ubuntu 14.04 LTS server with 2 network interfaces.
In the network, the SDN controller is accessible over the control network on the first interface (em1) and the SDN, with all the sources and destinations, is accessible via the second interface (em2). The SSM video source / destination servers are also connected to both the control and SDN, as shown in Figure 9. This is to allow remote access for test control, separate from the SDN.
On start-up, the IGMP Proxy software configures static ASM streams between the source / destination servers and the IGMP Proxy server. A low priority drop flow command for all SSM streamed video is also configured in all SDN switches. Any un-joined stream is dropped in the source switch and does not enter the mesh network.
Sources start streaming and any destinations join the stream by sending an IGMP Membership Report. The ASM flows deliver the Membership Report to the IGMP Proxy Server. The IGMP Proxy Server captures the message using PCAP and processes the message.
If the message is a join, and there is no pre-existing flow to the destination for the stream source and group, an SSM flow is configured to the destination and added to a stream list. The added stream is given a weight (bit rate value) and a priority value, and a running total of the capacity used on each link is maintained. If the message is a leave, and the flow exists, the flow is removed, the stream weight is deducted from the capacity total and the stream details are removed from the list.
If multiple destinations join and leave the stream, the flow is modified, with stream branches added and removed. The capacity total is reduced for a removed branch and the stream list entry for the branch is removed. Multiple source streams are balanced across the available paths, with lower priority streams removed if capacity is reached.
On shutdown the static ASM streams and low priority flow drops are removed from the switches.
The IGMP Proxy software is arranged to investigate multicast flow management on the SDN mesh with stream based load balancing and stream prioritisation.
The implementation may take into account the following considerations.
Use of PCAP Libraries for Packet Capture: in a traditional network, IGMP messages are sent to and from local control functions in multicast enabled routers. In a multicast enabled server, IGMP Membership Report messages are not passed to network socket routines by the server NIC. Packet capture using the PCAP library is therefore used, as it captures packets at a lower level in the network stack.
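The capture step could look like the following sketch, written with the Python scapy bindings rather than the compiled PCAP code the text describes; the interface name em2 is taken from the test configuration described below.

    from scapy.all import sniff

    def handle_igmp(packet):
        # Membership Reports arrive here even though the network stack
        # would not deliver them to an ordinary socket.
        print(packet.summary())

    # BPF filter "igmp" captures below the socket layer on the SDN-facing interface.
    sniff(filter="igmp", iface="em2", prn=handle_igmp, store=False)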
Static Flows for IGMP Messages: On start-up, the IGMP Proxy software configures static ASM streams between the source / destination servers and the IGMP proxy server, so that IGMP messages can be sent to and received by the local IGMP control in the IGMP proxy software.
The IGMP proxy server is configured as a gateway for the IGMP local control multicast group address (224.0.0.22). The SDN controller would automatically route any IGMP Membership Report messages to the proxy server using Layer 2 switch functionality. However, the SDN controller Layer 2 switch functionality would not send the Membership Query response, sent to multicast group address 224.0.0.1, to all the source / destination servers.
Having pre-configured flows for all IGMP messages reduces the IGMP packet delay, as no packets are sent via the controller, and reduces the response time for creating new SSM stream flows. The bit rate for the IGMP ASM streams is also very low and has no impact on the video streams.
If the IGMP message flows were dynamically adjusted each time a destination joins or leaves an SSM video stream, the IGMP proxy software would have to issue REST requests for both the IGMP and the SSM video streams. This would double the number of REST commands, increasing the response time of a new stream join.
Low Priority Flows for SSM Stream Dropping: the IGMP proxy software configures a default low priority 232.0.0.0/8 drop action flow rule in all SDN switches. All SSM streams, which have to use this multicast group IP range, are dropped in the source switch. No SSM packets leave the source switch, either to the SDN controller or to the mesh network.
When a destination joins the multicast group, new flow table rules are added at a higher priority. The active stream is directed by the higher priority rules, while non-active streams are still dropped.
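The two rule tiers can be pictured as follows; the field names are illustrative assumptions rather than a specific switch schema.

    # Default rule, installed in every switch at start-up: drop the whole
    # SSM range so that un-joined streams never leave the source switch.
    default_drop = {
        "match": {"ipv4_dst": "232.0.0.0/8"},
        "actions": [],        # no output ports: drop
        "priority": 1,
    }

    # Installed when a destination joins. The higher priority wins, so the
    # joined stream is forwarded while all other SSM traffic is still dropped.
    active_stream = {
        "match": {"ipv4_src": "10.0.0.100", "ipv4_dst": "232.1.0.1"},
        "actions": [{"output": 25}],
        "priority": 100,
    }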
Setting Stream Weight and Priority: the bit rate and priority values for a stream could be set in one of three ways:
1. From the source application, through a REST or other interface on the IGMP proxy software.
2. By multicast group range, where different ranges have different weights and priorities, e.g. 232.X.0.Y, where X represents the bit rate weight and Y represents the stream priority.
3. By measurement: the SDN controller could measure the stream bit rate or switch port bit rate to calculate a stream weight value, with the priority determined by another mechanism, such as through an API, or by source, group or destination IP address.
The IGMP proxy software currently employs a fixed weight value, with the priority determined by the multicast group IP address.
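For the second scheme, a 232.X.0.Y address can be decoded directly. The octet layout here is the illustrative one given in the list above; the example tables later in this description do not necessarily follow it.

    def weight_and_priority(group_ip):
        # 232.X.0.Y: X encodes the bit rate weight, Y the stream priority.
        octets = [int(o) for o in group_ip.split(".")]
        assert octets[0] == 232, "expected an SSM (232/8) group address"
        return octets[1], octets[3]

    print(weight_and_priority("232.1.0.7"))  # -> (1, 7)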
The igmp routines interrogate network interfaces, capture packets, send packets, post and get data through the SDN controller REST interface, create IGMP packets, manage streams and manage flow tables. The different routine types are shown in Figure 10.
Routines that print information and other minor routines are not included in the lists. Node and link information is retrieved from the SDN controller, but not parsed; node, link, capacity and priority information is currently hard coded in the software.
Figure 11 shows a flow diagram of the proxy algorithm.
Key aspects of the multicast implementation are how the streams and flows are managed. These are discussed in relation to Figures 12 to 20.
The IGMP proxy software tracks streams as individual source, group and destination streams in both a stream list and a flow table list.
The software tracks streams by listing all streams as individual source, group, destination, stream type and stream requirement entries in a stream list table. Each stream is allocated a stream number. The flow actions for each stream are listed in a flow table, consisting of the stream number, switch port numbers, stream weight value, prioritisation value and a diversion field. Examples of the stream and flow tables, with a mesh of 4 switches, are shown in the tables below.
Table 1
Table 2
In the example tables, the IGMP streams, streams 0 to 9, are configured as ASM (Any Source Multicast), with a stream weight of 0 (very low bit rate) and a stream priority of 10 (not to be removed by priority rule).
Streams 10 to 13 are SD video streams, configured as SSM (Source Specific Multicast), all with stream weights of 1, but with different priority values. Streams 10 and 11 are 2 branches of the same stream, going to different destinations. The switch section of the Flow Table shows the path of the stream through the SDN mesh. The value in each switch column is the output port used for the flow action in the switch. The value 0 indicates the switch is not used, e.g.
For stream 2: IGMP ASM flow from source 10.0.0.110, group 224.0.0.1, to destination 10.0.0.130.
Source 10.0.0.110 -> Switch 2 [Output Port 25] -> Switch 3 [Output Port 11] -> Dest. 10.0.0.130
For stream 10: SD Video SSM flow from source 10.0.0.100, group 232.1.0.1, to dest. 10.0.0.140.
Source 10.0.0.100 -> Switch 1 [Output Port 25] -> Switch 4 [Output Port 11] -> Dest. 10.0.0.140
For stream 11: SD Video SSM flow from source 10.0.0.100, group 232.1.0.1, to dest. 10.0.0.150.
Source 10.0.0.100 -> Switch 1 [Output Port 25] -> Switch 4 [Output Port 13] -> Dest. 10.0.0.150
The multicast algorithm only allows unique flows on any switch link. If individual stream listings with the same source and group values share the same switch and output port, then only 1 flow is programmed for that link.
In the example above and in Figure 12, streams 10 and 11 are branches of the same stream.
Considered as 2 separate flows, stream 10 and stream 11 both go through Switch 1 on port 25. Applying the unique flow per link algorithm, the match action rule programmed in Switch 1 is “match source = 10.0.0.100, group = 232.1.0.1, action = output port 25.”
In Switch 4, the streams go to different ports, so the match action rule programmed in Switch 4 is “match source = 10.0.0.100, group = 232.1.0.1, action = output port 11, output port 13.”
In the tables above, stream 12 comes from the same source but is a different video stream from streams 10 and 11, as shown in Figure 13, e.g.
For stream 12: SD Video SSM flow from source 10.0.0.100, group 232.1.0.2, to dest. 10.0.0.140.
Source 10.0.0.100 -> Switch 1 [Output Port 25] -> Switch 4 [Output Port 11] -> Dest. 10.0.0.140
Streams 10 and 11 are multicast group 232.1.0.1 and stream 12 is multicast group 232.1.0.2. Under the unique flow per link algorithm, stream 12 is a unique flow over the same links as streams 10 and 11, so it has its own match action rules, e.g.
The match action rules programmed in Switch 1 are:
“match source = 10.0.0.100, group = 232.1.0.1, action = output port 25” - for streams 10 and 11;
“match source = 10.0.0.100, group = 232.1.0.2, action = output port 25” - for stream 12.
The match action rules programmed in Switch 4 are:
“match source = 10.0.0.100, group = 232.1.0.1, action = output ports 11, 13” - for streams 10 and 11;
“match source = 10.0.0.100, group = 232.1.0.2, action = output port 11” - for stream 12.
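These rules can be derived mechanically from the flow table: group the stream rows by (switch, source, group) and collect the distinct output ports. A sketch over the example rows for streams 10 to 12 (the row layout is assumed for illustration):

    from collections import defaultdict

    # (stream, source, group, switch, output port) rows from the flow table.
    rows = [
        (10, "10.0.0.100", "232.1.0.1", "Switch 1", 25),
        (10, "10.0.0.100", "232.1.0.1", "Switch 4", 11),
        (11, "10.0.0.100", "232.1.0.1", "Switch 1", 25),
        (11, "10.0.0.100", "232.1.0.1", "Switch 4", 13),
        (12, "10.0.0.100", "232.1.0.2", "Switch 1", 25),
        (12, "10.0.0.100", "232.1.0.2", "Switch 4", 11),
    ]

    actions = defaultdict(set)
    for stream, src, grp, switch, port in rows:
        actions[(switch, src, grp)].add(port)  # duplicate ports collapse here

    for (switch, src, grp), ports in sorted(actions.items()):
        print("%s: match source = %s, group = %s, action = output ports %s"
              % (switch, src, grp, sorted(ports)))

Switch 1 ends up with one rule per group, each using the single shared port 25, while Switch 4 fans group 232.1.0.1 out to ports 11 and 13, reproducing the rules listed above.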
As destinations join and leave flows, ports are added to and removed from the flow actions, based on the unique flow per link algorithm. If the destination leaves stream 10 and the ports are unique to that stream, the ports are removed, e.g.
The match action rules programmed in Switch 1 become:
“match source = 10.0.0.100, group = 232.1.0.1, action = output port 25” - for stream 11;
“match source = 10.0.0.100, group = 232.1.0.2, action = output port 25” - for stream 12.
The match action rules programmed in Switch 4 become:
“match source = 10.0.0.100, group = 232.1.0.1, action = output port 13” - for stream 11;
“match source = 10.0.0.100, group = 232.1.0.2, action = output port 11” - for stream 12.
Stream 10 is deleted in both the stream list and flow table. The lower flows are then shifted up, so stream 11 becomes stream 10 and stream 12 becomes stream 11.
If capacity rules are applied, then streams are load balanced using alternate paths through the SDN mesh. In the tables below and in Figure 14, a capacity rule has been applied allowing only 1 stream across each switch-to-switch link, but any number of streams on the switch-to-server links.
Table 3
Table 4
Stream 10 takes a direct path from switch 1 to switch 4, while streams 11 and 12 take alternate paths via switch 2 and switch 3. The switches used for the alternate paths are shown in the ‘Stream Diversion’ column in the flow table, as shown in Table 4.
The alternate paths are calculated based on available capacity on the alternate links.
All 3 paths between switch 1 and switch 4 are now in use. If a 4th video is streamed from the 10.0.0.100 server and the 10.0.0.140 server joins the stream, then priority rules will be applied.
In Table 4, streams 10, 11 and 12 have priorities of 7, 8 and 6 respectively. If the 4th stream has a higher priority, an existing stream will be removed. The 4th stream (multicast group 232.1.0.4) has a priority of 9. Since stream 12 has the lowest priority, 6, stream 12 will be deleted and the multicast group 232.1.0.4 stream will become the new stream 12. This can be seen in Table 5, Table 6 and Figure 15.
Table 5
Table 6
The stream tracking algorithm adds streams if the IGMP message is INCLUDE / INCLUDE ALL / ALLOW. If the IGMP message is EXCLUDE / EXCLUDE ALL / BLOCK, the stream is removed. The stream is added or removed from the stream list after the addition or removal of any flows.
With priority rules, the stream is only added if capacity is available or if it has a higher priority than an existing stream. If there are existing lower priority streams, the lowest priority stream is removed and deleted from the stream table, and the new stream is then added. A flow diagram of the stream tracking algorithm with priority control is shown in Figure 16.
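The decision can be reduced to the following sketch, simplified to a single link; the real algorithm, per Figures 17 to 19, works per path and also considers diversions. The stream objects with .weight and .priority fields are assumptions for illustration.

    def try_add_stream(new, active, link_capacity):
        # 'new' and the entries of 'active' carry .weight and .priority.
        load = sum(s.weight for s in active)
        if load + new.weight <= link_capacity:
            active.append(new)            # capacity available: just add
            return True
        if not active:
            return False                  # the stream alone exceeds the link
        victim = min(active, key=lambda s: s.priority)
        if new.priority > victim.priority:
            active.remove(victim)         # evict the lowest-priority stream
            active.append(new)            # (priority 10 can never be outranked)
            return True
        return False                      # reject the join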
The add stream flow algorithm adds streams as individual source, group, destination flows in a flow table consisting of switches and port numbers. For a new stream entry, the switch and port numbers are compared with the switch and port numbers of all other streams with the same source and group. For each switch in the flow, only unique values are programmed.
New output port actions are only added if there is a new port value. Capacity and priority are then checked, so that flows can either be diverted via an intermediate switch or a priority rule value returned to the stream tracking algorithm. A flow diagram of the add stream flow algorithm is shown in Figure 17. In the flow diagram, the algorithm reads switch and connection information for the source and destination servers. The source and destination switches and ports are then compared with flow table entries for the same sources and groups.
If new flows (output port actions) are required, the link usage value for the source to destination switch path is increased by the stream weight (a value that represents the bit rate of the new stream).
If capacity is available, flows are posted, adding or modifying the flow tables in the source and destination switches. If the source and destination servers are on the same switch (Switch.Src = Switch.Dst), only the destination switch to destination server flow is programmed.
If the source and destination switches are different (Switch.Src != Switch.Dst), then separate source switch to destination switch and destination switch to destination server flows are calculated. Prior to programming the flows, the link usage for switch to switch connections is checked against capacity, as shown in Figure 18. If no diversion is required, flows to the source and destination switches are programmed.
If a diversion switch (Switch.Div) is calculated, previous link use values are subtracted, and the stream weight for the new ports is added for the source switch to diversion switch and diversion switch to destination switch links. If the link use still exceeds capacity, then the algorithm checks the priority of the new and existing flows, using the algorithm shown in Figure 19.
Depending on the stream priority, either the new stream or a pre-existing stream is removed. If a pre-existing stream is removed, the new stream is then added by the stream tracking routine. In the current configuration, the least congested path is selected before checking prioritisation; with multiple stream rates, the least congested path would require fewer streams to be removed. The lowest priority stream on the least congested path is then removed, which is not necessarily the lowest priority stream on all paths. Streams with the highest priority, 10, cannot be removed, and a new priority 10 stream cannot be added if it would require the removal of an existing priority 10 stream. The behaviour could be modified to check priority before capacity.
The remove stream flow algorithm removes individual source, group, destination stream flows from a flow table consisting of switches and port numbers. If the flow to be removed has unique output port actions that are not part of other streams with the same source and group, the flow is modified or deleted. A flow diagram of the remove stream flow algorithm is shown in Figure 20.
In the implementation, the flow table contains an array of the link ports between switches and the ports to and from servers. The stream weight, priority and diversion switches are also contained in the flow table.
In the current implementation, a Link Use table tracks utilisation for switch to switch links. If source and destination are on the same switch, the utilisation is also tracked. The source to switch and switch to destination utilisation is currently not tracked.
In an integrated system, the production applications should be aware of the streams sent and received on their local network ports. Also, when a destination joins a source:group stream, there are no added streams on the source to switch connection. If needed, source to switch and switch to destination utilisation could be added using the same stream weight mechanism.

Claims (19)

1. A method for controlling multi-cast audio-video streams in a network of the type comprising switches controllable from a controller, comprising: a. receiving a request for a stream from a destination, the request including a source IP address, group IP address and destination IP address; b. determining one or more paths from the source to the destination; c. analysing existing streams to determine whether a stream having the same source IP address and multi-cast group IP address already exists on any of the links of the determined one or more paths; d. programming rules into the switches in the one or more paths such that for each switch only one combination of a given source IP address and multi-cast group IP address exists on each output port of the switch.
2. A method according to claim 1, wherein the receiving, determining and analysing are performed in a software routine.
3. A method according to claim 2, wherein the software routine is implemented in a proxy server attached to the network.
4. A method according to claim 2, wherein the software routine is implemented in a network controller of the network.
5. A method according to any preceding claim, wherein determining the one or more paths includes determining a priority of the requested stream and the programming rules includes giving priority to higher priority streams.
6. A method according to claim 5, wherein the programming to give priority includes programming the dropping of a stream of lower priority if a stream of higher priority requires a path.
7. A method according to any preceding claim, wherein analysing streams comprises maintaining a table having source IP, group IP and destination IP address.
8. A method according to claim 7, further comprising maintaining a table of streams, switches and port numbers and wherein the analysing comprises a lookup in the table to ensure uniqueness of flows per link.
9. A method according to any preceding claim, further comprising tracking capacity on each link of the network and including the capacity in determining the one or more paths.
10. A method according to claim 9, wherein tracking capacity includes one or more of receiving information from the source and measuring bit rate on links, and choosing the paths according to rules including: a. choosing the most direct path first; and b. if the required capacity is above a threshold, choosing a different path.
11. A method according to claim 10, further comprising applying a priority rule, if no capacity rule is present or a capacity rule is violated, to prevent joining or to configure dropping of an existing lower priority stream.
12. A method according to any preceding claim, further comprising receiving an instruction to no longer receive a stream and analysing existing streams to programme switches so that a stream remains if a unique port number is used by a remaining stream.
13. A method according to any preceding claim, further comprising scheduling requests and, if a request is received for a scheduled stream, allocating the scheduled path to the new request.
14. A method according to claim 13, wherein multiple paths are scheduled for resilience.
15. A method according to any preceding claim, wherein the source is one of many resources of a given physical source that can provide multiple multicast group IP addresses.
16. A method according to any preceding claim, wherein one physical destination can receive multiple multi-cast group IP sources.
17. A network device arranged to implement the method of any preceding claim.
18. A proxy server having software thereon which when executed undertakes the method of any preceding claim.
19. A computer programme which when executed undertakes the method of any preceding claim.
GB1517118.4A 2015-09-28 2015-09-28 Multicast network system and method Active GB2542632B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1517118.4A GB2542632B (en) 2015-09-28 2015-09-28 Multicast network system and method


Publications (3)

Publication Number Publication Date
GB201517118D0 GB201517118D0 (en) 2015-11-11
GB2542632A (en) 2017-03-29
GB2542632B GB2542632B (en) 2021-07-14

Family

ID=54544215

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1517118.4A Active GB2542632B (en) 2015-09-28 2015-09-28 Multicast network system and method

Country Status (1)

Country Link
GB (1) GB2542632B (en)



Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150023347A1 (en) * 2013-07-19 2015-01-22 International Business Machines Corporation Management of a multicast system in a software-defined network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"CastFlow: Clean-Slate Multicast Approach using In-Advance Path Processing in Programmable Networks", Marcondes et al, 2012 IEEE Symposium on Computers and Communications (ISCC), IEEE, 1-4 July 2012, pages 94-101. *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107135169A (en) * 2017-04-25 2017-09-05 中国传媒大学 A kind of net switching method of video based on SDN switch
CN108156424A (en) * 2017-12-27 2018-06-12 浙江宇视科技有限公司 Multicast group port management method, device and video management server
CN108156424B (en) * 2017-12-27 2020-01-14 浙江宇视科技有限公司 Multicast group port management method and device and video management server

