EP2547047A1 - Centralized system for routing ethernet packets over an internet protocol network - Google Patents
- Publication number
- EP2547047A1
- Authority
- EP
- European Patent Office
- Prior art keywords
- ethernet
- customer edge
- network
- lan2
- lan3
- Prior art date
- Legal status
- Granted
Classifications
- H—ELECTRICITY › H04—ELECTRIC COMMUNICATION TECHNIQUE › H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements › H04L49/60—Software-defined switches › H04L49/604—Hybrid IP/Ethernet switches
- H04L12/00—Data switching networks › H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks] › H04L12/46—Interconnection of networks
- H04L12/46 › H04L12/4633—Interconnection of networks using encapsulation techniques, e.g. tunneling
- H04L45/00—Routing or path finding of packets in data switching networks › H04L45/42—Centralised routing
- H04L61/00—Network arrangements, protocols or services for addressing or naming › H04L61/09—Mapping addresses › H04L61/10—Mapping addresses of different types › H04L61/103—Mapping addresses of different types across network layers, e.g. resolution of network layer into physical layer addresses or address resolution protocol [ARP]
- H04L61/00 › H04L61/59—Network arrangements, protocols or services for addressing or naming using proxies for addressing
- H04L63/00—Network architectures or network communication protocols for network security › H04L63/02—Network architectures or network communication protocols for network security for separating internal from external traffic, e.g. firewalls › H04L63/0227—Filtering policies
Definitions
- LAN Local Area Network
- IP Internet Protocol
- Cloud computing services are typically hosted in data centers that are internally realized by large Ethernet networks. There is a certain trend to decentralize these data centers, i.e. to host services in a larger number of smaller, geographically distributed data centers.
- MPLS Multi Protocol Label Switching
- Each data center site LAN1, LAN2, LAN3 is connected to the interconnecting network N by a customer edge device CE.
- Each data center LAN1, LAN2, LAN3 comprises server farms 30 which are connected via switches SW to the customer edge device CE of the respective data center site LAN1, LAN2, LAN3.
- the interconnecting network N, which may be a transport network based on IP/MPLS, comprises three interconnected provider edges PE, one for each customer edge device CE.
- the connection of a customer edge device CE with its associated provider edge PE may be via a user network interface UNI.
- a connection of a first provider edge PE and a second provider edge PE may be via a network-to-network interface NNI.
- Fig. 1 assumes that only one Ethernet LAN is attached to the CE.
- several Ethernet LANs can be attached to a customer edge device, e. g., using different Ethernet interfaces to the CE.
- Different interconnection technologies can connect the Ethernet networks LAN1, LAN2, LAN3 over layer 1, layer 2, or layer 3 links. Their common objective is to transparently interconnect all Ethernet networks LAN1, LAN2, LAN3.
- the customer edge devices transport, i.e. tunnel, the Ethernet traffic over the WAN in a multi-point to multi-point way.
- By tunneling Ethernet or IP transparently over the WAN, the WAN is invisible to the nodes in each data center. From the perspective of the data center, the customer edge device behaves like a standard Ethernet switch/bridge, apart from the larger delay across the WAN.
- An object of the present invention is achieved by a method of transmitting Ethernet packets between two or more Ethernet LANs through an interconnecting IP network, each of the Ethernet LANs being connected to the interconnecting IP network by means of one or more respective customer edge devices, wherein an exchange between the customer edge devices of control information associated with the Ethernet packet transmission is processed and controlled by a centralised server connected to each of the customer edge devices via a control connection.
- a further object of the present invention is achieved by a centralised server of an overlay network with two or more Ethernet LANs and an interconnecting IP network, the centralised server comprising two or more interfaces for connecting the centralised server via control connections to respective customer edge devices, each of the customer edge devices connecting one or more associated Ethernet LANs to the interconnecting IP network, whereby the centralised server is adapted to process and control a control information exchange between the customer edge devices, the exchanged control information being associated with a transmission of Ethernet packets between two or more of the two or more Ethernet LANs through the interconnecting IP network.
- a further object of the present invention is achieved by a customer edge device associated with one or more Ethernet LANs, the customer edge device comprising at least one Ethernet interface to the Ethernet LAN, at least one data traffic interface to an interconnecting IP network interconnecting the Ethernet LAN with at least one further Ethernet LAN for a transmission of Ethernet packets between the Ethernet LAN and the at least one further Ethernet LAN via the interconnecting IP network, and a control information interface to a centralised server for exchange of control information associated with the Ethernet packet transmission via a control connection wherein the control information exchanged between the customer edge device and respective customer edge devices of the at least one further Ethernet LAN is sent to and received from the centralised server through the control information interface.
- the two or more Ethernet LANs and the interconnecting IP network form an overlay network.
- the invention realises an overlay system that transparently interconnects Ethernet networks over an IP network, i.e. an Ethernet-over-IP solution that is optimized for data centers.
- the invention provides a simple and scalable solution that neither requires static IP tunnels nor explicit path management, e.g. MPLS label switched paths.
- the invention provides a centralised server, i.e. a single point to which the Ethernet-over-IP system can peer. Therefore, unlike in known approaches which use a distributed control plane, embodiments of the invention make it possible to apply global policies and to link the data center interconnect solution with control and management systems, either a network management, or a cloud computing management, e.g. a cloud orchestration layer.
- a centralized server is supported by research results that show that commercial off-the-shelf personal computer technology is able to process on the order of 100,000 signalling messages per second between a centralized controller and several network devices, over TCP connections.
- This is comparable to OpenFlow technology, which also uses one centralized server, called a controller.
- the expected order of magnitude of control traffic in the proposed system is much smaller, so that a centralized server is sufficiently scalable.
- the centralised server is logically a centralized entity, but may of course be realized in a distributed way, e.g., to improve the resilience. Distributed realisations of the centralised server may also use load balancing.
- the setup of a full mesh of MPLS paths is complex and limits the dynamics of the data center interconnection solution. Tunneling of MPLS over IP would result in additional overhead.
- the invention provides an improved solution which avoids the aforementioned disadvantages.
- the invention proposes a new technology to interconnect Ethernet networks over an IP network, using a centralized server in combination with overlay network mechanisms.
- the invention neither requires a complex setup of tunnels nor specific support by an interconnecting network.
- the invention makes it possible to interconnect data center Ethernet networks over any IP network, even without involvement of the network provider. Also, the use of a centralized server with a potentially global view on the Ethernet network simplifies the enforcement of policies and intelligent traffic distribution mechanisms.
- the invention does not use IP multicast or extended routing protocols, but a centralized server instead, which is simpler and enables centralized control and management. Most notably, the invention does not use extensions of the IS-IS routing protocol, operates on a per-destination-address basis, not on a per-flow basis, provides additional overlay topology management functions, and scales to large networks.
- the invention relies on a centralized server instead of proprietary routing protocol extensions.
- a centralized server is simpler to implement, deploy, and operate than an overlay that requires several IP multicast groups. It can also very easily be coupled with other control and management systems, e. g., for the dynamic configuration of policies.
- the invention is much simpler to configure and implement, as the edge devices only require a minimum initial configuration and only maintain soft state for the traffic in the overlay.
- Ethernet interconnectivity can be offered even for a large number of highly distributed data center sites that are turned on and off frequently.
- control information is related to one or more of: mapping of Ethernet addresses of network devices of Ethernet LANs to IP addresses of customer edge devices, information concerning a scope of Ethernet LANs and/or VLAN tags, Address Resolution Protocol (ARP) information, membership information of multicast groups inside the Ethernet LANs, filtering policies, firewall rules, overlay topology, information about path characteristics between customer edge devices, bootstrapping and configuration information for devices joining an overlay network comprising the two or more Ethernet LANs.
- ARP Address Resolution Protocol
- the inventive method uses a centralized server.
- TCP Transmission Control Protocol
- the customer edge devices report information to the centralised server, which distributes the information then to the other customer edge devices, and preferably also maintains a global view of the whole data center network and the attachment of Ethernet devices in the different Ethernet segments.
- TLS Transport Layer Security
- the method further comprises the steps of reporting, by one or more of the customer edge devices, control information to the centralised server; managing, by the centralised server, the received control information and distributing processed control information to one or more of the customer edge devices including a first customer edge device associated with a first Ethernet LAN of the two or more Ethernet LANs; and using, by the first customer edge device, the received control information for controlling a transmission of Ethernet data traffic from a first network device of the first Ethernet LAN through the interconnecting IP network to a second network device of a second Ethernet LAN of the two or more Ethernet LANs.
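The report-and-distribute cycle described above can be sketched as follows; the class and method names (`CentralisedServer`, `report`) and the use of in-memory queues to stand in for control connections are illustrative assumptions, not terminology from the patent:

```python
# Hypothetical sketch of the report/distribute cycle at the centralised server.

class CentralisedServer:
    def __init__(self):
        self.global_view = {}   # MAC address -> CE WAN IP address (global view)
        self.edges = {}         # CE id -> list of messages "sent" to that edge

    def register(self, ce_id):
        self.edges[ce_id] = []

    def report(self, ce_id, mappings):
        """A customer edge reports locally learned MAC -> CE-IP mappings;
        the server merges them into its global view and distributes the
        processed information to all *other* customer edges."""
        self.global_view.update(mappings)
        for other, queue in self.edges.items():
            if other != ce_id:
                queue.append(dict(mappings))


server = CentralisedServer()
for ce in ("CE1", "CE2", "CE3"):
    server.register(ce)

# CE2 reports that host B (in LAN2) is reachable via CE2's WAN address
server.report("CE2", {"00:00:00:00:00:0b": "192.0.2.2"})
```

CE1 and CE3 can then use the distributed mapping to encapsulate traffic for host B without further server interaction.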
- the method further comprises the steps of sending, by a first network device of a first Ethernet LAN of the two or more Ethernet LANs, an Ethernet packet destined for an Ethernet address of a second network device of a second Ethernet LAN of the two or more Ethernet LANs; receiving, by a first customer edge device associated with the first Ethernet LAN, the Ethernet packet and checking if a forwarding table managed by the first customer edge device contains a mapping of the Ethernet address of the second network device to an IP address of a customer edge device associated with the second Ethernet LAN; if the forwarding table does not contain the said mapping, sending by the first customer edge device an address resolution request to the centralised server and receiving from the centralised server in response to the address resolution request a reply message specifying the said mapping; encapsulating, by the first customer edge device, the Ethernet packet with an encapsulation header inside an IP packet comprising a destination address of the second customer edge device according to the mapping; and sending the encapsulated Ethernet packet via the interconnecting IP network to the second customer edge device.
- the encapsulation header at least comprises an IP header.
- further shim layers may be used for encapsulation, most notably the User Datagram Protocol (UDP) or the Generic Routing Encapsulation (GRE), or both.
- UDP User Datagram Protocol
- GRE Generic Routing Encapsulation
- the IP addresses of the destination customer edge device are learned from the centralised server if they are not already locally known. Ethernet packets are then transported over the IP network to the destination customer edge devices, decapsulated there, and finally delivered to the destination Ethernet device inside the destination data center LAN.
- a UDP encapsulation of data plane packets and a TCP-based control connection to the centralised server works in environments where other protocols, such as IP multicast or routing protocols, are blocked.
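A TCP-based control connection needs some convention for message boundaries; the patent does not specify a wire format, so the length-prefixed JSON framing below is a purely illustrative sketch:

```python
# Hypothetical framing of control messages on the TCP control connection:
# a 4-byte big-endian length prefix followed by a JSON body.
import json
import struct

def frame(msg: dict) -> bytes:
    """Encode one control message for transmission over TCP (or TLS)."""
    body = json.dumps(msg).encode()
    return struct.pack("!I", len(body)) + body

def unframe(buf: bytes) -> dict:
    """Decode one framed control message from the stream buffer."""
    (length,) = struct.unpack("!I", buf[:4])
    return json.loads(buf[4:4 + length].decode())

req = {"type": "lookup", "mac": "00:00:00:00:00:0b"}
assert unframe(frame(req)) == req
```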
- Other benefits of the invented architecture include:
- the method further comprises the step of announcing, by the centralised server, the lookup reply which is sent to the first customer edge device also to the other customer edge devices, so that they learn the addresses from the centralised server and can store them in an ARP table or in the forwarding table of the customer edge device, similar to an ARP proxy.
- the method further comprises the steps of measuring, by at least one of the customer edge devices, path characteristics and sending the measured path characteristics to the centralised server; establishing, by the centralised server, topology characteristics regarding the communication between the two or more Ethernet LANs on the basis of the received path characteristics; announcing, by the centralised server, the established topology characteristics to the customer edge devices; and making use of this information in routing decisions by at least one of the customer edge devices.
- the method further comprises the steps of routing, on account of announced topology characteristics, an ongoing communication between a first and a second Ethernet LAN of the at least three Ethernet LANs via a third customer edge device of a third Ethernet LAN of the at least three Ethernet LANs.
- customer edge devices can also use more sophisticated forwarding and traffic engineering mechanisms. Specifically, embodiments of the invention allow a multi-hop forwarding in the overlay to move traffic away from congested links between two data center sites. In practice, two hops will be sufficient in most cases.
- the invention does not use IP multicast. Instead any multicast or broadcast traffic is duplicated in the customer edge devices and forwarded point-to-point in UDP datagrams to each customer edge device. This design, which is similar to the handling of such packets in VPLS, avoids problems in networks not supporting IP multicast.
- the use of multi-hop forwarding allows bypassing a potentially congested link between two data center sites, if there is an alternative path.
- the global view of the network at the centralised server, as well as the distribution of path characteristic measurements to the customer edge devices, enables better load balancing and intelligent routing, even if sites are multi-homed. If there is an alternative uncongested path in the overlay, as shown in Figure 6 below, the invention achieves a significantly larger throughput between data center sites compared to a solution that only uses point-to-point forwarding between the customer edge devices.
- the centralised server further comprises a data base containing at least one mapping of an Ethernet address of a network device of one of the Ethernet LANs to an IP address of a customer edge device of the respective Ethernet LAN with which the network device is associated.
- the database of the centralised server further contains at least one address mapping of an Ethernet address of a network device of one of the Ethernet LANs to its corresponding IP address, so that the centralized server can answer Ethernet address lookup queries without Address Resolution Protocol broadcasts.
- the centralised server further comprises an interface to a network or cloud computing management system that provides for instance policies or monitors the overlay.
- the customer edge device further comprises a forwarding table containing at least one mapping of an Ethernet address of a network device of one of the at least one further Ethernet LAN to an IP address of the respective customer edge device of the at least one further Ethernet LAN with which the network device is associated.
- the customer edge device further comprises a path metering unit adapted to measure path characteristics and that the customer edge device is adapted to send the measured path characteristics to the centralised server.
- the customer edge device further comprises an address resolution proxy adapted to analyze an Address Resolution Protocol (ARP) request sent by a network device of the Ethernet LAN in order to obtain information related to the address mapping of IP and Ethernet addresses of a destination network device addressed in the ARP request. If the address mapping is not yet known by the customer edge device, the request is blocked and a corresponding lookup request is sent to the centralised server over the control connection. If the address mapping is already known from the ARP table in the customer edge device, a corresponding ARP reply is sent back to the network device. In both cases, the transport of the ARP messages over the overlay can be avoided.
- the address resolution proxy learns address mappings of the IP and Ethernet addresses of the destination network device from the centralised server and directly replies to the intercepted Address Resolution Protocol request from the network device if the address mapping is already known.
- the address resolution proxy may also learn address mappings by other means, for instance by monitoring of ongoing traffic or additional ARP lookups.
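The proxy behaviour described above can be sketched as follows; the class, the message dictionaries, and the control channel represented as a plain list are assumptions for illustration only:

```python
# Illustrative sketch of the address resolution proxy in the customer edge device.

class ArpProxy:
    def __init__(self, control_channel):
        self.arp_table = {}                     # target IP -> Ethernet (MAC) address
        self.control_channel = control_channel  # lookup requests "sent" to the server

    def learn(self, ip, mac):
        """Mappings may come from the centralised server or from
        monitoring ongoing traffic."""
        self.arp_table[ip] = mac

    def handle_arp_request(self, target_ip):
        """Reply locally if the mapping is known; otherwise block the
        request and query the centralised server over the control
        connection. Either way, no ARP broadcast enters the overlay."""
        if target_ip in self.arp_table:
            return {"op": "reply", "ip": target_ip, "mac": self.arp_table[target_ip]}
        self.control_channel.append({"op": "lookup", "ip": target_ip})
        return None


to_server = []
proxy = ArpProxy(to_server)
proxy.learn("10.0.2.5", "00:00:00:00:00:0b")
```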
- Fig. 2 shows an overlay network according to an embodiment of the present invention.
- the overlay network comprises three Ethernet LANs, LAN1, LAN2, LAN3, and an interconnecting network N.
- One or more of the Ethernet LANs may be connected to the interconnecting network N by a respective customer edge device, e.g., CE1, CE2, CE3.
- Each Ethernet LAN LAN1, LAN2, LAN3 comprises server farms 30 which are connected via Ethernet switches SW to the customer edge device CE1, CE2, CE3 of the respective Ethernet LAN LAN1, LAN2, LAN3.
- the interconnecting network N may be an IP network such as the Internet.
- the customer edge devices CE1, CE2, CE3 are interconnected via network links 22 for the transmission of data traffic packets.
- An Ethernet packet originating from a first Ethernet LAN LAN1 is transmitted via the network links 22 through the interconnecting network N to a second Ethernet LAN LAN2 in the form of an Ethernet-over-IP encapsulation 23, as is explained in more detail in connection with Fig. 3 .
- a key component of the overlay network is a centralized server 10 that handles the exchange of control plane messages associated with a transmission of Ethernet packets between Ethernet LANs through the interconnecting network in an Ethernet-over-IP transmission mode. Therefore, unlike in the prior art, no modifications of routing protocols etc. are required.
- the invention only requires some additional functionality in the customer edge devices CE1, CE2, CE3, as detailed below.
- the centralised server 10 can either be a stand-alone device, e.g. a high-performance personal computer, or it can be integrated in one of the customer edge devices, as indicated by the dotted outline of a box in Fig. 2 , in which case the centralised server 10 is a kind of master device for the overlay. Both alternative realizations can provide the same service.
- Fig. 3 illustrates, in the overlay network of Fig. 2 , the process of tunneling of an Ethernet packet between Ethernet LANs over IP, i.e. a data plane operation.
- a first network device A of a first Ethernet LAN LAN1 of the three Ethernet LANs LAN1, LAN2, LAN3 sends an Ethernet packet 20.
- the Ethernet packet 20 contains as destination address an Ethernet address of a second network device B of a second Ethernet LAN LAN2 of the two or more Ethernet LANs LAN1, LAN2, LAN3, as source address the Ethernet address of the first network device A, and a payload.
- the customer edge device CE1 associated with the first Ethernet LAN LAN1 receives the Ethernet packet 20 and determines from a forwarding table 31 managed by the first customer edge device CE1 a mapping of the Ethernet address of the second network device B to an IP address of a customer edge device CE2 associated with the second Ethernet LAN LAN2.
- the first customer edge device CE1 encapsulates the Ethernet packet 20 with an IP header 24 comprising an IP address of the source customer edge device CE1, an IP address of the destination customer edge device CE2, and further header fields according to the chosen encapsulation protocol.
- the source customer edge device CE1 sends the encapsulated Ethernet packet 28 with the encapsulation header 24 via a network link 22 through the interconnecting IP network N to the destination customer edge device CE2.
- the second customer edge device CE2 decapsulates the received Ethernet packet 20 for delivery within the second Ethernet LAN LAN2 to the second network device B. As a result, an end-to-end transfer 27 between the hosts A and B in the Ethernet LANs is achieved.
- Ethernet packets are encapsulated into an IP encapsulation packet, e.g. a UDP packet, using an additional header, and then sent via IP to the IP address of the customer edge device at the destination Ethernet LAN.
- This data plane operation is similar to other tunnel solutions.
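The data plane step of Fig. 3 can be sketched with a loopback round trip; the UDP port number is an assumption, and the kernel-added UDP and IP headers stand in for the encapsulation header 24:

```python
# Sketch of Ethernet-over-IP: the Ethernet frame travels as the payload
# of a UDP datagram addressed to the destination customer edge device.
import socket

ENCAP_PORT = 14789   # assumed UDP port for the encapsulation

def encapsulate_and_send(sock, eth_frame: bytes, dst_ce_ip: str):
    """CE1 side: wrap the raw Ethernet frame in a UDP/IP packet."""
    sock.sendto(eth_frame, (dst_ce_ip, ENCAP_PORT))

def receive_and_decapsulate(sock) -> bytes:
    """CE2 side: the UDP payload is the original Ethernet frame, ready
    for delivery inside the destination Ethernet LAN."""
    frame, _addr = sock.recvfrom(65535)
    return frame

# loopback round trip standing in for the WAN path CE1 -> CE2
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", ENCAP_PORT))
rx.settimeout(2.0)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# dst MAC, src MAC, EtherType, abbreviated payload
eth_frame = bytes(6) + bytes(6) + b"\x08\x00" + b"payload"
encapsulate_and_send(tx, eth_frame, "127.0.0.1")
received = receive_and_decapsulate(rx)
tx.close()
rx.close()
```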
- Fig. 4 illustrates, in the overlay network of Fig. 2 , an Ethernet address resolution over a centralised server 10, i.e. a control plane function.
- a new data connection 40 is to be established from a first network device A of a first Ethernet LAN LAN1 of the two or more Ethernet LANs LAN1, LAN2, LAN3 to a second network device B of a second Ethernet LAN LAN2 of the two or more Ethernet LANs LAN1, LAN2, LAN3.
- a first customer edge device CE1 associated with the first Ethernet LAN LAN1 blocks an address resolution request 41 sent by the first network device A and sends a corresponding lookup request 42 from the first customer edge device CE1 to the centralised server 10, assuming that the address mapping is not already locally known in CE1.
- After receipt of the lookup request 42, the centralised server 10 forwards 43 the lookup request to all other customer edge devices CE2, CE3, i.e. to all except the source customer edge device CE1. As an alternative (not shown), the server 10 could also respond directly to the lookup request if the address mapping is already known in its ARP table.
- After receipt of the forwarded lookup request 43, the other customer edge devices CE2, CE3 distribute the lookup request 44 as an ARP lookup among the network devices of the respective Ethernet LANs LAN2, LAN3.
- the customer edge device CE2 associated with the Ethernet LAN LAN2, in which the destination network device B is located, receives the corresponding lookup reply from the destination network device B and forwards the lookup reply 46 to the centralised server 10.
- the centralised server 10 manages and processes the received lookup reply 46 and sends a lookup reply 47 to the first customer edge device CE1 which had initiated the lookup request 42.
- the first customer edge device CE1 sends the lookup reply 49 to the first network device A which had initiated the address resolution request 41.
- the centralised server 10 announces 48 the lookup reply 47 which is sent by the centralised server 10 to the first customer edge device CE1 also to the third customer edge device CE3 for its learning of addresses from the centralised server 10.
- the other customer edge devices can in future answer address lookup queries and encapsulate and forward packets to those destinations without interacting with the server.
- a customer edge device CE1, CE2, CE3 only forwards an Ethernet packet to the overlay if the destination address is known.
- the customer edge devices CE1, CE2, CE3 learn addresses from the centralized server 10.
- the learning from the centralized server 10 is one of the key differentiators compared to prior art systems.
- the invention does not need established multicast trees or routing protocol extensions.
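The server-mediated address learning of Fig. 4 can be condensed into a synchronous sketch; all data structures and the `resolve()` helper are illustrative assumptions:

```python
# server_view mirrors the server's address data base; lans lists the MACs
# reachable behind each customer edge together with that edge's WAN IP
# address; caches are the per-CE forwarding caches filled by announcements.
server_view = {}
lans = {"CE2": {"00:00:00:00:00:0b": "192.0.2.2"},
        "CE3": {"00:00:00:00:00:0d": "192.0.2.3"}}
caches = {"CE1": {}, "CE2": {}, "CE3": {}}

def resolve(asking_ce: str, target_mac: str):
    """Server-mediated lookup: consult the global view first; otherwise
    'flood' the lookup to all other edges' LANs. Announce any learned
    mapping to every edge so later lookups need no server interaction."""
    if target_mac not in server_view:
        for ce, macs in lans.items():
            if ce != asking_ce and target_mac in macs:
                server_view[target_mac] = macs[target_mac]
    mapping = server_view.get(target_mac)
    if mapping is not None:
        for cache in caches.values():   # announcement to all edges
            cache[target_mac] = mapping
    return mapping
```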
- the address learning is handled as follows:
- Fig. 5 illustrates, in the overlay network of Fig. 2 , performance and overlay measurement, collection of measurement data, announcement of path characteristics and distribution of overlay topology information.
- a first data connection 50AB is established from a first network device A of a first Ethernet LAN LAN1 of the two or more Ethernet LANs LAN1, LAN2, LAN3 to a second network device B of a second Ethernet LAN LAN2 of the two or more Ethernet LANs LAN1, LAN2, LAN3.
- a second data connection 50AC is established from the first network device A to a third network device C of the second Ethernet LAN LAN2.
- a third data connection 50AD is established from the first network device A to a fourth network device D of a third Ethernet LAN LAN3 of the two or more Ethernet LANs LAN1, LAN2, LAN3.
- Path metering units 26 of the customer edge devices CE1, CE2, CE3 measure 51 path characteristics of the data transmission paths 50AB, 50AC, 50AD to all other known customer edge devices, e.g. by measuring packet loss and optionally also packet delay, and send 52 the measured path characteristics to the centralised server 10, e.g. in the form of a path characteristics report.
- the centralised server 10 establishes topology characteristics regarding the data transmission, i.e. communication, between the three Ethernet LANs LAN1, LAN2, LAN3 on the basis of the received path characteristics.
- the centralised server 10 announces 53 the established topology characteristics to the customer edge devices CE1, CE2, CE3. At least one of the customer edge devices CE1, CE2, CE3 makes use of this information in subsequent routing decisions.
- the method uses the centralised server 10 to distribute delay and load information for all paths 50AB, 50AC, 50AD, in order to enable optimized overlay routing as described below.
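One way the collection and aggregation at the server could look; the field names and the aggregation rule (latest report per directed CE pair wins) are assumptions for illustration:

```python
# Hypothetical path characteristics reports from the path metering units,
# aggregated into the server's topology view.
reports = [
    {"src": "CE1", "dst": "CE2", "loss": 0.02, "delay_ms": 40.0},
    {"src": "CE1", "dst": "CE3", "loss": 0.00, "delay_ms": 25.0},
    {"src": "CE3", "dst": "CE2", "loss": 0.00, "delay_ms": 20.0},
]

def build_topology(reports):
    """Keep the latest report per directed pair of customer edge devices."""
    topo = {}
    for r in reports:
        topo[(r["src"], r["dst"])] = {"loss": r["loss"], "delay_ms": r["delay_ms"]}
    return topo

topology = build_topology(reports)
```

The resulting `topology` is what the server would announce back to the customer edge devices for use in routing decisions.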
- This measurement uses the following techniques:
- Fig. 6 illustrates, in the overlay network of Fig. 5 , a multi-hop routing in the overlay between different Ethernet LANs.
- Two paths 60AB, 60AC suffer from a congestion 61 in the interconnecting network N, namely a first path 60AB between the network device A in a first Ethernet LAN LAN1 and a second network device B of a second Ethernet LAN LAN2, and a second path 60AC between the network device A in the first Ethernet LAN LAN1 and a third network device C of the second Ethernet LAN LAN2.
- From path measurements, the customer edge device CE1 notices 62 a loss and/or delay of Ethernet packets transmitted on these congested paths 60AB, 60AC. Alternatively, the problem could also be noticed by CE2.
- Triggered by a corresponding control message reporting the congestion, sent via the control connection from the customer edge device CE1 to the centralised server 10, the centralised server 10, based on its established topology characteristics of the overlay network, announces 63 that the data transmission path 60AD and the path between the third customer edge device CE3 of a third Ethernet LAN LAN3 and the second customer edge device CE2 of the second Ethernet LAN LAN2 are not congested.
- the first customer edge device CE1 of the first Ethernet LAN LAN1 sends 64 at least a part of the data traffic from the congested data transmission paths 60AB, 60AC, namely the data traffic from the congested data transmission path 60AB, to the third customer edge device CE3.
- the third customer edge device CE3 forwards 65 the packets towards their final destination, i.e. to the second customer edge device CE2. This can be achieved by decapsulating the received Ethernet packets and encapsulating them again with the new destination address. In this way, the data traffic between the network devices A and B is re-routed 66 via the third customer edge device CE3.
- Embodiments of the invention achieve an overlay multi-hop routing.
- Such overlay routing is not considered by prior art data center interconnect solutions.
- Multi-hop routing in the overlay between the sites can work around congestion or suboptimal IP routing on the direct path, if there are more than two sites attached to the overlay. Such, preferably triangular, re-routing can result in a larger delay, but may still be beneficial to improve the overall throughput. Yet, a fundamental challenge is loop prevention.
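A two-hop routing decision with a simple loop guard could be sketched as below; the loss threshold, field names, and the rule that a packet already relayed once must go direct (so no overlay loop can form) are illustrative assumptions:

```python
# Hypothetical two-hop overlay routing decision at a customer edge device.
LOSS_THRESHOLD = 0.01

def next_hop(topology, src, dst, relayed=False):
    """Return the CE to forward to: the direct path if healthy, otherwise
    the best single relay. A packet that was already relayed once is
    always sent direct, which prevents forwarding loops."""
    direct = topology.get((src, dst), {"loss": 1.0})
    if relayed or direct["loss"] <= LOSS_THRESHOLD:
        return dst
    # consider every other CE as a one-hop relay
    candidates = []
    for (a, b), metrics in topology.items():
        if a == src and b != dst:
            leg2 = topology.get((b, dst))
            if leg2 is not None:
                candidates.append((max(metrics["loss"], leg2["loss"]), b))
    if candidates and min(candidates)[0] < direct["loss"]:
        return min(candidates)[1]
    return dst

# topology as announced by the centralised server (Fig. 6 situation:
# the direct path CE1 -> CE2 is congested, the relay via CE3 is clean)
topology = {("CE1", "CE2"): {"loss": 0.05},
            ("CE1", "CE3"): {"loss": 0.0},
            ("CE3", "CE2"): {"loss": 0.0}}
```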
- the overlay routing in ECO is realized as follows:
- Fig. 7 illustrates an embodiment of a customer edge device CE.
- the customer edge device CE comprises a first interface 71 for a TCP connection to a centralised server, at least one second interface 72 to an Ethernet LAN, preferably in the form of a data center, and at least one third interface 73 to the interconnecting IP network, i.e. the overlay.
- the customer edge device CE comprises a protocol engine 74 for managing a protocol used for the control message exchange with a centralised server of the overlay network.
- the customer edge device CE comprises a forwarding table 31, an ARP proxy 25, a path meter unit 26, an Ethernet switching unit 78 and an encapsulation unit 79 that encapsulates the Ethernet packets in IP packets and that adds further shim protocols if required for the transport over the WAN.
- the forwarding table 31 comprises mappings between entries in a first section 311 with Ethernet addresses of destinations, in a second section 312 with local interfaces, and in a third section 313 with IP addresses of target customer edge devices.
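The three-section forwarding table 31 could be modelled as follows. This is an illustrative sketch only; the field names, addresses and lookup convention are assumptions, not the patent's data layout:

```python
# Assumed model of forwarding table 31: each destination Ethernet address
# maps either to a local interface (section 312, host in the local LAN) or
# to the IP address of the remote customer edge device serving it (section 313).

forwarding_table = {
    "00:11:22:33:44:55": {"local_if": "eth1", "ce_ip": None},          # local host
    "66:77:88:99:aa:bb": {"local_if": None, "ce_ip": "198.51.100.2"},  # behind a remote CE
}

def lookup(eth_dst):
    """Return ('local', interface) or ('tunnel', ce_ip); None if unknown,
    which would trigger an address lookup at the centralised server."""
    entry = forwarding_table.get(eth_dst)
    if entry is None:
        return None
    if entry["local_if"] is not None:
        return ("local", entry["local_if"])
    return ("tunnel", entry["ce_ip"])
```

A miss in this table is the point where the control connection to the centralised server comes into play.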
- the forwarding table 31, the protocol engine 74, the ARP proxy 25, and the path meter unit 26 are located in a slow path part 701 of the customer edge device CE, whereas the Ethernet switching unit 78 and an encapsulation unit 79 are located in a fast path part 702 of the customer edge device CE.
- Fig. 7 illustrates the main and additional functional components of a customer edge device, which is typically a router but acts as an Ethernet switch/bridge towards the internal network interface or interfaces.
- the important functions are:
- Fig. 8 illustrates an embodiment of a centralised server 10.
- the centralised server 10 comprises at least a first interface 81 for a TCP connection to a first customer edge device CE1 and a second interface 82 for a TCP connection to a second customer edge device CE2.
- the centralised server 10 may further comprise a third interface 83 to a network management system or a cloud computing management system.
- the centralised server 10 further comprises a global policies and decision logic 84, a data base 85 mapping Ethernet addresses to IP addresses of customer edge devices CE1, CE2, a data base 86 containing overlay topology and path characteristics, a server function unit 87, and a first and a second protocol engine 88, 89 for managing a protocol used for the control message exchange with the customer edge devices CE1, CE2 of the overlay network.
- Fig. 8 shows the main and additional functions of the centralised server.
- the centralised server is on the one hand a centralized control and policy decision point, and, on the other hand, a mirroring server that distributes information from the individual customer edge devices in the overlay.
- the functions can be summarized as follows:
Abstract
Description
- The present invention relates to a method of transmitting Ethernet packets between two or more Ethernet LANs through an interconnecting IP network, a centralised server and a customer edge device (LAN = Local Area Network; IP = Internet Protocol).
- Cloud computing services are typically hosted in data centers that are internally realized by large Ethernet networks. There is a certain trend to decentralize these data centers, i.e. to host services in a larger number of smaller, geographically distributed data centers.
- Fig. 1 shows a typical scenario of a data center interconnect over a Wide Area Network (WAN) known in prior art, wherein the data centers LAN1, LAN2, LAN3 typically use a flat Ethernet network or an Ethernet/IP network, in combination with Virtual Local Area Network (= VLAN) and/or specific addressing schemes. Due to their different geographical location, the distributed data center sites LAN1, LAN2, LAN3 have to be interconnected by Wide Area Network technology, such as optical links, Multi Protocol Label Switching (= MPLS) paths, or networks providing connectivity at IP level.
- Each data center site LAN1, LAN2, LAN3 is connected to the interconnecting network N by a customer edge device CE. Each data center LAN1, LAN2, LAN3 comprises server farms 30 which are connected via switches SW to the customer edge device CE of the respective data center site LAN1, LAN2, LAN3. The interconnecting network N, which may be a transport network based on IP/MPLS, comprises three interconnected provider edges PE, one for each customer edge device CE. The connection of a customer edge device CE with its associated provider edge PE may be via a user network interface UNI. A connection of a first provider edge PE and a second provider edge PE may be via a network-to-network interface NNI. For simplicity, Fig. 1 assumes that only one Ethernet LAN is attached to the CE. Alternatively, several Ethernet LANs can be attached to a customer edge device, e. g., using different Ethernet interfaces to the CE.
- There are many technologies that can interconnect the Ethernet networks LAN1, LAN2, LAN3 over layer 1, layer 2, or layer 3 links. Their common objective is to transparently interconnect all Ethernet networks LAN1, LAN2, LAN3. The customer edge devices transport, i.e. tunnel, the Ethernet traffic over the WAN in a multi-point to multi-point way. By tunneling Ethernet or IP transparently over the WAN, the WAN is invisible for the nodes in each data center. From the perspective of the data center, the customer edge device is similar to a standard Ethernet switch/bridge, obviously apart from the larger delay in the WAN.
- It is the object of the present invention to provide an improved solution for an interconnection of distributed Ethernet LANs over an IP network.
- An object of the present invention is achieved by a method of transmitting Ethernet packets between two or more Ethernet LANs through an interconnecting IP network, each of the Ethernet LANs being connected to the interconnecting IP network by means of one or more respective customer edge devices, wherein an exchange between the customer edge devices of control information associated with the Ethernet packet transmission is processed and controlled by a centralised server connected to each of the customer edge devices via a control connection. A further object of the present invention is achieved by a centralised server of an overlay network with two or more Ethernet LANs and an interconnecting IP network, the centralised server comprising two or more interfaces for connecting the centralised server via control connections to respective customer edge devices, each of the customer edge devices connecting one or more associated Ethernet LANs to the interconnecting IP network, whereby the centralised server is adapted to process and control a control information exchange between the customer edge devices, the exchanged control information being associated with a transmission of Ethernet packets between two or more of the two or more Ethernet LANs through the interconnecting IP network. 
And a further object of the present invention is achieved by a customer edge device associated with one or more Ethernet LANs, the customer edge device comprising at least one Ethernet interface to the Ethernet LAN, at least one data traffic interface to an interconnecting IP network interconnecting the Ethernet LAN with at least one further Ethernet LAN for a transmission of Ethernet packets between the Ethernet LAN and the at least one further Ethernet LAN via the interconnecting IP network, and a control information interface to a centralised server for exchange of control information associated with the Ethernet packet transmission via a control connection wherein the control information exchanged between the customer edge device and respective customer edge devices of the at least one further Ethernet LAN is sent to and received from the centralised server through the control information interface.
- The two or more Ethernet LANs and the interconnecting IP network form an overlay network. The invention realises an overlay system that transparently interconnects Ethernet networks over an IP network, i.e. an Ethernet-over-IP solution that is optimized for data centers. In this description the terms "data center", "data center site" and "site" are used synonymously with the term "Ethernet LAN".
- The invention provides a simple and scalable solution that neither requires static IP tunnels nor explicit path management, e.g. MPLS label switched paths.
- The invention provides a centralised server, i.e. a single point to which the Ethernet-over-IP system can peer. Therefore, unlike in known approaches which use a distributed control plane, embodiments of the invention make it possible to apply global policies and to link the data center interconnect solution with control and management systems, either a network management, or a cloud computing management, e.g. a cloud orchestration layer.
- The use of a centralized server is supported by research results showing that commercial off-the-shelf personal computer technology is able to process on the order of 100,000 signalling messages per second between a centralized controller and several network devices over TCP connections. There is a certain similarity to the OpenFlow technology, which also uses one centralized server, called a controller. The expected order of magnitude of control traffic in the proposed system is much smaller, i.e., a centralized server is sufficiently scalable. The centralised server is logically a centralized entity, but may of course be realized in a distributed way, e.g., to improve the resilience. Distributed realisations of the centralised server may also use load balancing.
- The invention provides an advantageous alternative or complement to the standardized, multi-vendor solution known as Virtual Private Local Area Network Service (= VPLS), if only IP connectivity is available. VPLS is based on MPLS. While VPLS is an appropriate solution whenever an MPLS link to each data center site is available, this requirement will not necessarily be fulfilled if a larger number of small data centers are used for cloud computing offers, or, e. g., distributed Content Delivery Network (= CDN) caches. In that case, at least a subset of sites may only be connected via IP links, or the public Internet. This implies that a pure MPLS-based solution may not be sufficient. This gap is covered by the present invention.
- Furthermore, the setup of a full mesh of MPLS paths is complex and limits the dynamics of the data center interconnection solution. Tunneling of MPLS over IP would result in additional overhead. The invention provides an improved solution which avoids the aforementioned disadvantages.
- The invention proposes a new technology to interconnect Ethernet networks over an IP network, using a centralized server in combination with overlay network mechanisms.
- One of the main benefits of the invention is its simplicity. The invention neither requires a complex setup of tunnels nor specific support by an interconnecting network. The invention makes it possible to interconnect data center Ethernet networks over any IP network, even without involvement of the network provider. Also, the use of a centralized server with a potentially global view on the Ethernet network simplifies the enforcement of policies and intelligent traffic distribution mechanisms.
- The service provided by the invention differs from other VPN solutions (VPN = Virtual Private Network). Unlike IPsec VPNs, this invention does not focus on encryption and thereby avoids the complexity of setting up the corresponding security associations (IPsec = Internet Protocol Security). Still, the invention can be natively implemented on top of IPsec. The invention also differs from tunneling solutions such as L2TP/L2TPv3 and PPTP, as it is a soft-state solution only with no explicit tunnel setup (L2TP = Layer 2 Tunneling Protocol; PPTP = Point-to-Point Tunneling Protocol). This results in less configuration overhead and the ability to scale to a large number of data center sites.
- The invention does not use IP multicast or extended routing protocols, but a centralized server instead, which is simpler and enables centralized control and management. Most notably, the invention does not use extensions of the IS-IS routing protocol, operates on a per-destination-address basis, not on a per-flow basis, provides additional overlay topology management functions, and scales to large networks.
- The invention relies on a centralized server instead of proprietary routing protocol extensions. A centralized server is simpler to implement, deploy, and operate than an overlay that requires several IP multicast groups. It can also very easily be coupled with other control and management systems, e. g., for the dynamic configuration of policies.
- Compared to the existing data center interconnect solutions that use static tunnels or label switched paths, e. g. VPLS, the invention is much simpler to configure and implement, as the edge devices only require a minimum initial configuration and only maintain soft state for the traffic in the overlay. As in the framework of the invention it is easy to add and remove sites from the overlay, Ethernet interconnectivity can be offered even for a large number of highly distributed data center sites that are turned on and off frequently.
- Further advantages are achieved by embodiments of the invention indicated by the dependent claims.
- According to an embodiment of the invention, the control information is related to one or more of: mapping of Ethernet addresses of network devices of Ethernet LANs to IP addresses of customer edge devices, information concerning a scope of Ethernet LANs and/or VLAN tags, Address Resolution Protocol (ARP) information, membership information of multicast groups inside the Ethernet LANs, filtering policies, firewall rules, overlay topology, information about path characteristics between customer edge devices, bootstrapping and configuration information for devices joining an overlay network comprising the two or more Ethernet LANs.
- Instead of transporting control information inside a routing protocol between the customer edge devices, the inventive method uses a centralized server. Each customer edge device is connected to the centralised server by a control connection, preferably a TCP connection, and exchanges control information (TCP = Transmission Control Protocol). Specifically, this control connection transports
- ● mappings of Ethernet addresses to the IP addresses of customer edge devices,
- ● information concerning the scope of Ethernet VLANs,
- ● Address Resolution Protocol (ARP) information,
- ● membership information of multicast groups inside the data center network segments,
- ● filtering policies such as firewall rules,
- ● overlay topology and information about the path characteristics between the customer edge devices, and
- ● bootstrapping and configuration information for devices joining the overlay.
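The items listed above could be carried over the TCP control connection as typed messages. The following sketch uses JSON framing and message names that are assumptions for illustration; the patent only specifies that a control connection to the centralised server transports this information:

```python
# Hypothetical control-message encoding for the CE <-> server connection.
# Message type names and the JSON wire format are assumptions.
import json

CONTROL_MESSAGE_TYPES = {"mapping_report", "arp_lookup", "arp_reply",
                         "path_report", "topology_announce", "policy_update"}

def make_mapping_report(eth_addr, ce_ip):
    """CE -> server: report that eth_addr is reachable behind this CE."""
    return json.dumps({"type": "mapping_report",
                       "eth_addr": eth_addr, "ce_ip": ce_ip})

def parse_control_message(raw):
    """Decode one control message and reject unknown types."""
    msg = json.loads(raw)
    if msg["type"] not in CONTROL_MESSAGE_TYPES:
        raise ValueError("unknown control message type: " + msg["type"])
    return msg

wire = make_mapping_report("00:11:22:33:44:55", "198.51.100.1")
msg = parse_control_message(wire)
# msg["ce_ip"] == "198.51.100.1"
```

In a deployment these messages would be exchanged over the (optionally TLS-protected) TCP connection mentioned in the following paragraph.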
- The customer edge devices report information to the centralised server, which then distributes the information to the other customer edge devices, and preferably also maintains a global view of the whole data center network and the attachment of Ethernet devices in the different Ethernet segments. The control connections can also be encrypted, e.g. using the Transport Layer Security (= TLS), in order to protect the data integrity and preferably to enable an authentication and authorization of customer edge devices joining the overlay.
- According to another embodiment of the invention, the method further comprises the steps of reporting, by one or more of the customer edge devices, control information to the centralised server; managing, by the centralised server, the received control information and distributing processed control information to one or more of the customer edge devices including a first customer edge device associated with a first Ethernet LAN of the two or more Ethernet LANs; and using, by the first customer edge device, the received control information for controlling a transmission of Ethernet data traffic from a first network device of the first Ethernet LAN through the interconnecting IP network to a second network device of a second Ethernet LAN of the two or more Ethernet LANs.
- According to another embodiment of the invention, the method further comprises the steps of sending, by a first network device of a first Ethernet LAN of the two or more Ethernet LANs, an Ethernet packet destined for an Ethernet address of a second network device of a second Ethernet LAN of the two or more Ethernet LANs; receiving, by a first customer edge device associated with the first Ethernet LAN, the Ethernet packet and checking if a forwarding table managed by the first customer edge device contains a mapping of the Ethernet address of the second network device to an IP address of a customer edge device associated with the second Ethernet LAN; if the forwarding table does not contain the said mapping, sending by the first customer edge device an address resolution request to the centralised server and receiving from the centralised server in response to the address resolution request a reply message specifying the said mapping; encapsulating, by the first customer edge device, the Ethernet packet with an encapsulation header inside an IP packet comprising a destination address of the second customer edge device according to the mapping; sending the encapsulated Ethernet packet via the interconnecting IP network to the second customer edge device; and decapsulating, by the second customer edge device, the received Ethernet packet for delivery within the second Ethernet LAN to the second network device. The customer edge devices should drop packets with destination Ethernet addresses that cannot be resolved.
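The sending-side decision described above can be sketched as follows. Function names and the stubbed server lookup are assumptions for illustration: check the forwarding table, fall back to a server lookup, cache the result, and drop unresolvable packets:

```python
# Hypothetical sketch of the first CE's forwarding decision for an outbound
# Ethernet frame, as described in the embodiment above.

def send_ethernet_packet(frame, table, resolve_via_server, encapsulate):
    """Resolve the destination CE for the frame's Ethernet address,
    consulting the centralised server on a table miss; drop if unresolvable."""
    ce_ip = table.get(frame["eth_dst"])
    if ce_ip is None:
        ce_ip = resolve_via_server(frame["eth_dst"])  # control-plane round trip
        if ce_ip is None:
            return None                               # unresolvable: drop the packet
        table[frame["eth_dst"]] = ce_ip               # cache the learned mapping
    return encapsulate(frame, ce_ip)

# Toy example with a stub server database and a trivial encapsulation:
server_db = {"MAC_B": "203.0.113.2"}
table = {}
pkt = send_ethernet_packet(
    {"eth_dst": "MAC_B", "eth_src": "MAC_A", "data": b"x"},
    table,
    resolve_via_server=server_db.get,
    encapsulate=lambda f, ip: {"ip_dst": ip, "payload": f})
# pkt["ip_dst"] == "203.0.113.2", and the mapping is now cached in the table
```

The caching step is what keeps the control-plane load on the centralised server low after the first packet of a flow.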
- The encapsulation header at least comprises an IP header. In addition, further shim layers may be used for encapsulation, most notably the User Datagram protocol (UDP) or the Generic Routing Encapsulation (GRE), or both.
- The customer edge devices tunnel Ethernet packets over the IP network by encapsulating them into IP packets, e.g. UDP packets, without requiring the explicit setup of tunnels (UDP = User Datagram Protocol). The IP addresses of the destination customer edge device are learned from the centralised server if they are not already locally known. Ethernet packets are then transported over the IP network to the destination customer edge devices, decapsulated there, and finally delivered to the destination Ethernet device inside the destination data center LAN.
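A byte-level sketch of the UDP shim around an Ethernet frame follows. The port number is an arbitrary assumption for illustration; only the 8-byte UDP header is shown, as the outer IP header would be supplied by the sending socket:

```python
# Minimal Ethernet-over-UDP framing sketch. The tunnel port is an assumed
# value; a real deployment would use a registered or configured port.
import struct

TUNNEL_PORT = 4789  # assumption for illustration

def udp_encapsulate(eth_frame: bytes, src_port: int = TUNNEL_PORT,
                    dst_port: int = TUNNEL_PORT) -> bytes:
    """Prepend an 8-byte UDP header (RFC 768 layout) to the Ethernet frame."""
    length = 8 + len(eth_frame)   # UDP length covers header plus payload
    checksum = 0                  # checksum is optional for UDP over IPv4
    header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
    return header + eth_frame

def udp_decapsulate(datagram: bytes) -> bytes:
    """Strip the UDP header and return the original Ethernet frame."""
    (_, _, length, _) = struct.unpack("!HHHH", datagram[:8])
    return datagram[8:length]

# Round trip: destination MAC, source MAC, EtherType, payload.
frame = (b"\x66\x77\x88\x99\xaa\xbb" + b"\x00\x11\x22\x33\x44\x55"
         + b"\x08\x00" + b"payload")
assert udp_decapsulate(udp_encapsulate(frame)) == frame
```

A GRE shim, mentioned above as an alternative, would replace the UDP header with a GRE header in the same position.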
- A UDP encapsulation of data plane packets and a TCP-based control connection to the centralised server works in environments where other protocols, such as IP multicast or routing protocols, are blocked. Other benefits of the invented architecture include:
- ● Auto-configuration: It is very simple to set up and configure the invented method. Adding a new data center site mainly requires the configuration of the address of the centralised server in the customer edge device of the new data center. The edge device then connects to the centralised server and obtains further required information about the overlay from the centralised server.
- ● Realization of highly dynamic virtual networks with simple policy enforcement: As the centralised server can keep track of the overlay network state, it can quickly react to changes, e.g. caused by mobility of Virtual Machines and enforce policies. The centralised server can also enforce specific routing schemes.
- ● Flexible overlay topology management: Due to performance measurements according to the invented method, an optimized traffic distribution between the data center sites is possible, e.g. by multi-hop routing.
- ● Central point of contact: As the centralised server has a global view of the network, it can easily be connected with other network or cloud control and management systems.
- ● Mitigation of address resolution message broadcast storms: The preferably used caching of address resolution information both in the server and in the customer edge devices reduces the need for Ethernet broadcasts and the resulting problems.
- In an embodiment, the method further comprises the steps of intercepting, by a first customer edge device associated with a first Ethernet LAN of the two or more Ethernet LANs, an Address Resolution Request (ARP) sent by a first network device of the first Ethernet LAN, if the first network device intends to resolve an IP address of a second network device located in a second Ethernet LAN to the corresponding Ethernet address, blocking the request if the address mapping of the IP address of the second network device to the Ethernet address of the second device is not known, and sending a corresponding lookup request from the first customer edge device to the centralised server; after receipt of the lookup request, forwarding by the centralised server the lookup request to all other customer edge devices except the first customer edge device; after receipt of the lookup request, distributing by the other customer edge devices, the lookup request among the network devices of the respective Ethernet LANs; receiving, by the other customer edge devices, lookup replies from the network devices of the respective Ethernet LANs and forwarding the lookup replies to the centralised server; managing and processing the received lookup replies by the centralised server and sending a lookup reply to the first customer edge device which had initiated the lookup request; and sending, by the first customer edge device, the lookup reply to the first network device which had initiated the address resolution request.
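The interception logic at the first customer edge device can be condensed into the following sketch. Class and method names are assumptions: answer from the local cache if possible, otherwise block the broadcast and query the centralised server:

```python
# Hypothetical ARP-proxy behaviour of the first customer edge device:
# local cache hit -> reply immediately; miss -> query the centralised server
# instead of letting the ARP broadcast cross the overlay.

class ArpProxy:
    def __init__(self, query_server):
        self.cache = {}                    # IP address -> Ethernet address
        self.query_server = query_server   # callable: lookup via the server

    def handle_arp_request(self, target_ip):
        if target_ip in self.cache:
            return ("reply", self.cache[target_ip])  # answered locally
        eth = self.query_server(target_ip)           # control-plane lookup
        if eth is None:
            return ("blocked", None)                 # unresolved: keep blocked
        self.cache[target_ip] = eth                  # learn for next time
        return ("reply", eth)

proxy = ArpProxy(query_server={"10.0.2.5": "MAC_B"}.get)
first = proxy.handle_arp_request("10.0.2.5")   # resolved via the server
second = proxy.handle_arp_request("10.0.2.5")  # served from the local cache
```

The second request never reaches the server, which is the broadcast-storm mitigation referred to earlier.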
- According to another embodiment of the invention, the method further comprises the step of announcing, by the centralised server, the lookup reply sent to the first customer edge device also to the other customer edge devices, so that they learn the addresses from the centralised server and can store them in an ARP table or in the forwarding table of the customer edge device, similar to an ARP proxy.
- According to another embodiment of the invention, the method further comprises the steps of measuring, by at least one of the customer edge devices, path characteristics and sending the measured path characteristics to the centralised server; establishing, by the centralised server, topology characteristics regarding the communication between the two or more Ethernet LANs on the basis of the received path characteristics; announcing, by the centralised server, the established topology characteristics to the customer edge devices; and making use of this information in routing decisions by at least one of the customer edge devices.
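A sketch of the path-metering step follows. The measured metric (round-trip time), the report format and the selection rule are assumptions; the patent leaves the concrete path characteristics open:

```python
# Hypothetical path meter: condense raw RTT samples into a report for the
# centralised server, and pick a next hop from the announced topology view.

def summarise_path(samples_ms):
    """Summarise raw round-trip-time samples (milliseconds) for reporting."""
    return {"rtt_min_ms": min(samples_ms),
            "rtt_avg_ms": sum(samples_ms) / len(samples_ms),
            "samples": len(samples_ms)}

def pick_next_hop(candidates):
    """Choose the candidate overlay path with the lowest announced average RTT."""
    return min(candidates, key=lambda p: p["rtt_avg_ms"])["via"]

report = summarise_path([12.0, 14.0, 13.0])
# report["rtt_avg_ms"] == 13.0

hop = pick_next_hop([{"via": "CE2", "rtt_avg_ms": 40.0},   # direct peer
                     {"via": "CE3", "rtt_avg_ms": 25.0}])  # overlay detour
# hop == "CE3": the announced detour beats the direct path
```

Such a detour choice is exactly the triangular re-routing case discussed with Fig. 6.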
- According to another embodiment of the invention, in a case where the interconnecting IP network connects at least three Ethernet LANs, the method further comprises the steps of routing, on account of announced topology characteristics, an ongoing communication between a first and a second Ethernet LAN of the at least three Ethernet LANs via a third customer edge device of a third Ethernet LAN of the at least three Ethernet LANs.
- Using the topology information established by the centralised server, customer edge devices can also use more sophisticated forwarding and traffic engineering mechanisms. Specifically, embodiments of the invention allow a multi-hop forwarding in the overlay to move traffic away from congested links between two data center sites. In practice, two hops will be sufficient in most cases. The invention does not use IP multicast. Instead any multicast or broadcast traffic is duplicated in the customer edge devices and forwarded point-to-point in UDP datagrams to each customer edge device. This design, which is similar to the handling of such packets in VPLS, avoids problems in networks not supporting IP multicast.
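The broadcast handling described above, replication at the ingress customer edge device rather than IP multicast, can be sketched as follows. The peer list and send function are assumptions for illustration:

```python
# Hypothetical broadcast/multicast handling: the ingress CE duplicates the
# Ethernet frame and unicasts one UDP copy per remote customer edge device,
# so the interconnecting network needs no IP multicast support.

def flood_broadcast(frame, peer_ce_ips, send_unicast):
    """Replicate a broadcast frame point-to-point to every peer CE."""
    for ce_ip in peer_ce_ips:
        send_unicast(ce_ip, frame)
    return len(peer_ce_ips)

sent = []
count = flood_broadcast(b"\xff" * 6 + b"rest-of-frame",
                        ["198.51.100.2", "198.51.100.3"],
                        send_unicast=lambda ip, f: sent.append(ip))
# count == 2: one copy per remote CE
```

This mirrors the ingress replication used by VPLS, as noted above.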
- Most notably, the use of multi-hop forwarding allows bypassing a potentially congested link between two data center sites, if there is an alternative path. The global view of the network at the centralised server, as well as the distribution of path characteristic measurements to the customer edge devices, enable a better load balancing and intelligent routing, also if sites are multi-homed. If there is an alternative uncongested path in the overlay, as shown in Figure 6 below, the invention achieves a significantly larger throughput between data center sites compared to a solution that only uses point-to-point forwarding between the customer edge devices.
- According to another embodiment of the invention, the centralised server further comprises a data base containing at least one mapping of an Ethernet address of a network device of one of the Ethernet LANs to an IP address of a customer edge device of the respective Ethernet LAN with which the network device is associated.
- According to another embodiment of the invention, the database of the centralised server further contains at least one address mapping of an Ethernet address of a network device of one of the Ethernet LANs to its corresponding IP address, so that the centralized server can answer Ethernet address lookup queries without Address Resolution Protocol broadcasts.
- According to another embodiment of the invention, the centralised server further comprises an interface to a network or cloud computing management system that provides for instance policies or monitors the overlay.
- According to another embodiment of the invention, the customer edge device further comprises a forwarding table containing at least one mapping of an Ethernet address of a network device of one of the at least one further Ethernet LAN to an IP address of the respective customer edge device of the at least one further Ethernet LAN with which the network device is associated.
- According to another embodiment of the invention, the customer edge device further comprises a path metering unit adapted to measure path characteristics and that the customer edge device is adapted to send the measured path characteristics to the centralised server.
- According to another embodiment of the invention, the customer edge device further comprises an address resolution proxy adapted to analyze an Address Resolution Request (ARP) sent by a network device of the Ethernet LAN in order to receive information related to the address mapping of IP and Ethernet addresses of a destination network device addressed in the ARP request. If the address mapping is not known yet by the customer edge device, the request is blocked and a corresponding lookup request is sent to the centralised server over the control connection. If the address mapping is already known from the ARP table in the customer edge device, a corresponding ARP reply is sent back to the network device. In both cases, the transport of the ARP messages over the overlay can be avoided.
- According to a preferred embodiment, the address resolution proxy learns address mappings of the IP and Ethernet addresses of the destination network device from the centralised server and directly replies to the intercepted Address Resolution Protocol request from the network device if the address mapping is already known. The address resolution proxy may also learn address mappings by other means, for instance by monitoring of ongoing traffic or additional ARP lookups.
- These as well as further features and advantages of the invention will be better appreciated by reading the following detailed description of exemplary embodiments taken in conjunction with accompanying drawings of which:
- Fig. 2
- is a diagram of the architecture of an overlay network according to the present invention;
- Fig. 3
- is a diagram showing the tunneling of an Ethernet packet between Ethernet LANs over IP;
- Fig. 4
- is a diagram of an Ethernet address resolution over a centralised server;
- Fig. 5
- is a diagram of collecting and distributing overlay topology information and performance measurements;
- Fig. 6
- is a diagram of a multi-hop routing in the overlay between different Ethernet LANs;
- Fig. 7
- is a diagram of the basic architecture of a customer edge device; and
- Fig. 8
- is a diagram of the basic architecture of a centralised server.
- Fig. 2 shows an overlay network according to an embodiment of the present invention. The overlay network comprises three Ethernet LANs, LAN1, LAN2, LAN3, and an interconnecting network N. One or more of the Ethernet LANs may be connected to the interconnecting network N by a respective customer edge device, e.g., CE1, CE2, CE3. Each Ethernet LAN LAN1, LAN2, LAN3 comprises server farms 30 which are connected via Ethernet switches SW to the customer edge device CE1, CE2, CE3 of the respective Ethernet LAN LAN1, LAN2, LAN3. The interconnecting network N may be an IP network such as the Internet. The customer edge devices CE1, CE2, CE3 are interconnected via network links 22 for the transmission of data traffic packets. An Ethernet packet originating from a first Ethernet LAN LAN1 is transmitted via the network links 22 through the interconnecting network N to a second Ethernet LAN LAN2 in the form of an Ethernet-over-IP encapsulation 23, as is explained in more detail in connection with Fig. 3.
- A key component of the overlay network is a centralized server 10 that handles the exchange of control plane messages associated with a transmission of Ethernet packets between Ethernet LANs through the interconnecting network in an Ethernet-over-IP transmission mode. Therefore, unlike in prior art, no modifications of routing protocols etc. are required. The invention only requires some additional functionality in the customer edge devices CE1, CE2, CE3, as detailed below. The centralised server 10 can either be a stand-alone device, e.g. a high-performance personal computer, or it can be integrated in one of the customer edge devices, as indicated by the dotted outline of a box in Fig. 2, in which case the centralised server 10 is a kind of master device for the overlay. Both alternative realizations can provide the same service. Each customer edge device CE1, CE2, CE3 maintains a control connection 21 - preferably a Transmission Control Protocol (= TCP) or a Transport Layer Security (= TLS) connection - to the centralised server 10. Over these connections to each CE, information about the overlay is exchanged, including the mapping of Ethernet addresses to sites, the overlay topology, certain policies, etc.
Fig. 3 illustrates, in the overlay network ofFig. 2 , the process of tunneling of an Ethernet packet between Ethernet LANs over IP, i.e. a data plane operation. A first network device A of a first Ethernet LAN LAN1 of the three Ethernet LANs LAN1, LAN2, LAN3 sends anEthernet packet 20. TheEthernet packet 20 contains as destination address an Ethernet address of a second network device B of a second Ethernet LAN LAN2 of the two or more Ethernet LANs LAN1, LAN2, LAN3, as source address the Ethernet address of the first network device A, and a payload. The customer edge device CE1 associated with the first Ethernet LAN LAN1 receives theEthernet packet 20 and determines from a forwarding table 31 managed by the first customer edge device CE1 a mapping of the Ethernet address of the second network device B to an IP address of a customer edge device CE2 associated with the second Ethernet LAN LAN2. The first customer edge device CE1 encapsulates theEthernet packet 20 with anIP header 24 comprising an IP address of the source customer edge device CE1, an IP address of the destination customer edge device CE2, and further header fields according to the chosen encapsulation protocol. The source customer edge device CE1 sends the encapsulatedEthernet packet 28 with theencapsulation header 24 via anetwork link 22 through the interconnecting IP network N to the destination customer edge device CE2. The second customer edge device CE2 decapsulates the receivedEthernet packet 20 for delivery within the second Ethernet LAN LAN2 to the second network device B. As a result, an end-to-end transfer 27 between the hosts A and B in the Ethernet LANs is achieved. - For all Ethernet addresses that are known to be located in other sites, the Ethernet packets are encapsulated into an IP encapsulation packet, e.g. an UDP packet, using an additional header, and then sent via IP to the IP address of the customer edge device at the destination Ethernet LAN. 
This data plane operation is similar to other tunnel solutions.
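The encapsulation step described above can be sketched as follows; the shim header layout, the UDP port value, and all addresses are illustrative assumptions, since the description only requires an IP encapsulation, e.g. UDP, with an additional header:

```python
import struct

OVERLAY_UDP_PORT = 4789  # placeholder port; the patent does not fix one

# Forwarding table 31 of the source CE: destination Ethernet address -> IP
# address of the customer edge device behind which the destination lives.
forwarding_table = {
    "00:11:22:33:44:55": "192.0.2.2",  # example: host B behind CE2
}

def encapsulate(eth_frame: bytes, dst_mac: str):
    """Return (destination CE IP, UDP port, payload) for the outer packet,
    or None if the destination is unknown (the CE then queries the
    centralised server instead of flooding the packet)."""
    dst_ce_ip = forwarding_table.get(dst_mac)
    if dst_ce_ip is None:
        return None
    # Minimal illustrative shim header: 1 flag byte + 3 reserved bytes.
    shim = struct.pack("!B3x", 0)
    return dst_ce_ip, OVERLAY_UDP_PORT, shim + eth_frame
```

The destination customer edge device strips the shim header and the outer headers and delivers the inner Ethernet frame unchanged into its LAN.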
-
Fig. 4 illustrates, in the overlay network of Fig. 2, an Ethernet address resolution over a centralised server 10, i.e. a control plane function. A new data connection 40 is to be established from a first network device A of a first Ethernet LAN LAN1 of the two or more Ethernet LANs LAN1, LAN2, LAN3 to a second network device B of a second Ethernet LAN LAN2 of the two or more Ethernet LANs LAN1, LAN2, LAN3. A first customer edge device CE1 associated with the first Ethernet LAN LAN1 blocks an address resolution request 41 sent by the first network device A and sends a corresponding lookup request 42 from the first customer edge device CE1 to the centralised server 10, assuming that the address mapping is not already locally known in CE1. After receipt of the lookup request 42, the centralised server 10 forwards 43 the lookup request to all other customer edge devices CE2, CE3 except the source customer edge device, i.e. the first customer edge device CE1. Not shown in Figure 4 is that, as an alternative, the server 10 could also directly respond to the lookup request if the address mapping is already known in its ARP table. After receipt of the forwarded lookup request 43, the other customer edge devices CE2, CE3 distribute the lookup request 44 as an ARP lookup among the network devices of the respective Ethernet LANs LAN2, LAN3. The other customer edge device CE2 associated with the Ethernet LAN LAN2 wherein the destination network device B is located receives the corresponding lookup reply from the destination network device B and forwards the lookup reply 46 to the centralised server 10. The centralised server 10 manages and processes the received lookup reply 46 and sends a lookup reply 47 to the first customer edge device CE1 which had initiated the lookup request 42. The first customer edge device CE1 sends the lookup reply 49 to the first network device A which had initiated the address resolution request 41. - Further, the
centralised server 10 announces 48 the lookup reply 47 which is sent by the centralised server 10 to the first customer edge device CE1 also to the third customer edge device CE3 for its learning of addresses from the centralised server 10. By storing this information in an ARP table, the other customer edge devices can in future answer address lookup queries and encapsulate and forward packets to those destinations without interacting with the server. - A customer edge device CE1, CE2, CE3 only forwards an Ethernet packet to the overlay if the destination address is known. The customer edge devices CE1, CE2, CE3 learn addresses from the
centralized server 10. The learning from the centralized server 10 is one of the key differentiators compared to prior art systems. The invention does not need established multicast trees or routing protocol extensions. The address learning is handled as follows: - ● The customer edge device blocks the forwarding of ARP messages to the WAN interfaces.
- ● ARP lookups are handled by a corresponding protocol via the
centralized server 10. - ● ARP responses are sent back via the
centralised server 10, which announces addresses to all customer edge devices CE1, CE2, CE3. - ● The customer edge devices CE1, CE2, CE3 may incorporate an
ARP proxy 25 to reply to lookups learnt via the control plane. This requires an ARP table with corresponding address mappings and mechanisms to update and remove those entries, for instance by aging-out mechanisms. -
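The ARP proxy behaviour listed above, including the aging-out of entries, can be sketched as follows; the class name, the aging threshold, and the table layout are assumptions for illustration only:

```python
import time

ARP_AGE_LIMIT = 300.0  # seconds; illustrative aging-out threshold

class ArpProxy:
    """Sketch of the CE-side ARP proxy (unit 25): it answers ARP lookups
    from mappings learnt via the control plane and ages out stale entries."""

    def __init__(self):
        self._table = {}  # IP address -> (Ethernet address, learn timestamp)

    def learn(self, ip, mac, now=None):
        """Store a mapping announced by the centralised server."""
        self._table[ip] = (mac, now if now is not None else time.time())

    def lookup(self, ip, now=None):
        """Return the Ethernet address if known and fresh; None means the
        CE must send a lookup request to the centralised server instead."""
        now = now if now is not None else time.time()
        entry = self._table.get(ip)
        if entry is None:
            return None
        mac, ts = entry
        if now - ts > ARP_AGE_LIMIT:
            del self._table[ip]  # entry aged out; force a fresh lookup
            return None
        return mac
```

With such a table filled by server announcements, ARP requests from local hosts can be answered without any broadcast leaving the site.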
Fig. 5 illustrates, in the overlay network ofFig. 2 , performance and overlay measurement, collection of measurement data, announcement of path characteristics and distribution of overlay topology information. - A first data connection 50AB is established from a first network device A of a first Ethernet LAN LAN1 of the two or more Ethernet LANs LAN 1, LAN2, LAN3 to a second network device B of a second Ethernet LAN LAN2 of the two or more Ethernet LANs LAN 1, LAN2, LAN3. A second data connection 50AC is established from the first network device A to a third network device C of the second Ethernet LAN LAN2. A third data connection 50AD is established from the first network device A to a fourth network device D of a third Ethernet LAN LAN3 of the two or more Ethernet LANs LAN1, LAN2, LAN3.
-
Path metering units 26 of the customer edge devices CE1, CE2, CE3 measure 51 path characteristics of the data transmission paths 50AB, 50AC, 50AD from all known other customer edge devices CE1, CE2, CE3, e.g. by measuring packet loss, optionally also packet delay, and send 52 the measured path characteristics to the centralised server 10, e.g. in the form of a path characteristics report. The centralised server 10 establishes topology characteristics regarding the data transmission, i.e. communication, between the three Ethernet LANs LAN1, LAN2, LAN3 on the basis of the received path characteristics. The centralised server 10 announces 53 the established topology characteristics to the customer edge devices CE1, CE2, CE3. At least one of the customer edge devices CE1, CE2, CE3 makes use of this information in subsequent routing decisions. - The method uses the
centralised server 10 to distribute delay and load information for all paths 50AB, 50AC, 50AD, in order to enable optimized overlay routing as described below. This measurement uses the following techniques: - ● At least one customer edge device measures the performance of the paths from all known other customer edge devices, i.e. the interface throughput when encapsulating packets. Note that, assuming predominantly TCP traffic, the throughput is a lower bound of the available path bandwidth.
- ● The customer edge devices may also send ICMP ping messages or other probe messages to all known other customer edge devices (ICMP = Internet Control Message Protocol).
- ● The customer edge devices periodically report the path characteristics per destination customer edge to the centralised server.
- ● The centralised server maintains an overlay topology map, i.e. it stores the available bandwidth, delay, and loss on all overlay paths.
- ● The centralised server announces the topology characteristics to all customer edge devices. The customer edge devices may use this information for multi-hop routing or also for load balancing at and/or towards multi-homed sites.
-
Fig. 6 illustrates, in the overlay network ofFig. 5 , a multi-hop routing in the overlay between different Ethernet LANs. - Of three ongoing data transmission paths 60AB, 60AC, 60AD, two paths 60AB, 60AC suffer from a
congestion 61 in the interconnecting network N, namely a first path 60AB between the network device A in a first Ethernet LAN LAN1 and a second network device B of a second Ethernet LAN LAN2, and a second path 60AC between the network device A in the first Ethernet LAN LAN1 and a third network device C of the second Ethernet LAN LAN2. From path measurements, e.g. from ICMP pings, by means of a path metering unit 26 of the customer edge device CE1 connecting the first Ethernet LAN LAN1 to the interconnecting network N, the customer edge device CE1 notices 62 a loss and/or delay of Ethernet packets transmitted on these congested paths 60AB, 60AC. Alternatively, the problem could also be noticed by CE2. Triggered by a corresponding control message reporting the congestion sent via the control connection from the customer edge device CE1 to the centralised server 10, the centralised server 10, based on its established topology characteristics of the overlay network, announces 63 that neither the third ongoing data transmission path 60AD from the network device A towards the third Ethernet LAN LAN3 nor a path between a third customer edge device CE3 of the third Ethernet LAN LAN3 and a second customer edge device CE2 of the second Ethernet LAN LAN2 is congested. - Consequently the first customer edge device CE1 of the first Ethernet LAN LAN1 sends 64 at least a part of the data traffic from the congested data transmission paths 60AB, 60AC, namely the data traffic from the congested data transmission path 60AB, to the third customer edge device CE3. Subsequently, the third customer edge device CE3 forwards 65 the packets towards their final destination, i.e. to the second customer edge device CE2. This can be achieved by decapsulating the received Ethernet packets and encapsulating them again with the new destination address. This way the data traffic between the network devices A and B is re-routed 66 via the second customer edge device CE2.
- Embodiments of the invention achieve an overlay multi-hop routing. Such overlay routing is not considered by prior art data center interconnect solutions. Multi-hop routing in the overlay between the sites can work around congestion or suboptimal IP routing on the direct path, if there are more than two sites attached to the overlay. This re-routing, which is preferably triangular, can result in a larger delay, but may still be beneficial to improve the overall throughput. Yet, a fundamental challenge is loop prevention. The overlay routing in ECO is realized as follows:
- ● The method only supports two forwarding hops in overlay, in order to avoid complex loops.
- ● The first hop of a 2-hop tunnel is marked in the tunnel header, for instance by a bit flag; if the bit is set, the first customer edge device decapsulates and encapsulates packets again. An alternative solution, which does not require any header bits, is that the encapsulating customer edge device just uses two nested tunnels.
- ● The first hop never forwards packets back to the source site.
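The two-hop forwarding rules above can be sketched as a simple decision function; the flag value and the function signature are illustrative assumptions:

```python
FIRST_HOP_FLAG = 0x01  # assumed position of the bit flag in the tunnel shim header

def forward_at_intermediate_ce(flags: int, src_ce: str, dst_ce: str):
    """Decide what a customer edge device does with a received tunnelled
    packet: return the CE to re-encapsulate towards, or None to deliver
    locally (or drop). At most two overlay hops are ever taken."""
    if flags & FIRST_HOP_FLAG:
        # This CE is the first hop of a 2-hop tunnel: decapsulate and
        # encapsulate again towards the destination CE. The flag is not
        # set on the second leg, so no third overlay hop is possible.
        if dst_ce == src_ce:
            return None  # never forward packets back to the source site
        return dst_ce
    return None  # flag not set: the packet has reached its destination CE
```

The nested-tunnel alternative mentioned above needs no flag at all: the inner tunnel header already names the final destination CE.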
-
Fig. 7 illustrates an embodiment of a customer edge device CE. The customer edge device CE comprises afirst interface 71 for a TCP connection to a centralised server, at least onesecond interface 72 to an Ethernet LAN, preferably in the form of a data center, and at least onethird interface 73 to the interconnecting IP network, i.e. the overlay. The customer edge device CE comprises aprotocol engine 74 for managing a protocol used for the control message exchange with a centralised server of the overlay network. The customer edge device CE comprises a forwarding table 31, anARP proxy 25, apath meter unit 26, anEthernet switching unit 78 and anencapsulation unit 79 that encapsulates the Ethernet packets in IP packets and that adds further shim protocols if required for the transport over the WAN. The forwarding table 31 comprises mappings between entries in a first section 311 with Ethernet addresses of destinations, in asecond section 312 with local interfaces, and in athird section 313 with IP addresses of target customer edge devices. The forwarding table 31, theprotocol engine 74, theARP proxy 25, and thepath meter unit 26 are located in aslow path part 701 of the customer edge device CE, whereas theEthernet switching unit 78 and anencapsulation unit 79 are located in afast path part 702 of the customer edge device CE. -
Fig. 7 illustrates the main and additional functional components of a customer edge device, which is typically a router but acts as an Ethernet switch/bridge towards the internal network interface or interfaces. Preferably, the important functions are: - ● Encapsulation/decapsulation of Ethernet packets in IP, adding an additional header, preferably on top of UDP
- ● Extension of the forwarding data base for remote nodes by the IP address of the destination customer edge device
- ● Control plane learning of Ethernet addresses and ARP proxy
- ● Packet filtering and dropping of Ethernet packets to unknown destination Ethernet addresses
- ● Path characteristic measurement and overlay routing functions
- ● Communication with the centralised server over a TCP connection
-
Fig. 8 illustrates an embodiment of acentralised server 10. Thecentralised server 10 comprises at least afirst interface 81 for a TCP connection to a first customer edge device CE1 and asecond interface 82 for a TCP connection to a second customer edge device CE2. Thecentralised server 10 may further comprise athird interface 83 to a network management system or a cloud computing management system. Thecentralised server 10 further comprises a global policies anddecision logic 84, adata base 85 mapping Ethernet addresses to IP addresses of customer edge devices CE1, CE2, adata base 86 containing overlay topology and path characteristics, aserver function unit 87, and a first and asecond protocol engine -
Fig. 8 shows the main and additional functions of the centralised server. The centralised server is on the one hand a centralized control and policy decision point, and, on the other hand, a mirroring server that distributes information from the individual customer edge devices in the overlay. The functions can be summarized as follows: - ● Distribution of information between all customer edge devices, including Ethernet addresses and their mapping to sites, the scope of VLAN tags, joining of new sites, topology and path characteristics, etc.
- ● Centralized caching of the mapping of Ethernet and IP addresses of hosts in the LANs attached to the overlay and distribution of that information to ARP proxies in the CE devices.
- ● Centralized configuration of policies.
- ● Preferably an external interface to cloud management system.
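The mirroring role of the centralised server summarized above can be sketched as follows; class and method names are illustrative assumptions:

```python
# Sketch of the centralised server's mirroring role: it caches mappings
# reported by one CE and redistributes them to all other connected CEs.
class CentralisedServer:
    def __init__(self):
        self.mac_to_ce = {}  # data base 85: Ethernet address -> CE IP address
        self.ces = set()     # identifiers of connected customer edge devices

    def register_ce(self, ce_id):
        """A customer edge device joins via its control connection."""
        self.ces.add(ce_id)

    def announce(self, reporting_ce, mac, ce_ip):
        """Store a learnt mapping and return the CEs it must be announced
        to (all except the reporter), mirroring the control information."""
        self.mac_to_ce[mac] = ce_ip
        return sorted(self.ces - {reporting_ce})
```

The same fan-out pattern applies to topology announcements and policy updates: one CE reports, the server stores, all other CEs learn.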
Claims (17)
- A method of transmitting Ethernet packets between two or more Ethernet LANs (LAN1, LAN2, LAN3) through an interconnecting IP network (N), each of the Ethernet LANs being connected to the interconnecting IP network (N) by means of one or more respective customer edge devices (CE1, CE2, CE3), the method comprising the step of:
processing and controlling, by a centralised server (10) connected to a plurality of the customer edge devices (CE1, CE2, CE3) via a control connection (21), an exchange of control information associated with the Ethernet packet transmission, wherein the said exchange is between the customer edge devices (CE1, CE2, CE3) of the two or more Ethernet LANs (LAN1, LAN2, LAN3).
- The method according to claim 1,
characterised in
that the control information is related to one or more of: mapping of Ethernet addresses of network devices of Ethernet LANs to IP addresses of customer edge devices, host address resolution information corresponding to the Address Resolution Protocol, information concerning a scope of Ethernet LANs and/or VLAN tags, membership information of multicast groups inside the Ethernet LANs, filtering policies, firewall rules, overlay topology, information about path characteristics between customer edge devices, bootstrapping and configuration information for devices joining an overlay network comprising the two or more Ethernet LANs. - The method according to claim 1,
characterised in
that the method further comprises the steps of:
reporting, by one or more of the customer edge devices (CE1, CE2, CE3), control information to the centralised server (10);
managing, by the centralised server (10), the received control information and distributing processed control information to one or more of the customer edge devices (CE1, CE2, CE3) including a first customer edge device (CE1) associated with a first Ethernet LAN (LAN1) of the two or more Ethernet LANs (LAN1, LAN2, LAN3); and
using, by the first customer edge device (CE1), the received control information for controlling a transmission of Ethernet data traffic from a first network device (A) of the first Ethernet LAN (LAN1) through the interconnecting IP network (N) to a second network device (B) of a second Ethernet LAN (LAN2) of the two or more Ethernet LANs (LAN1, LAN2, LAN3). - The method according to claim 1,
characterised in
that the method further comprises the steps of:
sending, by a first network device (A) of a first Ethernet LAN (LAN1) of the two or more Ethernet LANs (LAN1, LAN2, LAN3), an Ethernet packet (20) destined for an Ethernet address of a second network device (B) of a second Ethernet LAN (LAN2) of the two or more Ethernet LANs (LAN1, LAN2, LAN3);
receiving, by a first customer edge device (CE1) associated with the first Ethernet LAN (LAN1), the Ethernet packet (20) and checking if a forwarding table (31) managed by the first customer edge device (CE1) contains a mapping of the Ethernet address of the second network device (B) to an IP address of a customer edge device (CE2) associated with the second Ethernet LAN (LAN2);
if the forwarding table (31) does not contain the said mapping, sending by the first customer edge device (CE1) an address resolution request to the centralised server (10) and receiving from the centralised server (10) in response to the address resolution request a reply message specifying the said mapping;
encapsulating, by the first customer edge device (CE1), the Ethernet packet (20) with an encapsulation header comprising a destination address of the second customer edge device (CE2) according to the mapping;
sending the encapsulated Ethernet packet (28) via the interconnecting IP network (N) to the second customer edge device (CE2); and
decapsulating, by the second customer edge device (CE2), the received encapsulated Ethernet packet (28) for delivery within the second Ethernet LAN (LAN2) to the second network device (B). - The method according to claim 1,
characterised in
that the method further comprises the steps of:
intercepting, by a first customer edge device (CE1) associated with a first Ethernet LAN (LAN1) of the two or more Ethernet LANs (LAN1, LAN2, LAN3), an address resolution request sent by a first network device (A) of the first Ethernet LAN (LAN1) and sending a corresponding lookup request from the first customer edge device (CE1) to the centralised server (10) if an address mapping associated with the address resolution request is not known;
after receipt of the lookup request, forwarding by the centralised server (10) the lookup request to all other customer edge devices (CE2, CE3) except the first customer edge device (CE1);
after receipt of the lookup request, distributing, by the other customer edge devices (CE2, CE3), the lookup request among the network devices (B) of the respective Ethernet LANs (LAN2, LAN3);
receiving, by the other customer edge devices (CE2, CE3), lookup replies from the network devices (B) of the respective Ethernet LANs (LAN2, LAN3) and forwarding the lookup replies to the centralised server (10);
managing and processing the received lookup replies by the centralised server (10) and sending a lookup reply to the first customer edge device (CE1) which had initiated the lookup request; and
sending, by the first customer edge device (CE1), the lookup reply to the first network device (A) which had initiated the address resolution request. - The method according to claim 5,
characterised in
that the method further comprises the steps of:
announcing, by the centralised server (10), the lookup reply which is sent by the centralised server (10) to the first customer edge device (CE1) also to the other customer edge devices (CE2, CE3) for their learning of addresses from the centralised server (10). - The method according to claim 1,
characterised in
that the method further comprises the steps of:
measuring, by at least one of the customer edge devices (CE1, CE2, CE3), path characteristics and sending the measured path characteristics to the centralised server (10);
establishing, by the centralised server (10), topology characteristics regarding the communication between the two or more Ethernet LANs (LAN1, LAN2, LAN3) on the basis of the received path characteristics;
announcing, by the centralised server (10), the established topology characteristics to the customer edge devices (CE1, CE2, CE3); and
making use of this information in routing decisions by at least one of the customer edge devices (CE1, CE2, CE3). - The method according to claim 7,
characterised in
that the interconnecting IP network (N) connects at least three Ethernet LANs (LAN1, LAN2, LAN3), whereby the method further comprises the steps of:
on account of announced topology characteristics, routing an ongoing data traffic transmission between a first and a second Ethernet LAN of the at least three Ethernet LANs (LAN1, LAN2, LAN3) via a third customer edge device (CE3) of a third Ethernet LAN (LAN3) of the at least three Ethernet LANs (LAN1, LAN2, LAN3). - A centralised server (10) of an overlay network with two or more Ethernet LANs (LAN1, LAN2, LAN3) and an interconnecting IP network (N), the centralised server (10) comprising two or more interfaces (81, 82) for connecting the centralised server (10) via control connections (21) to respective customer edge devices (CE1, CE2, CE3), each of the customer edge devices (CE1, CE2, CE3) connecting one or more associated Ethernet LANs (LAN1, LAN2, LAN3) to the interconnecting IP network (N), whereby the centralised server (10) is adapted to process and control an exchange of control information exchanged between the customer edge devices (CE1, CE2, CE3), the exchanged control information being associated with a transmission of Ethernet packets between two or more of the two or more Ethernet LANs (LAN1, LAN2, LAN3) through the interconnecting IP network (N).
- The centralised server (10) according to claim 9,
characterised in
that the centralised server (10) further comprises a data base (85) containing at least one mapping of an Ethernet address of a network device of one of the Ethernet LANs (LAN1, LAN2, LAN3) to an IP address of a customer edge device of the respective Ethernet LAN (LAN2) with which the network device is associated. - The centralised server (10) according to claim 10,
characterised in
that the data base (85) in the centralized server (10) further contains at least one address mapping of an Ethernet address of a network device of one of the Ethernet LANs (LAN1, LAN2, LAN3) to its corresponding IP address, so that the centralized server can answer Ethernet address lookup queries without Address Resolution Protocol broadcasts. - The centralised server (10) according to claim 9,
characterised in
that the centralised server (10) further comprises an interface (83) to a network or cloud computing management system. - A customer edge device (CE1) associated with one or more Ethernet LANs (LAN 1), the customer edge device (CE1) comprising at least one Ethernet interface to the Ethernet LAN (LAN 1), at least one data traffic interface to an interconnecting IP network (N) interconnecting the Ethernet LAN (LAN 1) with at least one further Ethernet LAN (LAN2, LAN3) for a transmission of Ethernet packets between the Ethernet LAN (LAN 1) and the at least one further Ethernet LAN (LAN2, LAN3) via the interconnecting IP network (N), and a control information interface to a centralised server (10) for exchange of control information associated with the Ethernet packet transmission via a control connection (21) wherein the control information exchanged between the customer edge device (CE1) and respective customer edge devices (CE2, CE3) of the at least one further Ethernet LAN (LAN2, LAN3) is sent to and received from the centralised server (10) through the control information interface.
- The customer edge device (CE1) according to claim 13,
characterised in
that the customer edge device (CE1) further comprises a forwarding table (31) containing at least one mapping of an Ethernet address of a network device of one of the at least one further Ethernet LAN (LAN2, LAN3) to an IP address of the respective customer edge device (CE2, CE3) of the at least one further Ethernet LAN (LAN2, LAN3) with which the network device is associated. - The customer edge device (CE1) according to claim 13,
characterised in
that the customer edge device (CE1) further comprises a path metering unit (26) adapted to measure path characteristics and that the customer edge device (CE1) is adapted to send the measured path characteristics to the centralised server (10). - The customer edge device (CE1) according to claim 13,
characterised in
that the customer edge device (CE1) further comprises an address resolution proxy (25) adapted to intercept an Address Resolution Protocol request sent by a network device (A) of the Ethernet LAN (LAN1) and send a corresponding lookup request to the centralised server (10) if it does not know the address mapping of IP and Ethernet addresses of a destination network device (B) addressed in the Address Resolution Protocol request, and send a reply to the network device (A) once the address mapping is retrieved from the server (10). - The customer edge device (CE1) according to claim 16,
characterised in
that the address resolution proxy (25) learns address mappings of the IP and Ethernet addresses of the destination network device (B) from the centralised server (10) and directly replies to the intercepted Address Resolution Protocol request from the network device (A) if the address mapping is already known from its ARP table.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP11005588.6A EP2547047B1 (en) | 2011-07-08 | 2011-07-08 | Centralized system for routing ethernet packets over an internet protocol network |
CN201280033883.5A CN103650427B (en) | 2011-07-08 | 2012-06-22 | For routeing the integrated system of Ethernet packet on Internet protocol network |
PCT/EP2012/062126 WO2013007496A1 (en) | 2011-07-08 | 2012-06-22 | Centralized system for routing ethernet packets over an internet protocol network |
KR1020147000524A KR20140027455A (en) | 2011-07-08 | 2012-06-22 | Centralized system for routing ethernet packets over an internet protocol network |
US14/128,303 US20140133354A1 (en) | 2011-07-08 | 2012-06-22 | Method of transmitting ethernet packets |
JP2014517616A JP6009553B2 (en) | 2011-07-08 | 2012-06-22 | A centralized system for routing Ethernet packets over Internet protocol networks |
Publications (2)
Publication Number | Publication Date |
---|---|
EP2547047A1 true EP2547047A1 (en) | 2013-01-16 |
EP2547047B1 EP2547047B1 (en) | 2016-02-17 |
Family
ID=44774272
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP11005588.6A Not-in-force EP2547047B1 (en) | 2011-07-08 | 2011-07-08 | Centralized system for routing ethernet packets over an internet protocol network |
Country Status (6)
Country | Link |
---|---|
US (1) | US20140133354A1 (en) |
EP (1) | EP2547047B1 (en) |
JP (1) | JP6009553B2 (en) |
KR (1) | KR20140027455A (en) |
CN (1) | CN103650427B (en) |
WO (1) | WO2013007496A1 (en) |
US10348570B1 (en) * | 2018-08-30 | 2019-07-09 | Accenture Global Solutions Limited | Dynamic, endpoint configuration-based deployment of network infrastructure |
US10992635B2 (en) * | 2018-10-17 | 2021-04-27 | ColorTokens, Inc. | Establishing connection between different overlay networks using edge application gateway |
US10708770B1 (en) * | 2019-06-06 | 2020-07-07 | NortonLifeLock Inc. | Systems and methods for protecting users |
US10911418B1 (en) | 2020-06-26 | 2021-02-02 | Tempered Networks, Inc. | Port level policy isolation in overlay networks |
US11070594B1 (en) | 2020-10-16 | 2021-07-20 | Tempered Networks, Inc. | Applying overlay network policy based on users |
US10999154B1 (en) | 2020-10-23 | 2021-05-04 | Tempered Networks, Inc. | Relay node management for overlay networks |
CN115696490A (en) * | 2021-07-23 | 2023-02-03 | 中兴通讯股份有限公司 | Local area network communication method, device, terminal, electronic equipment and storage medium |
US20240154936A1 (en) * | 2022-11-09 | 2024-05-09 | Charter Communications Operating, Llc | Proxy address resolution protocol for distributed local area network communications |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0915594A2 (en) * | 1997-10-07 | 1999-05-12 | AT&T Corp. | Method for route selection from a central site |
EP1408659A1 (en) * | 2002-10-07 | 2004-04-14 | NTT DoCoMo, Inc. | Routing control system, routing control device, transfer device and routing control method |
EP1414199A1 (en) * | 2002-10-23 | 2004-04-28 | NTT DoCoMo, Inc. | Routing control system, routing control device, and routing control method |
EP1580940A1 (en) * | 2004-03-25 | 2005-09-28 | AT&T Corp. | Method, apparatus and computer readable medium storing a software program for selecting routes to be distributed within networks |
EP1701491A1 (en) * | 2005-03-08 | 2006-09-13 | AT&T Corp. | Method and apparatus for providing dynamic traffic control within a communications network |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6016319A (en) * | 1995-10-31 | 2000-01-18 | Lucent Technologies, Inc. | Communications system for transmission of datagram packets over connection-oriented networks |
JP3097581B2 (en) * | 1996-12-27 | 2000-10-10 | 日本電気株式会社 | Ad-hoc local area network configuration method, communication method and terminal |
JP2000332817A (en) * | 1999-05-18 | 2000-11-30 | Fujitsu Ltd | Packet processing unit |
US8051211B2 (en) * | 2002-10-29 | 2011-11-01 | Cisco Technology, Inc. | Multi-bridge LAN aggregation |
US7631100B2 (en) * | 2003-10-07 | 2009-12-08 | Microsoft Corporation | Supporting point-to-point intracluster communications between replicated cluster nodes |
WO2006093299A1 (en) * | 2005-03-04 | 2006-09-08 | Nec Corporation | Tunneling device, tunnel frame sorting method used for the device, and its program |
JP4328312B2 (en) * | 2005-05-16 | 2009-09-09 | 日本電信電話株式会社 | VPN service providing method and optical path establishment method |
JP4692258B2 (en) * | 2005-12-07 | 2011-06-01 | 株式会社日立製作所 | Router device and communication system |
JP4602950B2 (en) * | 2006-08-08 | 2010-12-22 | 日本電信電話株式会社 | VPN service management method |
JP4758387B2 (en) * | 2007-04-26 | 2011-08-24 | 日本電信電話株式会社 | Data packet transfer control method, system and program |
JP2009100162A (en) * | 2007-10-16 | 2009-05-07 | Kddi Corp | Communication quality controller and computer program |
US8489750B2 (en) * | 2008-02-28 | 2013-07-16 | Level 3 Communications, Llc | Load-balancing cluster |
US9940208B2 (en) * | 2009-02-27 | 2018-04-10 | Red Hat, Inc. | Generating reverse installation file for network restoration |
SE533821C2 (en) * | 2009-06-12 | 2011-01-25 | Peter Olov Lager | Systems for measuring telecommunication quality with operator-common test equipment |
2011
- 2011-07-08 EP EP11005588.6A patent/EP2547047B1/en not_active Not-in-force

2012
- 2012-06-22 CN CN201280033883.5A patent/CN103650427B/en not_active Expired - Fee Related
- 2012-06-22 JP JP2014517616A patent/JP6009553B2/en not_active Expired - Fee Related
- 2012-06-22 KR KR1020147000524A patent/KR20140027455A/en not_active Application Discontinuation
- 2012-06-22 US US14/128,303 patent/US20140133354A1/en not_active Abandoned
- 2012-06-22 WO PCT/EP2012/062126 patent/WO2013007496A1/en active Application Filing
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105144652A (en) * | 2013-01-24 | 2015-12-09 | 惠普发展公司,有限责任合伙企业 | Address resolution in software-defined networks |
EP2949093A4 (en) * | 2013-01-24 | 2016-08-10 | Hewlett Packard Entpr Dev Lp | Address resolution in software-defined networks |
EP2806601A1 (en) * | 2013-05-22 | 2014-11-26 | Fujitsu Limited | Tunnels between virtual machines |
CN108632147A (en) * | 2013-06-29 | 2018-10-09 | 华为技术有限公司 | Message multicast processing method and device |
CN108632147B (en) * | 2013-06-29 | 2022-05-13 | 华为技术有限公司 | Message multicast processing method and device |
WO2015080092A1 (en) * | 2013-11-26 | 2015-06-04 | 日本電気株式会社 | Network control device, network system, network control method, and program |
US10063420B2 (en) | 2013-11-26 | 2018-08-28 | Nec Corporation | Network control apparatus, network system, network control method, and program |
CN104734874A (en) * | 2013-12-20 | 2015-06-24 | 华为技术有限公司 | Method and device for confirming network failures |
CN104734874B (en) * | 2013-12-20 | 2018-04-27 | 华为技术有限公司 | Method and device for determining a network failure |
Also Published As
Publication number | Publication date |
---|---|
JP2014523173A (en) | 2014-09-08 |
CN103650427A (en) | 2014-03-19 |
EP2547047B1 (en) | 2016-02-17 |
KR20140027455A (en) | 2014-03-06 |
WO2013007496A1 (en) | 2013-01-17 |
CN103650427B (en) | 2016-08-17 |
JP6009553B2 (en) | 2016-10-19 |
US20140133354A1 (en) | 2014-05-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2547047B1 (en) | Centralized system for routing ethernet packets over an internet protocol network | |
CA3066459C (en) | Service peering exchange | |
US9912614B2 (en) | Interconnection of switches based on hierarchical overlay tunneling | |
US12047285B2 (en) | Low-overhead routing | |
US7486659B1 (en) | Method and apparatus for exchanging routing information between virtual private network sites | |
US8037303B2 (en) | System and method for providing secure multicasting across virtual private networks | |
EP2579544B1 (en) | Methods and apparatus for a scalable network with efficient link utilization | |
EP3809641A1 (en) | Improved port mirroring over evpn vxlan | |
US20130163594A1 (en) | Overlay-Based Packet Steering | |
EP4173239B1 (en) | Layer-2 network extension over layer-3 network using encapsulation | |
US10848414B1 (en) | Methods and apparatus for a scalable network with efficient link utilization | |
Jain | LAN Extension and Network Virtualization in Cloud Data Centers | |
Phung et al. | Internet acceleration with LISP traffic engineering and Multipath TCP |
CN115604056A (en) | Efficient storage implementation of downstream VXLAN identifiers |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012

17P | Request for examination filed |
Effective date: 20120217

AK | Designated contracting states |
Kind code of ref document: A1
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX | Request for extension of the european patent |
Extension state: BA ME

RBV | Designated contracting states (corrected) |
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

111Z | Information provided on other rights and legal means of execution |
Free format text: AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
Effective date: 20130410

RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: ALCATEL LUCENT

D11X | Information provided on other rights and legal means of execution (deleted) |

GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1

RIC1 | Information provided on ipc code assigned before grant |
Ipc: H04L 29/06 20060101ALI20150901BHEP
Ipc: H04L 12/717 20130101ALI20150901BHEP
Ipc: H04L 29/12 20060101ALI20150901BHEP
Ipc: H04L 12/46 20060101AFI20150901BHEP

INTG | Intention to grant announced |
Effective date: 20151007

GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210

AK | Designated contracting states |
Kind code of ref document: B1
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG | Reference to a national code |
Ref country code: GB
Ref legal event code: FG4D

REG | Reference to a national code |
Ref country code: CH
Ref legal event code: EP

REG | Reference to a national code |
Ref country code: IE
Ref legal event code: FG4D

REG | Reference to a national code |
Ref country code: AT
Ref legal event code: REF
Ref document number: 776083
Country of ref document: AT
Kind code of ref document: T
Effective date: 20160315

REG | Reference to a national code |
Ref country code: DE
Ref legal event code: R096
Ref document number: 602011023329
Country of ref document: DE

REG | Reference to a national code |
Ref country code: NL
Ref legal event code: MP
Effective date: 20160217

REG | Reference to a national code |
Ref country code: LT
Ref legal event code: MG4D

REG | Reference to a national code |
Ref country code: AT
Ref legal event code: MK05
Ref document number: 776083
Country of ref document: AT
Kind code of ref document: T
Effective date: 20160217

REG | Reference to a national code |
Ref country code: FR
Ref legal event code: PLFP
Year of fee payment: 6

PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Ref country code: GR, effective date: 20160518
Ref country code: FI, effective date: 20160217
Ref country code: NO, effective date: 20160517
Ref country code: IT, effective date: 20160217
Ref country code: ES, effective date: 20160217

PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Ref country code: AT, effective date: 20160217
Ref country code: RS, effective date: 20160217
Ref country code: LT, effective date: 20160217
Ref country code: PT, effective date: 20160617
Ref country code: PL, effective date: 20160217
Ref country code: LV, effective date: 20160217
Ref country code: SE, effective date: 20160217
Ref country code: NL, effective date: 20160217

PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Ref country code: EE, effective date: 20160217
Ref country code: DK, effective date: 20160217

REG | Reference to a national code |
Ref country code: DE
Ref legal event code: R097
Ref document number: 602011023329
Country of ref document: DE

PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Ref country code: SM, effective date: 20160217
Ref country code: SK, effective date: 20160217
Ref country code: RO, effective date: 20160217
Ref country code: CZ, effective date: 20160217

PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261

STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Ref country code: BE, effective date: 20160217

26N | No opposition filed |
Effective date: 20161118

PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Ref country code: SI, effective date: 20160217
Ref country code: BG, effective date: 20160517

REG | Reference to a national code |
Ref country code: CH
Ref legal event code: PL

PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Ref country code: MC, effective date: 20160217

PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES
Ref country code: LI, effective date: 20160731
Ref country code: CH, effective date: 20160731

REG | Reference to a national code |
Ref country code: IE
Ref legal event code: MM4A

REG | Reference to a national code |
Ref country code: FR
Ref legal event code: PLFP
Year of fee payment: 7

PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES
Ref country code: IE, effective date: 20160708

PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES
Ref country code: LU, effective date: 20160708

PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HU
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO
Effective date: 20110708
Ref country code: CY
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20160217

PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HR
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20160217
Ref country code: IS
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20160217
Ref country code: MT
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES
Effective date: 20160731
Ref country code: TR
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20160217
Ref country code: MK
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20160217

REG | Reference to a national code |
Ref country code: FR
Ref legal event code: PLFP
Year of fee payment: 8

PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Ref country code: AL, effective date: 20160217

PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE
Payment date: 20190625
Year of fee payment: 9

PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB
Payment date: 20190703
Year of fee payment: 9

PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR
Payment date: 20200611
Year of fee payment: 10

REG | Reference to a national code |
Ref country code: DE
Ref legal event code: R119
Ref document number: 602011023329
Country of ref document: DE

GBPC | GB: European patent ceased through non-payment of renewal fee |
Effective date: 20200708

PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES
Ref country code: GB, effective date: 20200708

PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES
Ref country code: DE, effective date: 20210202

PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES
Ref country code: FR, effective date: 20210731