WO2017157206A1 - Cloud data center interconnection method and apparatus (云数据中心互联方法及装置) - Google Patents


Info

Publication number
WO2017157206A1
WO2017157206A1 (PCT/CN2017/075871)
Authority
WO
WIPO (PCT)
Prior art keywords
cloud data
data center
tunnel
communicated
information
Prior art date
Application number
PCT/CN2017/075871
Other languages
English (en)
French (fr)
Inventor
周蕙菁
Original Assignee
中兴通讯股份有限公司 (ZTE Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Publication of WO2017157206A1 publication Critical patent/WO2017157206A1/zh

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00: Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/40: Network security protocols
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00: Data switching networks
    • H04L12/28: Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/46: Interconnection of networks
    • H04L12/4633: Interconnection of networks using encapsulation techniques, e.g. tunneling
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00: Routing or path finding of packets in data switching networks
    • H04L45/74: Address processing for routing

Definitions

  • the present disclosure relates to the field of network communications, and in particular, to a cloud data center interconnection method and apparatus.
  • SDN (Software Defined Network) is a new type of network architecture that advocates the separation of the service, control, and forwarding layers.
  • SDN is an implementation of network virtualization: it supports network abstraction, enables intelligent network control and flexible service scheduling, and accelerates the opening of network capabilities. It is an important supporting technology for operators transforming toward the "Internet +" era.
  • OpenFlow separates the network device control plane from the data plane, thereby enabling flexible control of network traffic and making the network, as a pipeline, more intelligent.
  • DC: Data Center; DCI: Data Center Interconnection; IDC: Internet Data Center.
  • the DCI controller uses PCEP (Path Computation Element Communication Protocol), IS-IS (Intermediate System to Intermediate System), BGP (Border Gateway Protocol), and Netconf to implement centralized computation of the DCI network, intelligent traffic scheduling, and on-demand, real-time bandwidth allocation.
  • this process requires the upper-layer orchestrator to send information through multiple interfaces. If the devices used by two interconnected DCs come from the same manufacturer with the same specifications, i.e., the interfaces of the two DCs match, then interconnecting the two DCs naturally poses no problem. Most existing DCs, however, use devices from different manufacturers, and the interfaces of these devices are not standardized.
  • such interconnection methods are therefore unsuitable for DC equipment produced by different manufacturers.
  • in that case the two DCs can only be interconnected through a proprietary protocol, and a proprietary protocol always carries large limitations: low versatility and a narrow application range.
  • the main technical problem to be solved by the present disclosure is that, in the prior art, cloud data centers using devices of different specifications can only be interconnected through a proprietary protocol.
  • a cloud data center interconnection method including:
  • acquiring the MAC information of the cloud data center to be communicated through the BGP neighbor channel includes: acquiring the MAC information of the cloud data center to be communicated according to the Ethernet Virtual Private Network (EVPN) protocol.
  • obtaining the port information of the local cloud data center used for communication with the cloud data center to be communicated includes: acquiring that port information according to the BGP neighbor information.
  • when the cloud data center to be communicated is in the same network segment as the local cloud data center, the acquired MAC information includes: the IP address, MAC address, VNI number, and tunnel endpoint IP of the cloud data center to be communicated;
  • when they are in different network segments, the acquired MAC information additionally includes the MAC address to be reached by the next hop of the packet.
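The two cases above differ only in whether a next-hop MAC is carried. A minimal sketch of this MAC-information record, with hypothetical field values (the patent does not specify a concrete encoding):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MacRoute:
    """MAC information learned over the BGP neighbor channel.

    next_hop_mac is present only for the different-segment (Layer 3)
    case, matching the distinction drawn in the text.
    """
    ip: str                              # IP address of the remote host
    mac: str                             # MAC address of the remote host
    vni: int                             # VNI number (VXLAN network identifier)
    vtep_ip: str                         # tunnel endpoint IP
    next_hop_mac: Optional[str] = None   # next-hop MAC (Layer 3 case only)

def needs_layer3(route: MacRoute) -> bool:
    # different network segments require the next-hop MAC as well
    return route.next_hop_mac is not None

# hypothetical examples: same-segment and different-segment routes
l2 = MacRoute("192.168.2.2", "mac21", 100, "vtep2")
l3 = MacRoute("192.168.3.2", "mac21", 100, "vtep2", next_hop_mac="sysmac2")
```

The optional field keeps one record type serving both the Layer 2 and Layer 3 interconnection cases.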
  • the establishing of a tunnel according to the acquired MAC information includes any one of the following four types.
  • sending the forwarding table to the gateway device includes:
  • extending the OpenFlow protocol, the extended OpenFlow protocol being used to operate on the tunnel encapsulation;
  • when the tunnel between the local cloud data center gateway device and the gateway device of the cloud data center to be communicated is a VXLAN tunnel, the extended OpenFlow protocol includes:
  • the present disclosure also provides a cloud data center interconnection device, including:
  • the BGP neighbor establishment module is configured to establish a BGP neighbor channel with the cloud data center to be communicated, where the cloud data center to be communicated is a cloud data center that needs to communicate with the local cloud data center;
  • the MAC information acquiring module is configured to acquire the MAC information of the cloud data center to be communicated by using the BGP neighbor channel;
  • a tunnel establishing module configured to establish a tunnel between the local cloud data center gateway device and the cloud data center gateway device to be communicated according to the obtained MAC information
  • a forwarding table generating module configured to learn the obtained MAC information, obtain the port information of the local cloud data center used for communication with the cloud data center to be communicated, and integrate the MAC information and the port information to generate a forwarding table;
  • the forwarding table delivery module is configured to send the forwarding table to the local cloud data center gateway device, so that the gateway device uses the tunnel to communicate with the to-be-communicated cloud data center according to the forwarding table.
  • the MAC information acquiring module acquires MAC information of a cloud data center to be communicated according to an Ethernet virtual private network protocol.
  • the forwarding table generating module acquires port information of the local cloud data center used for communication with the cloud data center to be communicated according to the BGP neighbor information.
  • the MAC information acquiring module includes:
  • the first obtaining sub-module is configured such that, when the cloud data center to be communicated is in the same network segment as the local cloud data center, the obtained MAC information includes: the IP address, MAC address, VNI number, and tunnel endpoint IP of the cloud data center to be communicated;
  • the second obtaining sub-module is configured such that, when the cloud data center to be communicated and the local cloud data center are in different network segments, the obtained MAC information additionally includes the MAC address to be reached by the next hop of the packet.
  • the tunnel establishment module includes at least one of the following four types:
  • the VXLAN tunnel establishment sub-module is configured to establish a VXLAN tunnel according to the obtained MAC information;
  • the GRE tunnel establishment sub-module is configured to establish a GRE tunnel according to the obtained MAC information;
  • the PBB tunnel establishment sub-module is configured to establish a PBB tunnel according to the obtained MAC information;
  • the MPLS tunnel establishment sub-module is configured to establish an MPLS tunnel according to the obtained MAC information.
  • when the internal devices of the local cloud data center belong to different specifications, the forwarding table delivery module includes:
  • a protocol extension module configured to extend the OpenFlow protocol, the extended OpenFlow protocol being used to operate on the tunnel encapsulation;
  • a delivery module configured to send the forwarding table to the gateway device according to the extended OpenFlow protocol.
  • when the tunnel establishment module includes a VXLAN tunnel establishment sub-module, the protocol extension module includes:
  • a first extension sub-module configured to insert a new VXLAN header in front of the IP header and to pop the outermost VXLAN header;
  • a second extension sub-module configured to set the tunnel ID, i.e., the VXLAN network identifier in the outermost VXLAN header;
  • a third extension sub-module configured to insert the outer IP header of the VXLAN tunnel and to pop it;
  • a fourth extension sub-module configured to insert the outer MAC header of the VXLAN tunnel and to pop it.
  • according to the established tunnel and the obtained forwarding table, the local cloud data center gateway device can communicate with the gateway device of the cloud data center to be communicated.
  • the cloud data center interconnection method proposed by the present disclosure not only achieves loose coupling or non-coupling between cloud data centers, but also interconnects cloud data centers using devices of different specifications without employing any proprietary protocol in the process of establishing the interconnection, thereby improving the versatility of cloud data center interconnection.
  • FIG. 1 is a flowchart of a method for interconnecting cloud data centers according to Embodiment 1 of the present disclosure
  • FIG. 2 is a schematic diagram of a cloud data center interconnection apparatus according to Embodiment 2 of the present disclosure
  • FIG. 3 is a schematic diagram of a MAC information acquiring module in FIG. 2;
  • FIG. 4 is a schematic diagram of a tunnel establishment module of FIG. 2;
  • Figure 5 is a schematic diagram of the forwarding table delivery module of Figure 2;
  • FIG. 6 is a schematic diagram of a protocol extension module of FIG. 5;
  • FIG. 7 is a flowchart of a method for interconnecting cloud data centers according to Embodiment 3 of the present disclosure.
  • FIG. 8 is a flowchart of a method for interconnecting cloud data centers according to Embodiment 4 of the present disclosure.
  • the idea of the present disclosure is to acquire the MAC information of the cloud data center to be communicated and establish a tunnel from it; to learn the acquired MAC information and integrate it with the BGP neighbor information to generate a forwarding table between the local cloud data center gateway device and the gateway device of the cloud data center to be communicated; and finally to deliver the forwarding table to the local gateway device, so that it communicates with the remote gateway device according to the established tunnel and the obtained forwarding table.
  • the loose coupling or non-coupling of devices in the cloud data center enables the interconnection of cloud data centers using devices of different specifications.
  • Embodiment 1:
  • This embodiment provides a cloud data center interconnection method, please refer to FIG. 1:
  • the cloud data center to be communicated here refers to a cloud data center that needs to be interconnected with the local cloud data center.
  • the EVPN protocol is a standard protocol.
  • EVPN's integrated services, higher network efficiency, better design flexibility, and greater control capability enable operators to meet emerging demands in their networks with a single VPN technology, such as integrated L2 and L3 services, simplified topology overlay technology, tunneling services, and cloud, virtualization, and data center interconnection services over an IP architecture.
  • in the same-segment case, the MAC information includes the IP address, MAC address, VNI number, and tunnel endpoint IP of the cloud data center to be communicated.
  • in the different-segment case, the obtained MAC information generally also includes the MAC address to be reached by the next hop of the packet.
  • if the cloud data center to be communicated overlaps with the local cloud data center in some network segments, i.e., part of the addresses fall in the same segment, a Layer 2 interconnection must be established between them; otherwise a Layer 3 interconnection must be established.
  • in the Layer 3 case, the acquired MAC information likewise includes the IP address, MAC address, VNI number, and tunnel endpoint IP of the cloud data center to be communicated, plus the MAC address to be reached by the next hop of the data packet.
  • since EVPN separates the control plane from the data plane, the MP-BGP protocol is extended on the control plane to implement the EVPN technology, and the data plane supports multiple tunnel types such as MPLS, PBB, and VXLAN. Therefore, in this embodiment, according to the acquired MAC information, a VXLAN tunnel, a GRE tunnel, a PBB tunnel, or an MPLS tunnel can be established.
  • the MPLS technology is relatively mature and an RFC standard has been formed, so it is the most widely used; VXLAN, though still a draft, may become a mainstream trend in the future owing to its support for a large number of tenants and its ease of maintenance. Therefore, in this embodiment, the established tunnel is a VXLAN tunnel.
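To make the VXLAN choice concrete, here is a minimal sketch of pushing and popping the 8-byte VXLAN header around an inner Ethernet frame (per RFC 7348: flags byte 0x08 with the VNI-valid bit, then the 24-bit VNI). The outer MAC, IP, and UDP headers that a gateway would also push are omitted for brevity:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned UDP destination port for VXLAN

def encapsulate(vni: int, inner_frame: bytes) -> bytes:
    """Prepend the 8-byte VXLAN header: flags 0x08, 24 reserved bits,
    24-bit VNI, 8 reserved bits."""
    if not 0 <= vni < (1 << 24):
        raise ValueError("VNI must fit in 24 bits")
    header = struct.pack("!II", 0x08 << 24, vni << 8)
    return header + inner_frame

def decapsulate(packet: bytes) -> tuple[int, bytes]:
    """Pop the outermost VXLAN header; return (VNI, inner frame)."""
    flags, vni_field = struct.unpack("!II", packet[:8])
    if flags >> 24 != 0x08:
        raise ValueError("I flag not set: not a valid VXLAN header")
    return vni_field >> 8, packet[8:]
```

Encapsulation and decapsulation are exact inverses, which is what the push/pop pairs of the extended OpenFlow actions described later rely on.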
  • generating the forwarding table requires not only the acquired MAC information but also the port information of the local cloud data center obtained from the BGP neighbor information, so that it is known which port of the local cloud data center should receive and send data packets when the two cloud data centers communicate.
  • there is no strict ordering between establishing the tunnel and generating the forwarding table: the tunnel may be established first, or the forwarding table generated first.
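The integration step above can be sketched as a join between learned MAC routes and the local ports over which each BGP neighbor (VTEP) is reached. Field names and sample values are illustrative assumptions, not part of the patent:

```python
def build_forwarding_table(mac_routes, neighbor_ports):
    """Merge learned MAC information with local port information.

    mac_routes:      list of dicts with 'mac', 'vni', 'vtep_ip'
    neighbor_ports:  dict mapping a peer VTEP IP (from BGP neighbor
                     information) to the local outgoing port
    """
    table = []
    for r in mac_routes:
        port = neighbor_ports.get(r["vtep_ip"])
        if port is None:
            continue  # no local port known for this tunnel endpoint
        table.append({
            "match_dst_mac": r["mac"],   # match field for Layer 2 switching
            "out_port": port,            # local port toward the peer DC
            "encap_vni": r["vni"],       # VXLAN network identifier to set
            "tunnel_dst": r["vtep_ip"],  # outer destination of the tunnel
        })
    return table

routes = [{"mac": "mac12", "vni": 100, "vtep_ip": "vtep1"}]
ports = {"vtep1": "Port21"}
fwd = build_forwarding_table(routes, ports)
```

Each resulting entry carries everything the gateway needs: the match key, the egress port, and the tunnel encapsulation parameters.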
  • S105 Send the forwarding table to the local cloud data center gateway device, so that the gateway device uses the tunnel to communicate with the cloud data center to be communicated according to the forwarding table.
  • the OpenFlow protocol may be extended first, so that the extended OpenFlow protocol can operate on the tunnel encapsulation, and the forwarding table is then delivered according to the extended OpenFlow protocol.
  • because the forwarding table must be delivered between devices of different specifications, and the established tunnel is a VXLAN tunnel, the following extensions are made to the OpenFlow protocol:
  • likewise, the cloud data center to be communicated also needs to obtain the MAC information of the local cloud data center, learn the acquired MAC information, combine it with the BGP neighbor information to generate a forwarding table for communicating with the local cloud data center, and send that forwarding table to its gateway device.
  • because the devices inside the cloud data center to be communicated may come from the same vendor or from different vendors, the interfaces of its internal devices may or may not match. Therefore, in this embodiment, the forwarding table delivery process in the cloud data center to be communicated need not be identical to that of the local cloud data center.
  • the tenant in the local cloud data center sends the data packet to the gateway device.
  • according to the forwarding table, the gateway device uses the tunnel to the cloud data center to be communicated, for example the VXLAN tunnel, to transmit the packet to the gateway device of the cloud data center to be communicated.
  • the gateway device of the cloud data center to be communicated then transmits the data packet to the corresponding tenant.
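The gateway's role in the three steps above reduces to one lookup per packet. A sketch of that lookup, with a hypothetical single-entry table (entry fields assumed, not specified by the patent):

```python
def forward(dst_mac: str, forwarding_table: list) -> tuple:
    """Gateway-side step: match the destination MAC against the table
    delivered by the controller; a hit yields the tunnel encapsulation
    to apply and the outgoing port."""
    for entry in forwarding_table:
        if entry["match_dst_mac"] == dst_mac:
            return ("push_vxlan", entry["vni"],
                    entry["tunnel_dst"], entry["out_port"])
    return ("flood", None, None, None)  # unknown destination

# hypothetical table entry for a host reachable through the peer gateway
table = [{"match_dst_mac": "mac12", "vni": 100,
          "tunnel_dst": "vtep1", "out_port": "Port21"}]
```

A hit encapsulates the frame toward the peer VTEP; a miss falls back to normal flooding behavior.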
  • Embodiment 2:
  • the cloud data center interconnection device 20 includes a BGP neighbor establishment module 201, a MAC information acquisition module 202, a tunnel establishment module 203, a forwarding table generation module 204, and a forwarding table delivery module 205.
  • the BGP neighbor establishing module 201 is configured to establish a BGP neighbor channel with the cloud data center to be communicated.
  • the cloud data center to be communicated here refers to a cloud data center that needs to be interconnected with the local cloud data center.
  • the MAC information obtaining module 202 is configured to obtain the MAC information of the cloud data center to be communicated by using the BGP neighboring channel.
  • when the MAC information acquisition module 202 acquires the MAC information of the cloud data center to be communicated, it does so according to the EVPN (Ethernet Virtual Private Network) protocol, which is a standard protocol.
  • the key advantages of EVPN, namely integrated services, higher network efficiency, better design flexibility, and greater control capability, enable operators to meet emerging demands in their networks with a single VPN technology, such as integrated L2 and L3 services, simplified topology overlay technology, tunneling services, and cloud, virtualization, and data center interconnection services over an IP architecture.
  • the MAC information obtaining module 202 includes a first obtaining sub-module 2021 and a second obtaining sub-module 2022.
  • when the two centers are in the same network segment, the first obtaining sub-module 2021 acquires MAC information including the IP address, MAC address, VNI number, and tunnel endpoint IP of the cloud data center to be communicated.
  • when they are in different network segments, the MAC information obtained by the second obtaining sub-module 2022 generally includes the IP address, MAC address, VNI number, and tunnel endpoint IP of the cloud data center to be communicated, plus the MAC address to be reached by the next hop of the data packet.
  • if the cloud data center to be communicated overlaps with the local cloud data center in some network segments, i.e., part of the addresses fall in the same segment, a Layer 2 interconnection is established between them; otherwise a Layer 3 interconnection is established.
  • in the overlapping case the second obtaining sub-module 2022 likewise acquires, as when establishing the Layer 3 interconnection, the IP address, MAC address, VNI number, tunnel endpoint IP, and next-hop MAC address of the cloud data center to be communicated.
  • the tunnel establishment module 203 establishes a tunnel according to the acquired MAC information. Since EVPN separates the control plane from the data plane, the MP-BGP protocol is extended on the control plane to implement the EVPN technology, and the data plane supports multiple tunnel types such as MPLS, PBB, and VXLAN. Therefore, in this embodiment, the tunnel establishment module 203 can include at least one of the following four sub-modules; please refer to Figure 4:
  • the VXLAN tunnel establishment sub-module 2031 is configured to establish a VXLAN tunnel according to the acquired MAC information.
  • the GRE tunnel establishment sub-module 2032 is configured to establish a GRE tunnel according to the acquired MAC information.
  • the PBB tunnel establishment sub-module 2033 is configured to establish a PBB tunnel according to the acquired MAC information.
  • An MPLS tunnel establishment sub-module 2034 is configured to establish an MPLS tunnel according to the acquired MAC information.
  • among these, the MPLS tunnel establishment sub-module 2034 is the most widely used.
  • in this embodiment, the tunnel establishment module 203 includes the VXLAN tunnel establishment sub-module 2031, which is configured to establish a VXLAN-type tunnel.
  • the forwarding table generating module 204 is configured to learn the acquired MAC information and obtain BGP neighbor information, and integrate and generate a forwarding table.
  • the forwarding table generating module 204 generates the forwarding table not only from the acquired MAC information but also from the BGP neighbor information; that is, the port information of the local cloud data center is obtained from the BGP neighbor information, so that it is known which port of the local cloud data center should receive and send data packets when the two cloud data centers communicate.
  • there is no strict ordering between the tunnel establishment module 203 establishing the tunnel according to the acquired MAC information and the forwarding table generation module 204 learning the acquired MAC information, obtaining the BGP neighbor information, and generating the forwarding table: the tunnel may be established first, or the forwarding table generated first.
  • the forwarding table delivery module 205 sends the forwarding table to the gateway device of the local cloud data center, so that the local gateway device communicates with the cloud data center to be communicated according to the forwarding table and the tunnel.
  • when the forwarding table delivery module 205 sends the generated forwarding table to the gateway device of the local cloud data center, it must consider whether the internal devices of the local cloud data center belong to the same specification: when the controller and the switches in the local cloud data center belong to the same specification, the forwarding table delivery module 205 can deliver the table directly in the existing manner.
  • otherwise, the OpenFlow protocol may be extended first, so that the extended OpenFlow protocol can operate on the tunnel encapsulation, and the forwarding table is then sent to the gateway device according to the extended OpenFlow protocol; referring to FIG. 5,
  • the forwarding table delivery module 205 includes a protocol extension module 2051 for extending the OpenFlow protocol, and a delivery module 2052 for transmitting the forwarding table to the gateway device according to the extended OpenFlow protocol.
  • the protocol extension module 2051 can extend OpenFlow because OpenFlow is itself a protocol that supports extension.
  • to implement forwarding between devices of different specifications, the protocol extension module 2051 includes a first extension sub-module 20511, a second extension sub-module 20512, a third extension sub-module 20513, and a fourth extension sub-module 20514, as shown in FIG. 6:
  • the first extension sub-module 20511 inserts a new VXLAN header in front of the IP header and pops the outermost VXLAN header; the second extension sub-module 20512 sets the tunnel ID, i.e., the VXLAN network identifier in the outermost VXLAN header; the third extension sub-module 20513 inserts the outer IP header of the VXLAN tunnel and pops it; the fourth extension sub-module 20514 inserts the outer MAC header of the VXLAN tunnel and pops it.
  • likewise, the cloud data center to be communicated also needs to obtain the MAC information of the local cloud data center, learn the acquired MAC information, combine it with the BGP neighbor information to generate a forwarding table for communicating with the local cloud data center, and send that forwarding table to its gateway device.
  • because the devices inside the cloud data center to be communicated may come from the same vendor or from different vendors, the interfaces of its internal devices may or may not match. Therefore, in this embodiment, the forwarding table delivery process in the cloud data center to be communicated need not be identical to that of the local cloud data center.
  • the tenant in the local cloud data center sends the data packet to the gateway device.
  • according to the forwarding table, the gateway device uses the tunnel to the cloud data center to be communicated, for example the VXLAN tunnel, to transmit the packet to the gateway device of the cloud data center to be communicated.
  • the gateway device of the cloud data center to be communicated then transmits the data packet to the corresponding tenant.
  • Embodiment 3:
  • This embodiment further describes the interconnection of two cloud data centers in the same network segment; please refer to Figure 7:
  • the user sets up a network environment on the same network segment for two cloud data centers that need to be interconnected:
  • the user creates a network segment 192.168.2.0/24 and divides it into two resource pools.
  • the two resource pools are in data centers DC2 and DC1 respectively; for example, 192.168.2.1 to 192.168.2.127 in DC2, and 192.168.2.128 to 192.168.2.254 in DC1.
  • the orchestrator notifies the controllers of DC2 and DC1 of the address range of the resource pool.
  • the user creates two virtual machines, Host21 and Host31, with IP addresses 192.168.2.2 and 192.168.2.203. These two virtual machines fall in DC2 and DC1 respectively, so the two virtual machines in DC2 and DC1 have a Layer 2 interconnection requirement.
  • on gateway GW2, the physical port information (port23) used for interconnection with GW1 and the vtep-ip address (tunnel endpoint IP address) of gateway GW2 are configured by controller 2.
  • the orchestrator notifies the controller 2 to create a Layer 2 interconnected virtual port on the created gateway GW2:
  • the RESTFUL interface provided by the controller includes the following information: the global tenant ID (tenant-id), the RD corresponding to the tenant, and the network segment information (suid or subnet/mask).
  • the controller 2 assigns a VNI number to the subnet segment, that is, a VXLAN tunnel number;
  • the controller 2 creates a Layer 2 virtual port of the subtunnel type on the port port 21 of the VXLAN GW2, and the port type is identified as an external interconnect port.
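The orchestrator-to-controller exchange above involves the three fields named in the text plus a per-subnet VNI allocation. A sketch with hypothetical values and an assumed allocation policy (the patent specifies neither the payload format nor how VNIs are chosen):

```python
# hypothetical RESTFUL payload carrying the three fields named in the text
port_request = {
    "tenant-id": "tenant-001",   # global tenant ID (format assumed)
    "rd": "65000:100",           # RD corresponding to the tenant (assumed)
    "subnet": "192.168.2.0/24",  # network segment information
}

def assign_vni(subnet: str, allocations: dict, base: int = 100) -> int:
    """Allocate one VNI (VXLAN tunnel number) per subnet, reusing an
    existing allocation if present; sequential policy is an assumption."""
    if subnet not in allocations:
        allocations[subnet] = base + len(allocations)
    return allocations[subnet]
```

The key property is idempotence: asking twice for the same subnet must yield the same VNI, so both sides of the interconnection agree on the tunnel number.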
  • S701, DC2 and DC1 establish BGP neighbors and negotiate to support EVPN.
  • S702, controller 2 of DC2 acquires the MAC information of DC1 and transmits its own MAC information to DC1.
  • the MAC information sent by controller 2 of DC2 is shown in Table 1:
  • the MAC information that controller 2 of DC2 receives from DC1 is shown in Table 2:
  • S703, controller 2 of DC2 integrates the local-end port information in the acquired BGP neighbor information with the learned MAC information of DC1; the integrated information obtained is shown in Table 3:
  • when Host11 and Host21 communicate, Host21 requests the MAC information of Host11.
  • the switch vSwitch1 receives the ARP request and sends it to controller 2.
  • controller 2 finds the MAC information of Host11 in the learned MAC information and responds to Host21 with an ARP reply.
  • S704, controller 2 of DC2 generates a forwarding table according to the BGP neighbor information and the MAC information of DC1.
  • the forwarding table formed is substantially in the form of an OpenFlow flow table.
  • controller 2 sends a flow table to gateway GW2 through the switch vSwitch1: packets matching destination MAC address mac12 undergo Layer 2 switching and are diverted to GW2; the outgoing interface on GW2 is Port21, which encapsulates VXLAN.
  • S705, controller 2 of DC2 transmits the OpenFlow flow table to gateway GW2.
  • the flow table is delivered by extending the OpenFlow protocol.
  • the extension rules of the OpenFlow protocol are as follows:
  • Push-Tag/Pop-Tag action: Push VXLAN header (insert a new VXLAN header in front of the IP header) and Pop VXLAN header (pop the outermost VXLAN header).
  • Set-Field Set Tunnel ID action: used to set the VXLAN network identifier in the outermost VXLAN header.
  • Push-Tag/Pop-Tag action: Push VTEP-IP header (insert the outer IP header of the VXLAN tunnel) and Pop VTEP-IP header (pop the outer IP header of the VXLAN tunnel).
  • Push-Tag/Pop-Tag action: Push VTEP-MAC header (insert the outer MAC header of the VXLAN tunnel) and Pop VTEP-MAC header (pop the outer MAC header of the VXLAN tunnel).
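On egress the four extension rules apply in a fixed order: push the VXLAN header, set the VNI, then push the outer IP and outer MAC headers. A sketch modeling that ordered action list as plain tuples; the action names mirror the text but are not objects of any real OpenFlow library:

```python
def vxlan_egress_actions(vni, outer_ips, outer_macs):
    """Ordered extension actions applied when a frame leaves via the
    VXLAN tunnel; ingress applies the corresponding Pop actions in
    reverse order."""
    return [
        ("push_vxlan_header", {}),        # new VXLAN header before the IP header
        ("set_tunnel_id", {"vni": vni}),  # VNI in the outermost VXLAN header
        ("push_vtep_ip", outer_ips),      # outer IP header of the tunnel
        ("push_vtep_mac", outer_macs),    # outer MAC header of the tunnel
    ]

# hypothetical endpoint parameters for the GW2 -> GW1 direction
actions = vxlan_egress_actions(100,
                               {"src": "vtep2", "dst": "vtep1"},
                               {"src": "gw2mac", "dst": "gw1mac"})
```

Keeping the actions ordered innermost-to-outermost matches how the headers nest on the wire.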
  • Embodiment 4:
  • This embodiment further describes the interconnection of two cloud data centers in different network segments; please refer to Figure 8:
  • the user sets up a network environment on different network segments for two cloud data centers that need to be interconnected.
  • the user creates a network segment 192.168.1.0/24; the addresses of this segment are in the resource pool of DC1, with an address range of 192.168.2.128 to 192.168.2.254.
  • the user creates a network segment 192.168.3.0/24; the addresses of this segment are all in the resource pool of DC2.
  • the following describes the Layer 3 interconnection between Host21, with IP address 192.168.3.2 in DC2, and the host in DC1; DC2 is taken as an example, and the process performed in DC1 is substantially similar.
  • the two virtual machines in DC2 and DC1 thus have a Layer 3 interconnection requirement.
  • controller 2 collects the physical port information (port21) used on gateway GW2 for interconnection with GW1, together with the vtep-ip address of gateway GW2; controller 1 does likewise.
  • the orchestrator notifies controller 2 to create a Layer 3 interconnection virtual port on the created gateway GW2:
  • the RESTFUL interface provided by controller 2 includes the following information: the global tenant ID (tenant-id), the RD corresponding to the tenant, and the Layer 3 interconnection interface IP.
  • controller 2 assigns a VNI number, that is, a VXLAN tunnel number, to the user.
  • controller 2 creates a tunnel-type Layer 3 virtual port on port port21 of the VXLAN GW2, with IP address l3ip2.
  • S802: controller 2 of DC2 acquires the MAC information of DC1 and sends its own MAC information to DC1.
  • The MAC information sent by controller 2 of DC2 is shown in Table 4.
  • The routing information sent by controller 2 is shown in Table 5.
  • The MAC information that controller 2 of DC2 receives from DC1 is shown in Table 6.
  • The routing information that controller 2 of DC2 receives from DC1 is shown in Table 7.
  • S803: controller 2 of DC2 integrates the acquired BGP neighbor information with the learned MAC information of DC1; the integrated information is shown in Table 8.
  • S804: controller 2 of DC2 integrates the acquired BGP neighbor information with the learned routing information of DC1; the integrated information is shown in Table 9.
  • Host22 requests the gateway MAC information.
  • Switch vSwitch1 receives the ARP request and sends it to controller 2.
  • Controller 2 finds from the learned MAC information that the gateway MAC is sysmac2, and controller 2 answers the ARP request.
  • S805: controller 2 of DC2 generates a forwarding table according to the BGP neighbor information and the MAC information of DC1.
  • The generated forwarding table is essentially an OpenFlow flow table.
  • Controller 2 delivers a flow table to gateway GW2 via switch vSwitch1: packets whose destination MAC matches mac12 are Layer 2 switched and steered to GW2; on GW2 the outbound interface is port21, with VXLAN encapsulation.
  • S806: controller 2 of DC2 sends the OpenFlow flow table to gateway GW2.
  • The flow table is delivered by extending the OpenFlow protocol.
  • The extension rules of the OpenFlow protocol are as shown in Embodiment 3.
  • Those skilled in the art should understand that the modules or steps of the present disclosure can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network composed of multiple computing devices. Alternatively, they may be implemented by program code executable by a computing device, so that they can be stored in a storage medium (ROM/RAM, magnetic disk, optical disc) and executed by a computing device; in some cases the steps shown or described may be performed in an order different from that herein, or they may be fabricated into individual integrated circuit modules, or multiple of the modules or steps may be implemented as a single integrated circuit module. Therefore, the present disclosure is not limited to any specific combination of hardware and software.
  • The present disclosure is applicable to the field of network communications. It achieves loose coupling or no coupling between devices of different cloud data centers, interconnects cloud data centers using devices of different specifications without using any proprietary protocol, and improves the generality of cloud data center interconnection.
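By way of illustration only (not part of the claimed method), the S805/S806 step above — matching on destination MAC and encapsulating into the VXLAN tunnel via the extended OpenFlow actions — can be sketched as follows. The action names follow the extension rules described for Embodiment 3; the dictionary layout and helper name are our own assumptions:

```python
def entry_to_flow(entry):
    """Build an OpenFlow-style rule from one integrated forwarding-table
    entry: match the destination MAC, apply the extended push actions,
    and output on the local interconnection port."""
    return {
        "match": {"eth_dst": entry["mac"]},
        "actions": [
            ("PUSH_VXLAN_HEADER", {}),                  # new VXLAN header before the IP header
            ("SET_TUNNEL_ID", {"vni": entry["vni"]}),   # VNI in the outermost VXLAN header
            ("PUSH_VTEP_IP_HEADER", {"dst": entry["vtep_ip"]}),
            ("PUSH_VTEP_MAC_HEADER", {}),
            ("OUTPUT", {"port": entry["port"]}),
        ],
    }

# Entry in the shape of Table 3 / Table 9 (values are the patent's examples).
flow = entry_to_flow({"mac": "mac12", "vni": "vni22",
                      "vtep_ip": "vtep-ip1", "port": "port21"})
```

The controller would serialize such a rule over its (extended) OpenFlow channel to GW2; the sketch only models the rule's logical content.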


Abstract

The cloud data center interconnection method and apparatus disclosed herein establish a BGP neighbor relationship between the local cloud data center and the cloud data center to be communicated with, obtain the MAC information of the peer and establish a tunnel, learn the MAC information, and obtain the port information of the local cloud data center; the two are integrated to generate a forwarding table for communication between the gateway device of the local cloud data center and the gateway device of the peer. Finally, the forwarding table is delivered to the local gateway device, so that the two gateway devices communicate according to the established tunnel and the obtained forwarding table. The proposed method not only achieves loose coupling or no coupling between devices of different cloud data centers, but also interconnects cloud data centers using devices of different specifications without any proprietary protocol, improving the generality of cloud data center interconnection.

Description

Cloud data center interconnection method and apparatus — Technical field
The present disclosure relates to the field of network communications, and in particular to a cloud data center interconnection method and apparatus.
Background
SDN (Software Defined Network) is a new network architecture that advocates the separation of service, control and forwarding. As an implementation of network virtualization, SDN supports network abstraction, intelligent network control, flexible service scheduling and the opening of network capabilities, and is a key enabling technology for operators transitioning to the "Internet Plus" era. Its core technology, OpenFlow, separates the control plane of network devices from the data plane, enabling flexible control of network traffic and making the network, as a pipe, more intelligent. With the rapid rise of cloud computing, users increasingly expect to interconnect multiple DCs (Data Centers) within the management domain of an SDN controller, forming a network that interconnects multiple data centers, i.e., DCI (Data Center Interconnect). DCI can consolidate the rich data resources of a user's DCs, provide high-bandwidth, low-latency guarantees for cloud services, and deliver a high-quality service experience.
In DC interconnection, the DCI controller uses PCEP (Path Computation Element Communication Protocol), IS-IS (Intermediate System to Intermediate System), BGP (Border Gateway Protocol) and NETCONF to achieve centralized path computation, intelligent traffic scheduling and on-demand bandwidth provisioning in the DCI network. However, this process requires the upper-layer orchestrator to deliver information through a variety of interfaces. If the devices used by the two DCs to be interconnected come from the same vendor with the same specifications, i.e., the interfaces of the two DCs match, interconnection is naturally not a problem. In practice, however, the devices used by existing DCs mostly come from different vendors, and their interfaces are not standardized; in this situation the existing interconnection approach runs into obstacles — that is, it is not suitable for DC devices produced by different vendors. In this case, the two DCs can only be interconnected through proprietary protocols, which always have significant limitations, such as poor generality and a narrow range of application.
Summary
The main technical problem to be solved by the present disclosure is that, in the prior art, cloud data centers using devices of different specifications can only be interconnected through proprietary protocols.
To solve the above technical problem, the present disclosure provides a cloud data center interconnection method, including:
establishing a BGP neighbor channel with a cloud data center to be communicated with, the cloud data center to be communicated with being a cloud data center with which the local cloud data center needs to communicate;
obtaining MAC information of the cloud data center to be communicated with through the BGP neighbor channel;
establishing, according to the obtained MAC information, a tunnel between a gateway device of the local cloud data center and a gateway device of the cloud data center to be communicated with;
learning the obtained MAC information and obtaining port information of the local cloud data center used for communicating with the cloud data center to be communicated with, and integrating the MAC information and the port information to generate a forwarding table;
sending the forwarding table to the gateway device of the local cloud data center, so that the gateway device communicates with the cloud data center to be communicated with over the tunnel according to the forwarding table.
In an embodiment of the present disclosure, obtaining the MAC information of the cloud data center to be communicated with through the BGP neighbor channel includes: obtaining the MAC information of the cloud data center to be communicated with according to the Ethernet Virtual Private Network protocol.
In an embodiment of the present disclosure, obtaining the port information of the local cloud data center used for communicating with the cloud data center to be communicated with includes: obtaining, according to BGP neighbor information, the port information of the local cloud data center used for communicating with the cloud data center to be communicated with.
In an embodiment of the present disclosure, when the cloud data center to be communicated with and the local cloud data center are in the same network segment, the obtained MAC information includes: the IP address, MAC address, VNI number and tunnel endpoint IP of the cloud data center to be communicated with;
when the cloud data center to be communicated with and the local cloud data center are in different network segments, the obtained MAC information includes: the IP address, MAC address, VNI number and tunnel endpoint IP of the cloud data center to be communicated with, and the next-hop MAC address of the packet.
In an embodiment of the present disclosure, establishing the tunnel according to the obtained MAC information includes any one of the following four:
establishing a VXLAN tunnel according to the obtained MAC information;
establishing a GRE tunnel according to the obtained MAC information;
establishing a PBB tunnel according to the obtained MAC information;
establishing an MPLS tunnel according to the obtained MAC information.
In an embodiment of the present disclosure, when devices inside the local cloud data center are of different specifications, sending the forwarding table to the gateway device includes:
extending the OpenFlow protocol, the extended OpenFlow protocol being used to operate on tunnel encapsulation;
sending the forwarding table to the gateway device according to the extended OpenFlow protocol.
In an embodiment of the present disclosure, when the tunnel between the gateway device of the local cloud data center and the gateway device of the cloud data center to be communicated with is a VXLAN tunnel, extending the OpenFlow protocol includes:
inserting a new VXLAN header before the IP header, and popping the outermost VXLAN header;
setting a tunnel ID to set the VXLAN network identifier in the outermost VXLAN header;
pushing the outer IP header of the VXLAN tunnel, and popping the outer IP header of the VXLAN tunnel;
pushing the outer MAC header of the VXLAN tunnel, and popping the outer MAC header of the VXLAN tunnel.
The present disclosure further provides a cloud data center interconnection apparatus, including:
a BGP neighbor establishment module, configured to establish a BGP neighbor channel with a cloud data center to be communicated with, the cloud data center to be communicated with being a cloud data center with which the local cloud data center needs to communicate;
a MAC information obtaining module, configured to obtain MAC information of the cloud data center to be communicated with through the BGP neighbor channel;
a tunnel establishment module, configured to establish, according to the obtained MAC information, a tunnel between a gateway device of the local cloud data center and a gateway device of the cloud data center to be communicated with;
a forwarding table generation module, configured to learn the obtained MAC information, obtain port information of the local cloud data center used for communicating with the cloud data center to be communicated with, and integrate the MAC information and the port information to generate a forwarding table;
a forwarding table delivery module, configured to send the forwarding table to the gateway device of the local cloud data center, so that the gateway device communicates with the cloud data center to be communicated with over the tunnel according to the forwarding table.
In an embodiment of the present disclosure, the MAC information obtaining module obtains the MAC information of the cloud data center to be communicated with according to the Ethernet Virtual Private Network protocol.
In an embodiment of the present disclosure, the forwarding table generation module obtains, according to BGP neighbor information, the port information of the local cloud data center used for communicating with the cloud data center to be communicated with.
In an embodiment of the present disclosure, the MAC information obtaining module includes:
a first obtaining submodule, configured so that, when the cloud data center to be communicated with and the local cloud data center are in the same network segment, the obtained MAC information includes: the IP address, MAC address, VNI number and tunnel endpoint IP of the cloud data center to be communicated with;
a second obtaining submodule, configured so that, when the cloud data center to be communicated with and the local cloud data center are in different network segments, the obtained MAC information includes: the IP address, MAC address, VNI number and tunnel endpoint IP of the cloud data center to be communicated with, and the next-hop MAC address of the packet.
In an embodiment of the present disclosure, the tunnel establishment module includes at least one of the following four:
a VXLAN tunnel establishment submodule, configured to establish a VXLAN tunnel according to the obtained MAC information;
a GRE tunnel establishment submodule, configured to establish a GRE tunnel according to the obtained MAC information;
a PBB tunnel establishment submodule, configured to establish a PBB tunnel according to the obtained MAC information;
an MPLS tunnel establishment submodule, configured to establish an MPLS tunnel according to the obtained MAC information.
In an embodiment of the present disclosure, when devices inside the local cloud data center are of different specifications, the forwarding table delivery module includes:
a protocol extension module, configured to extend the OpenFlow protocol, the extended OpenFlow protocol being used to operate on tunnel encapsulation;
a delivery module, configured to send the forwarding table to the gateway device according to the extended OpenFlow protocol.
In an embodiment of the present disclosure, when the tunnel establishment module includes the VXLAN tunnel establishment submodule, the protocol extension module includes:
a first extension submodule, configured to insert a new VXLAN header before the IP header and pop the outermost VXLAN header;
a second extension submodule, configured to set a tunnel ID to set the VXLAN network identifier in the outermost VXLAN header;
a third extension submodule, configured to push the outer IP header of the VXLAN tunnel and pop the outer IP header of the VXLAN tunnel;
a fourth extension submodule, configured to push the outer MAC header of the VXLAN tunnel and pop the outer MAC header of the VXLAN tunnel.
The beneficial effects of the present disclosure are:
A BGP neighbor channel is established between the local cloud data center and the cloud data center to be communicated with; the MAC information of the peer is obtained and a tunnel is established; the obtained MAC information is learned to obtain the port information used by the local cloud data center when communicating with the peer; the MAC information and the port information are integrated to generate a forwarding table for communication between the gateway device of the local cloud data center and the gateway device of the peer; finally, the forwarding table is delivered to the local gateway device, so that the two gateway devices communicate according to the established tunnel and the obtained forwarding table. The cloud data center interconnection method proposed by the present disclosure not only achieves loose coupling or no coupling between devices of different cloud data centers, but also interconnects cloud data centers using devices of different specifications without any proprietary protocol, improving the generality of cloud data center interconnection.
Brief description of the drawings
FIG. 1 is a flowchart of the cloud data center interconnection method provided in Embodiment 1 of the present disclosure;
FIG. 2 is a schematic diagram of the cloud data center interconnection apparatus provided in Embodiment 2 of the present disclosure;
FIG. 3 is a schematic diagram of the MAC information obtaining module in FIG. 2;
FIG. 4 is a schematic diagram of the tunnel establishment module in FIG. 2;
FIG. 5 is a schematic diagram of the forwarding table delivery module in FIG. 2;
FIG. 6 is a schematic diagram of the protocol extension module in FIG. 5;
FIG. 7 is a flowchart of the cloud data center interconnection method provided in Embodiment 3 of the present disclosure;
FIG. 8 is a flowchart of the cloud data center interconnection method provided in Embodiment 4 of the present disclosure.
Detailed description
To make the advantages and details of the present disclosure clearer, the present disclosure is further described in detail below through specific embodiments with reference to the accompanying drawings.
The idea of the present disclosure is: obtain the MAC information of the cloud data center to be communicated with and establish a tunnel; learn the obtained MAC information and integrate it with BGP neighbor information to generate a forwarding table for communication between the gateway device of the local cloud data center and the gateway device of the cloud data center to be communicated with; finally, deliver the forwarding table to the local gateway device, so that the two gateway devices communicate according to the established tunnel and the obtained forwarding table. This achieves loose coupling or no coupling between devices of different cloud data centers and allows cloud data centers using devices of different specifications to interconnect.
Embodiment 1:
This embodiment provides a cloud data center interconnection method; refer to FIG. 1:
S101: establish a BGP neighbor channel with the cloud data center to be communicated with.
The cloud data center to be communicated with refers to the cloud data center that needs to be interconnected with the local cloud data center.
S102: obtain the MAC information of the cloud data center to be communicated with through the BGP neighbor channel.
In this embodiment, the MAC information of the cloud data center to be communicated with is obtained according to the EVPN (Ethernet Virtual Private Network) protocol, which is a standard protocol. EVPN's key advantages — integrated services, higher network efficiency, better design flexibility and stronger control — enable operators to meet emerging needs in their networks with a single VPN technology, for example: integrated L2 and L3 services, overlay techniques that simplify topology, tunneled services over an IP architecture, cloud and virtualization services, and data center interconnection.
Different cases need to be considered when obtaining the MAC information of the cloud data center to be communicated with. If it is in the same network segment as the local cloud data center, only Layer 2 interconnection is needed; the obtained MAC information then includes the IP address, MAC address, VNI number and tunnel endpoint IP of the cloud data center to be communicated with. When the two are in different network segments and Layer 3 interconnection is needed, the obtained MAC information generally includes the IP address, MAC address, VNI number, tunnel endpoint IP, and the next-hop MAC address of the packet. In a third case, the network segments of the two cloud data centers partially overlap, i.e., they are partially in the same segment; both Layer 2 and Layer 3 interconnection must then be established, and the obtained MAC information likewise includes the IP address, MAC address, VNI number, tunnel endpoint IP and next-hop MAC address of the packet.
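By way of illustration only (not part of the claimed method), the case distinction above can be modeled with a small record type; the class and field names are our own assumptions, and the values come from the tables in Embodiments 3 and 4:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LearnedMacInfo:
    """MAC information learned from the peer DC over the BGP/EVPN channel.
    next_hop_mac is only populated in the cross-segment (Layer 3) case."""
    ip: str
    mac: str
    vni: str
    vtep_ip: str
    next_hop_mac: Optional[str] = None

# Same-segment (Layer 2) case: four fields suffice (cf. Table 2).
l2 = LearnedMacInfo("192.168.2.203", "mac12", "vni22", "vtep-ip1")

# Cross-segment (Layer 3) case additionally carries the next-hop MAC (cf. Table 9).
l3 = LearnedMacInfo("192.168.1.203", "sysmac1", "vni322", "vtep-ip1",
                    next_hop_mac="sysmac1")
```

The partially overlapping case uses the same five-field shape as the Layer 3 case.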
S103: establish, according to the obtained MAC information, a tunnel between the gateway device of the local cloud data center and the gateway device of the cloud data center to be communicated with.
Since EVPN separates the control plane from the data plane — it extends the MP-BGP protocol on the control plane to implement EVPN, so that the data plane supports multiple tunnel types such as MPLS, PBB and VXLAN — there are several options when establishing a tunnel according to the obtained MAC information: a VXLAN tunnel, a GRE tunnel, a PBB tunnel or an MPLS tunnel. MPLS is relatively mature and has been standardized in RFCs, so it is the most widely used. Although VXLAN is still a draft, its support for a large number of tenants and its ease of maintenance may make it the mainstream trend; therefore, the tunnel established in this embodiment is a VXLAN tunnel.
S104: learn the obtained MAC information, obtain the port information of the local cloud data center used for communicating with the cloud data center to be communicated with, and integrate the MAC information and the port information to generate a forwarding table.
Generating the forwarding table requires not only the obtained MAC information but also the port information of the local cloud data center, obtained from the BGP neighbor information, so that when the two cloud data centers communicate it is known through which local port packets should be received and sent.
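As a minimal sketch of this integration step (the function and key names are illustrative, not the patent's), the learned MAC entries can be joined with the local port derived from BGP neighbor information, reproducing the shape of Table 3 in Embodiment 3:

```python
def generate_forwarding_table(learned_macs, neighbor_ports):
    """Integrate MAC info learned from the peer with the local port taken
    from BGP neighbor information. neighbor_ports maps a peer VTEP IP to
    the local physical port facing that peer."""
    table = []
    for entry in learned_macs:
        port = neighbor_ports.get(entry["vtep_ip"])
        if port is not None:  # only peers we actually have a port toward
            table.append(dict(entry, port=port))
    return table

# Learned from DC1 (cf. Table 2), plus the local port from BGP neighbor info.
learned = [{"ip": "192.168.2.203", "mac": "mac12",
            "vni": "vni22", "vtep_ip": "vtep-ip1"}]
table = generate_forwarding_table(learned, {"vtep-ip1": "port21"})
# table[0] now also carries port "port21", matching the shape of Table 3.
```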
Those skilled in the art should understand that there is no strict ordering between establishing the tunnel according to the obtained MAC information and learning the obtained MAC information, obtaining the local port information and generating the forwarding table; the tunnel may be established first, or the forwarding table generated first.
S105: send the forwarding table to the gateway device of the local cloud data center, so that the gateway device communicates with the cloud data center to be communicated with over the tunnel according to the forwarding table.
When sending the generated forwarding table to the gateway device of the local cloud data center, it must be considered whether the devices inside the local cloud data center are of the same specification. When the controller and switches in the local cloud data center are of the same specification, the table can be sent directly in the existing manner. When they are of different specifications, the OpenFlow protocol can first be extended so that the extended protocol can operate on tunnel encapsulation, and the forwarding table is then sent to the gateway device according to the extended OpenFlow protocol. OpenFlow is chosen for extension because it is itself an extensible protocol. In this embodiment, to forward the table between devices of different specifications, and since the established tunnel is a VXLAN tunnel, the OpenFlow protocol is extended as follows:
insert a new VXLAN header before the IP header, and pop the outermost VXLAN header; set the tunnel ID, i.e., the VXLAN network identifier in the outermost VXLAN header; push the outer IP header of the VXLAN tunnel, and pop the outer IP header of the VXLAN tunnel; push the outer MAC header of the VXLAN tunnel, and pop the outer MAC header of the VXLAN tunnel.
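The push/pop pairs above can be sketched as pure functions on a nested-dict packet model (this models only the logical header nesting, not the real byte layouts; all names and values are illustrative):

```python
def push_vxlan(inner_frame, vni, vtep_smac, vtep_dmac, vtep_sip, vtep_dip):
    """Apply the extended push actions in order: the VXLAN header with its
    tunnel ID (VNI), then the outer IP header, then the outer MAC header
    of the VXLAN tunnel, wrapping the original frame."""
    return {"outer_mac": {"src": vtep_smac, "dst": vtep_dmac},
            "outer_ip": {"src": vtep_sip, "dst": vtep_dip},
            "vxlan": {"vni": vni},
            "inner": inner_frame}

def pop_vxlan(frame):
    """Reverse: pop the outer MAC header, the outer IP header and the
    outermost VXLAN header, recovering the original frame."""
    return frame["inner"]

pkt = {"eth_dst": "mac12", "payload": b"hello"}
encap = push_vxlan(pkt, "vni22", "gw2-mac", "gw1-mac", "vtep-ip2", "vtep-ip1")
assert pop_vxlan(encap) == pkt  # encapsulation round-trips
```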
The cloud data center to be communicated with also needs to obtain the MAC information of the local cloud data center, learn the obtained MAC information, generate its own forwarding table for communicating with the local cloud data center in combination with BGP neighbor information, and send the table to its gateway device. Since the devices inside the peer may come from the same vendor or from different vendors, the interfaces of its internal devices may or may not match; it follows that, in this embodiment, the process of delivering the forwarding table inside the peer does not need to be identical to that of the local cloud data center.
After interconnection is established, the two DCs can communicate normally: a tenant in the local cloud data center sends packets to the gateway device, which, according to the forwarding table, transmits them over the tunnel to the peer — for example, over a VXLAN tunnel — to the gateway device of the cloud data center to be communicated with, which then delivers the packets to the corresponding tenant.
Embodiment 2:
This embodiment provides a cloud data center interconnection apparatus. As shown in FIG. 2, the cloud data center interconnection apparatus 20 includes a BGP neighbor establishment module 201, a MAC information obtaining module 202, a tunnel establishment module 203, a forwarding table generation module 204 and a forwarding table delivery module 205.
The BGP neighbor establishment module 201 is configured to establish a BGP neighbor channel with the cloud data center to be communicated with, i.e., the cloud data center that needs to be interconnected with the local cloud data center.
The MAC information obtaining module 202 is configured to obtain the MAC information of the cloud data center to be communicated with through the BGP neighbor channel.
In this embodiment, the MAC information obtaining module 202 obtains the MAC information according to the EVPN (Ethernet Virtual Private Network) protocol, which is a standard protocol. EVPN's key advantages — integrated services, higher network efficiency, better design flexibility and stronger control — enable operators to meet emerging needs in their networks with a single VPN technology, for example: integrated L2 and L3 services, overlay techniques that simplify topology, tunneled services over an IP architecture, cloud and virtualization services, and data center interconnection.
Referring to FIG. 3, the MAC information obtaining module 202 includes a first obtaining submodule 2021 and a second obtaining submodule 2022, because different cases must be considered when obtaining the MAC information. If the cloud data center to be communicated with is in the same network segment as the local cloud data center, only Layer 2 interconnection is needed; the first obtaining submodule 2021 then obtains MAC information including the peer's IP address, MAC address, VNI number and tunnel endpoint IP. When the two are in different network segments and Layer 3 interconnection is needed, the second obtaining submodule 2022 obtains the MAC information, which generally includes the peer's IP address, MAC address, VNI number, tunnel endpoint IP and the next-hop MAC address of the packet. In the third case, where the network segments partially overlap so that both Layer 2 and Layer 3 interconnection must be established, the second obtaining submodule 2022 likewise obtains the peer's IP address, MAC address, VNI number, tunnel endpoint IP and next-hop MAC address.
The tunnel establishment module 203 establishes a tunnel according to the obtained MAC information. Since EVPN separates the control plane from the data plane — extending the MP-BGP protocol on the control plane so that the data plane supports multiple tunnel types such as MPLS, PBB and VXLAN — the tunnel establishment module 203 in this embodiment may include at least one of the following four; refer to FIG. 4:
a VXLAN tunnel establishment submodule 2031, configured to establish a VXLAN tunnel according to the obtained MAC information; a GRE tunnel establishment submodule 2032, configured to establish a GRE tunnel; a PBB tunnel establishment submodule 2033, configured to establish a PBB tunnel; and an MPLS tunnel establishment submodule 2034, configured to establish an MPLS tunnel. MPLS is relatively mature and has been standardized in RFCs, so the MPLS tunnel establishment submodule 2034 is the most widely used. Although VXLAN is still a draft, its support for a large number of tenants and its ease of maintenance may make it the mainstream trend; in this embodiment, therefore, the tunnel establishment module 203 includes the VXLAN tunnel establishment submodule 2031, configured to establish VXLAN-type tunnels.
The forwarding table generation module 204 is configured to learn the obtained MAC information, obtain BGP neighbor information, and integrate them to generate the forwarding table.
The forwarding table generation module 204 needs not only the obtained MAC information but also the BGP neighbor information, from which the port information of the local cloud data center is obtained, so that when the two cloud data centers communicate it is known through which local port packets should be received and sent.
Those skilled in the art should understand that there is no strict ordering between the tunnel establishment module 203 establishing the tunnel according to the obtained MAC information and the forwarding table generation module 204 learning the MAC information, obtaining the BGP neighbor information and generating the forwarding table; either may happen first.
The forwarding table delivery module 205 sends the forwarding table to the gateway device of the local cloud data center, so that the local gateway device communicates with the cloud data center to be communicated with according to the forwarding table and the tunnel.
When the forwarding table delivery module 205 sends the generated forwarding table to the gateway device of the local cloud data center, it must be considered whether the devices inside the local cloud data center are of the same specification. When the controller and switches in the local cloud data center are of the same specification, the forwarding table delivery module 205 can send the table directly in the existing manner. When they are of different specifications, the OpenFlow protocol can first be extended so that the extended protocol can operate on tunnel encapsulation, and the forwarding table is then sent to the gateway device according to the extended OpenFlow protocol. Referring to FIG. 5, the forwarding table delivery module 205 includes a protocol extension module 2051 for extending the OpenFlow protocol, and a delivery module 2052 for sending the forwarding table to the gateway device according to the extended protocol. The protocol extension module 2051 chooses to extend OpenFlow because OpenFlow is itself an extensible protocol. In this embodiment, to forward the table between devices of different specifications, the protocol extension module 2051 includes a first extension submodule 20511, a second extension submodule 20512, a third extension submodule 20513 and a fourth extension submodule 20514, as shown in FIG. 6:
The first extension submodule 20511 inserts a new VXLAN header before the IP header and pops the outermost VXLAN header; the second extension submodule 20512 sets the tunnel ID, i.e., the VXLAN network identifier in the outermost VXLAN header; the third extension submodule 20513 pushes the outer IP header of the VXLAN tunnel and pops the outer IP header of the VXLAN tunnel; the fourth extension submodule 20514 pushes the outer MAC header of the VXLAN tunnel and pops the outer MAC header of the VXLAN tunnel.
The cloud data center to be communicated with also needs to obtain and learn the MAC information of the local cloud data center, generate its own forwarding table in combination with BGP neighbor information, and send it to its gateway device. Since its internal devices may come from the same vendor or from different vendors, their interfaces may or may not match; the process of delivering the forwarding table inside the peer therefore does not need to be identical to that of the local cloud data center.
After interconnection is established, the two DCs can communicate normally: a tenant in the local cloud data center sends packets to the gateway device, which, according to the forwarding table, transmits them over the tunnel — for example, a VXLAN tunnel — to the gateway device of the cloud data center to be communicated with, which then delivers the packets to the corresponding tenant.
Embodiment 3:
This embodiment further describes the interconnection of two cloud data centers in the same network segment; refer to FIG. 7:
The user sets up a network environment in the same network segment for the two cloud data centers to be interconnected:
The user creates the 192.168.2.0/24 segment and divides it into two resource pools, located in data centers DC2 and DC1 respectively, e.g., 192.168.2.1~192.168.2.127 in DC2 and 192.168.2.128~192.168.2.254 in DC1. The orchestrator notifies the controllers of DC2 and DC1 of the address ranges of the resource pools.
The user creates two virtual machines, Host21 and Host11, with IP addresses 192.168.2.2 and 192.168.2.203, located in DC2 and DC1 respectively. The two virtual machines in DC2 and DC1 therefore have a Layer 2 interconnection requirement.
Since the cloud data center interconnection method and apparatus provided by the present disclosure achieve loose coupling or no coupling between devices of different DCs, this embodiment takes only DC2 as an example; those skilled in the art should understand that the process performed in DC1 is substantially similar:
In DC2, controller 2 configures, on gateway GW2, the physical port information port23 used for interconnection with GW1 and the vtep-ip address (tunnel endpoint IP address) of gateway GW2.
The orchestrator notifies controller 2 to create a Layer 2 interconnection virtual port on gateway GW2:
the RESTful interface provided by the controller carries the following information: the global tenant ID (tenant-id), the RD corresponding to the tenant, and segment information (suid or subnet/mask);
controller 2 assigns a VNI number, i.e., a VXLAN tunnel number, to the subnet segment;
controller 2 creates a Layer 2 virtual port of subtunnel type on port21 of VXLAN GW2, with the port type marked as an external interconnection port.
S701: DC2 establishes a BGP neighbor relationship with DC1, negotiating EVPN support.
S702: controller 2 of DC2 obtains the MAC information of DC1 and sends its own MAC information to DC1. The MAC information sent by controller 2 of DC2 is shown in Table 1:
Table 1
IP             MAC    VNI    VTEP-IP
192.168.2.2    mac21  vni22  vtep-ip2
The MAC information received by controller 2 of DC2 from DC1 is shown in Table 2:
Table 2
IP             MAC    VNI    VTEP-IP
192.168.2.203  mac12  vni22  vtep-ip1
S703: controller 2 of DC2 integrates the local port information from the obtained BGP neighbor information with the learned MAC information of DC1; the integrated information is shown in Table 3:
Table 3
IP             MAC    VNI    VTEP-IP   PORT
192.168.2.203  mac12  vni22  vtep-ip1  port21
When Host11 and Host21 communicate, Host21 requests the MAC information of Host11. Switch vSwitch1 receives the ARP request and sends it to controller 2; controller 2 finds Host11's MAC information from the learned MAC information, and controller 2 answers Host21's ARP request.
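This controller-side ARP handling can be sketched as a simple lookup over the learned MAC information (the helper name is illustrative; real controllers would build an ARP reply packet rather than return a string):

```python
def proxy_arp(learned_macs, target_ip):
    """Answer an ARP request from the learned MAC information instead of
    flooding it across the DCI tunnel; None means 'unknown, fall back to
    normal ARP handling'."""
    for entry in learned_macs:
        if entry["ip"] == target_ip:
            return entry["mac"]
    return None

# Learned from DC1 (cf. Table 2): Host11's binding.
learned = [{"ip": "192.168.2.203", "mac": "mac12"}]
answer = proxy_arp(learned, "192.168.2.203")  # controller replies with mac12
```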
S704: controller 2 of DC2 generates the forwarding table according to the BGP neighbor information and the MAC information of DC1.
In this embodiment, since the forwarding table is delivered to gateway GW2 through the OpenFlow protocol, the generated forwarding table is essentially an OpenFlow flow table.
Controller 2 delivers a flow table to gateway GW2 via switch vSwitch1: packets whose destination MAC matches mac12 are Layer 2 switched and steered to GW2; on GW2 the outbound interface is port21, with VXLAN encapsulation.
S705: controller 2 of DC2 sends the OpenFlow flow table to gateway GW2.
The flow table is delivered by extending the OpenFlow protocol; in this embodiment, the extension rules of the OpenFlow protocol are as follows:
Extended optional action Push-Tag/Pop-Tag: Push VXLAN header (insert a new VXLAN header before the IP header) and Pop VXLAN header (pop the outermost VXLAN header).
Extended optional action Set-Field: Set Tunnel ID, used to set the VXLAN network identifier in the outermost VXLAN header.
Extended optional action Push-Tag/Pop-Tag: Push VTEP-IP header (push the outer IP header of the VXLAN tunnel) and Pop VTEP-IP header (pop the outer IP header of the VXLAN tunnel).
Extended optional action Push-Tag/Pop-Tag: Push VTEP-MAC header (push the outer MAC header of the VXLAN tunnel) and Pop VTEP-MAC header (pop the outer MAC header of the VXLAN tunnel).
Embodiment 4:
This embodiment further describes the interconnection of two cloud data centers in different network segments; refer to FIG. 8:
The user sets up a network environment in different network segments for the two cloud data centers to be interconnected.
The user creates the 192.168.1.0/24 segment; part of its addresses are in the resource pool of DC1, with the address range 192.168.2.128~192.168.2.254. The user then creates the 192.168.3.0/24 segment, whose addresses are all in the resource pool of DC2. The following describes the Layer 3 interconnection between Host11 with IP address 192.168.1.203 in DC1 and Host21 with IP address 192.168.3.2 in DC2. As in Embodiment 3, only DC2 is taken as an example; those skilled in the art should understand that the process performed in DC1 is substantially similar:
The two virtual machines in DC2 and DC1 have a Layer 3 interconnection requirement. Controller 2 collects the physical port information (port21) used on gateway GW2 for interconnection with GW1 and the vtep-ip address of gateway GW2; controller 1 acts similarly.
The orchestrator notifies controller 2 to create a Layer 3 interconnection virtual port on gateway GW2:
the RESTful interface provided by controller 2 carries the following information: the global tenant ID (tenant-id), the RD corresponding to the tenant, and the Layer 3 interconnection interface IP.
Controller 2 assigns a VNI number, i.e., a VXLAN tunnel number, to the user;
controller 2 creates a Layer 3 virtual port of tunnel type on port21 of VXLAN GW2, with IP address l3ip2.
S801: DC2 establishes a BGP neighbor relationship with DC1, negotiating EVPN support.
S802: controller 2 of DC2 obtains the MAC information of DC1 and sends its own MAC information to DC1. The MAC information sent by controller 2 of DC2 is shown in Table 4:
Table 4
IP             MAC      VNI     VTEP-IP
192.168.2.2    mac21    vni22   vtep-ip2
l3ip2          sysmac2  vni322  vtep-ip2
The routing information sent by controller 2 is shown in Table 5:
Table 5
PREFIX          GWIP   NEXTHOP
192.168.3.0/24  l3ip2  vtep-ip2
The MAC information received by controller 2 of DC2 from DC1 is shown in Table 6:
Table 6
IP             MAC      VNI     VTEP-IP
192.168.2.203  mac12    vni22   vtep-ip1
l3ip1          sysmac1  vni322  vtep-ip1
The routing information received by controller 2 of DC2 from DC1 is shown in Table 7:
Table 7
PREFIX            GWIP   VTEP-IP
192.168.1.203/32  l3ip1  vtep-ip1
S803: controller 2 of DC2 integrates the obtained BGP neighbor information with the learned MAC information of DC1; the integrated information is shown in Table 8:
Table 8
IP     MAC      VNI     VTEP-IP   PORT
l3ip1  sysmac1  vni322  vtep-ip1  port12
S804: controller 2 of DC2 integrates the obtained BGP neighbor information with the learned routing information of DC1; the integrated information is shown in Table 9:
Table 9
IP                NEXTHOP-MAC  VNI     VTEP-IP   PORT
192.168.1.203/32  sysmac1      vni322  vtep-ip1  port21
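By way of illustration only, looking up a destination against Table 9-style integrated routes amounts to a longest-prefix match; the matched entry supplies everything needed for encapsulation (next-hop MAC, VNI, VTEP IP and outbound port). The helper name is ours:

```python
import ipaddress

def lookup_route(routes, dst_ip):
    """Longest-prefix match over integrated routing entries (Table 9 shape);
    returns the entry whose prefix best covers dst_ip, or None."""
    best = None
    addr = ipaddress.ip_address(dst_ip)
    for entry in routes:
        net = ipaddress.ip_network(entry["prefix"])
        if addr in net and (best is None or
                            net.prefixlen > ipaddress.ip_network(best["prefix"]).prefixlen):
            best = entry
    return best

routes = [{"prefix": "192.168.1.203/32", "nexthop_mac": "sysmac1",
           "vni": "vni322", "vtep_ip": "vtep-ip1", "port": "port21"}]
hit = lookup_route(routes, "192.168.1.203")
# hit carries the next-hop MAC, VNI, VTEP IP and outbound port for encapsulation
```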
When Host12 and Host22 communicate, Host22 requests the gateway MAC information. Switch vSwitch1 receives the ARP request and sends it to controller 2; controller 2 finds from the learned MAC information that the gateway MAC is sysmac2, and controller 2 answers the ARP request.
S805: controller 2 of DC2 generates the forwarding table according to the BGP neighbor information and the MAC information of DC1.
In this embodiment, since the forwarding table is delivered to gateway GW2 through the OpenFlow protocol, the generated forwarding table is essentially an OpenFlow flow table.
Controller 2 delivers a flow table to gateway GW2 via switch vSwitch1: packets whose destination MAC matches mac12 are Layer 2 switched and steered to GW2; on GW2 the outbound interface is port21, with VXLAN encapsulation.
S806: controller 2 of DC2 sends the OpenFlow flow table to gateway GW2.
The flow table is delivered by extending the OpenFlow protocol; the extension rules of the OpenFlow protocol are as shown in Embodiment 3.
Obviously, those skilled in the art should understand that the modules or steps of the present disclosure described above may be implemented by a general-purpose computing device; they may be concentrated on a single computing device or distributed over a network composed of multiple computing devices. Optionally, they may be implemented by program code executable by a computing device, so that they can be stored in a storage medium (ROM/RAM, magnetic disk, optical disc) and executed by a computing device; in some cases, the steps shown or described may be performed in an order different from that herein, or they may be fabricated into individual integrated circuit modules, or multiple of the modules or steps may be implemented as a single integrated circuit module. The present disclosure is therefore not limited to any specific combination of hardware and software.
The above is a further detailed description of the present disclosure in combination with specific embodiments, and the specific implementation of the present disclosure shall not be deemed limited to these descriptions. For those of ordinary skill in the art to which the present disclosure belongs, several simple deductions or substitutions may be made without departing from the concept of the present disclosure, all of which shall be regarded as falling within the protection scope of the present disclosure.
Industrial applicability
The present disclosure is applicable to the field of network communications. It achieves loose coupling or no coupling between devices of different cloud data centers, interconnects cloud data centers using devices of different specifications without using any proprietary protocol in establishing the interconnection, and improves the generality of cloud data center interconnection.

Claims (15)

  1. A cloud data center interconnection method, comprising:
    establishing a BGP neighbor channel with a cloud data center to be communicated with, wherein the cloud data center to be communicated with is a cloud data center with which a local cloud data center needs to communicate;
    obtaining MAC information of the cloud data center to be communicated with through the BGP neighbor channel;
    establishing, according to the obtained MAC information, a tunnel between a gateway device of the local cloud data center and a gateway device of the cloud data center to be communicated with;
    learning the obtained MAC information and obtaining port information of the local cloud data center used for communicating with the cloud data center to be communicated with, and integrating the MAC information and the port information to generate a forwarding table; and
    sending the forwarding table to the gateway device of the local cloud data center, so that the gateway device communicates with the cloud data center to be communicated with over the tunnel according to the forwarding table.
  2. The cloud data center interconnection method of claim 1, wherein obtaining the MAC information of the cloud data center to be communicated with through the BGP neighbor channel comprises: obtaining the MAC information of the cloud data center to be communicated with according to the Ethernet Virtual Private Network protocol.
  3. The cloud data center interconnection method of claim 1, wherein obtaining the port information of the local cloud data center used for communicating with the cloud data center to be communicated with comprises: obtaining, according to BGP neighbor information, the port information of the local cloud data center used for communicating with the cloud data center to be communicated with.
  4. The cloud data center interconnection method of claim 1, wherein
    when the cloud data center to be communicated with and the local cloud data center are in a same network segment, the obtained MAC information comprises: an IP address, a MAC address, a VNI number and a tunnel endpoint IP of the cloud data center to be communicated with; and
    when the cloud data center to be communicated with and the local cloud data center are in different network segments, the obtained MAC information comprises: the IP address, the MAC address, the VNI number and the tunnel endpoint IP of the cloud data center to be communicated with, and a next-hop MAC address of the packet.
  5. The cloud data center interconnection method of any one of claims 1-4, wherein establishing the tunnel according to the obtained MAC information comprises any one of the following four:
    establishing a VXLAN tunnel according to the obtained MAC information;
    establishing a GRE tunnel according to the obtained MAC information;
    establishing a PBB tunnel according to the obtained MAC information; and
    establishing an MPLS tunnel according to the obtained MAC information.
  6. The cloud data center interconnection method of any one of claims 1-4, wherein when devices inside the local cloud data center are of different specifications, sending the forwarding table to the gateway device comprises:
    extending the OpenFlow protocol, the extended OpenFlow protocol being used to operate on tunnel encapsulation; and
    sending the forwarding table to the gateway device according to the extended OpenFlow protocol.
  7. The cloud data center interconnection method of claim 6, wherein when the tunnel between the gateway device of the local cloud data center and the gateway device of the cloud data center to be communicated with is a VXLAN tunnel, extending the OpenFlow protocol comprises:
    inserting a new VXLAN header before the IP header, and popping the outermost VXLAN header;
    setting a tunnel ID to set the VXLAN network identifier in the outermost VXLAN header;
    pushing the outer IP header of the VXLAN tunnel, and popping the outer IP header of the VXLAN tunnel; and
    pushing the outer MAC header of the VXLAN tunnel, and popping the outer MAC header of the VXLAN tunnel.
  8. A cloud data center interconnection apparatus, comprising:
    a BGP neighbor establishment module, configured to establish a BGP neighbor channel with a cloud data center to be communicated with, wherein the cloud data center to be communicated with is a cloud data center with which a local cloud data center needs to communicate;
    a MAC information obtaining module, configured to obtain MAC information of the cloud data center to be communicated with through the BGP neighbor channel;
    a tunnel establishment module, configured to establish, according to the obtained MAC information, a tunnel between a gateway device of the local cloud data center and a gateway device of the cloud data center to be communicated with;
    a forwarding table generation module, configured to learn the obtained MAC information, obtain port information of the local cloud data center used for communicating with the cloud data center to be communicated with, and integrate the MAC information and the port information to generate a forwarding table; and
    a forwarding table delivery module, configured to send the forwarding table to the gateway device of the local cloud data center, so that the gateway device communicates with the cloud data center to be communicated with over the tunnel according to the forwarding table.
  9. The cloud data center interconnection apparatus of claim 8, wherein the MAC information obtaining module obtains the MAC information of the cloud data center to be communicated with according to the Ethernet Virtual Private Network protocol.
  10. The cloud data center interconnection apparatus of claim 8, wherein the forwarding table generation module obtains, according to BGP neighbor information, the port information of the local cloud data center used for communicating with the cloud data center to be communicated with.
  11. The cloud data center interconnection apparatus of claim 8, wherein the MAC information obtaining module comprises:
    a first obtaining submodule, configured so that, when the cloud data center to be communicated with and the local cloud data center are in a same network segment, the obtained MAC information comprises: an IP address, a MAC address, a VNI number and a tunnel endpoint IP of the cloud data center to be communicated with; and
    a second obtaining submodule, configured so that, when the cloud data center to be communicated with and the local cloud data center are in different network segments, the obtained MAC information comprises: the IP address, the MAC address, the VNI number and the tunnel endpoint IP of the cloud data center to be communicated with, and a next-hop MAC address of the packet.
  12. The cloud data center interconnection apparatus of any one of claims 8-11, wherein the tunnel establishment module comprises at least one of the following four:
    a VXLAN tunnel establishment submodule, configured to establish a VXLAN tunnel according to the obtained MAC information;
    a GRE tunnel establishment submodule, configured to establish a GRE tunnel according to the obtained MAC information;
    a PBB tunnel establishment submodule, configured to establish a PBB tunnel according to the obtained MAC information; and
    an MPLS tunnel establishment submodule, configured to establish an MPLS tunnel according to the obtained MAC information.
  13. The cloud data center interconnection apparatus of any one of claims 8-11, wherein when devices inside the local cloud data center are of different specifications, the forwarding table delivery module comprises:
    a protocol extension module, configured to extend the OpenFlow protocol, the extended OpenFlow protocol being used to operate on tunnel encapsulation; and
    a delivery module, configured to send the forwarding table to the gateway device according to the extended OpenFlow protocol.
  14. The cloud data center interconnection apparatus of claim 13, wherein when the tunnel establishment module comprises the VXLAN tunnel establishment submodule, the protocol extension module comprises:
    a first extension submodule, configured to insert a new VXLAN header before the IP header and pop the outermost VXLAN header;
    a second extension submodule, configured to set a tunnel ID to set the VXLAN network identifier in the outermost VXLAN header;
    a third extension submodule, configured to push the outer IP header of the VXLAN tunnel and pop the outer IP header of the VXLAN tunnel; and
    a fourth extension submodule, configured to push the outer MAC header of the VXLAN tunnel and pop the outer MAC header of the VXLAN tunnel.
  15. A computer storage medium storing executable instructions, wherein the executable instructions are used to perform the method of any one of claims 1 to 7.
PCT/CN2017/075871 2016-03-16 2017-03-07 Cloud data center interconnection method and apparatus WO2017157206A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610150812.3 2016-03-16
CN201610150812.3A CN107204907B (zh) 2016-03-16 2016-03-16 Cloud data center interconnection method and apparatus

Publications (1)

Publication Number Publication Date
WO2017157206A1 true WO2017157206A1 (zh) 2017-09-21

Family

ID=59850081

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/075871 WO2017157206A1 (zh) 2016-03-16 2017-03-07 Cloud data center interconnection method and apparatus

Country Status (2)

Country Link
CN (1) CN107204907B (zh)
WO (1) WO2017157206A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112671629A (zh) * 2020-09-24 2021-04-16 紫光云技术有限公司 Method for implementing private line access in a cloud network

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107948041B (zh) * 2017-11-22 2020-12-18 锐捷网络股份有限公司 Method and device for constructing a VXLAN centralized multi-active gateway
CN110798405A (zh) * 2018-08-01 2020-02-14 中国电信股份有限公司 Data tunnel switching method, apparatus and system
CN111917646B (zh) * 2019-05-10 2023-04-07 上海叠念信息科技有限公司 Method and system for implementing optimized multi-data-center interconnection based on SD-WAN
CN110868474B (zh) * 2019-11-20 2022-11-04 无锡华云数据技术服务有限公司 Internetworking element and network interworking method, system, device and computer medium
CN112838985B (zh) * 2019-11-25 2024-04-02 中兴通讯股份有限公司 Heterogeneous network communication method, system and controller
CN111343070B (zh) * 2020-03-03 2021-07-09 深圳市吉祥腾达科技有限公司 SD-WAN network communication control method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102263704A (zh) * 2011-09-01 2011-11-30 杭州华三通信技术有限公司 Topology construction method and device supporting Layer 2 interconnection of data centers
CN103416025A (zh) * 2010-12-28 2013-11-27 思杰系统有限公司 Systems and methods for adding VLAN tags via a cloud bridge
CN104378297A (zh) * 2013-08-15 2015-02-25 杭州华三通信技术有限公司 Packet forwarding method and device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102739501B (zh) * 2011-04-01 2017-12-12 中兴通讯股份有限公司 Packet forwarding method and system in Layer 2/Layer 3 virtual private networks
CN102316030B (zh) * 2011-09-01 2014-04-09 杭州华三通信技术有限公司 Method and device for implementing Layer 2 interconnection of data centers
JP5797849B2 (ja) * 2011-11-03 2015-10-21 華為技術有限公司Huawei Technologies Co.,Ltd. Border Gateway Protocol extensions for a host to join/leave a virtual private network
CN102710509B (zh) * 2012-05-18 2015-04-15 杭州华三通信技术有限公司 Data center automatic configuration method and device
US9325636B2 (en) * 2013-06-14 2016-04-26 Cisco Technology, Inc. Scaling interconnected IP fabric data centers
US9509603B2 (en) * 2014-03-31 2016-11-29 Arista Networks, Inc. System and method for route health injection using virtual tunnel endpoints


Also Published As

Publication number Publication date
CN107204907B (zh) 2021-03-26
CN107204907A (zh) 2017-09-26


Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17765748

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17765748

Country of ref document: EP

Kind code of ref document: A1