CN109660442B - Method and device for multicast replication in Overlay network


Info

Publication number
CN109660442B
CN109660442B
Authority
CN
China
Prior art keywords
node
root node
subnet
multicast
site
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811591207.5A
Other languages
Chinese (zh)
Other versions
CN109660442A (en
Inventor
纪阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dt Dream Technology Co Ltd
Original Assignee
Hangzhou Dt Dream Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dt Dream Technology Co Ltd filed Critical Hangzhou Dt Dream Technology Co Ltd
Priority to CN201811591207.5A priority Critical patent/CN109660442B/en
Publication of CN109660442A publication Critical patent/CN109660442A/en
Application granted granted Critical
Publication of CN109660442B publication Critical patent/CN109660442B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The method first determines the type of each node, where the node types include a root node and a non-root node, and a root node includes at least one of the following: a subnet root node, a site root node, and a global root node. According to the node types and the obtained network topology, IP addresses are sent to each node, and each node is notified to establish direct connection tunnels according to the received IP addresses, so that after receiving a multicast packet each node forwards it through the direct connection tunnels. The method and the device can reduce the replication of multicast traffic in the Overlay network, save the CPU resources of head-end devices, and save network bandwidth inside the data center.

Description

Method and device for multicast replication in Overlay network
Technical Field
The present application relates to Overlay network technologies, and in particular, to a method and an apparatus for multicast replication in an Overlay network.
Background
In a cloud computing data center, under an Overlay networking model based on VXLAN (Virtual eXtensible Local Area Network) technology, the traffic that needs multicast forwarding is mainly layer-2 broadcast traffic such as ARP and DHCP packets, and each virtual machine or non-virtualized physical host in the same layer-2 domain of the Overlay network may be both a multicast source and a multicast member. In current cloud computing data centers, the network usually does not deploy the multicast forwarding function of the network devices, and point-to-multipoint forwarding is usually achieved by having a switch perform head-end replication of multicast packets.
When the VXLAN network is large in scope, and especially when multiple data centers exist in the Overlay network, the switch responsible for head-end replication of multicast packets needs to perform a large amount of replication, which results in excessive CPU occupation, wasted resources, and wasted network bandwidth across sites (the locations where devices are deployed) and between subnets inside the data center.
Disclosure of Invention
The application provides a method and a device for multicast replication in an Overlay network, which can reduce replication of multicast traffic in the Overlay network, save CPU resources of head-end equipment, and save network bandwidth inside a data center.
According to a first aspect of the embodiments of the present application, there is provided a method for multicast replication in an Overlay network, which operates on an SDN controller, the method including the steps of:
determining a type of a node, the type of the node comprising a root node and a non-root node, the root node comprising at least one of: a subnet root node, a site root node, and a global root node;
and according to the type of the node and the obtained network topology, sending IP addresses to each node and notifying each node to establish a direct connection tunnel according to the received IP address, so that each node forwards a multicast packet through the direct connection tunnel after receiving the multicast packet.
According to a second aspect of the embodiments of the present application, there is provided an apparatus for multicast replication in an Overlay network, located on an SDN controller, including:
a role decision module, configured to determine types of nodes, where the types of nodes include a root node and a non-root node, and the root node includes at least one of: a subnet root node, a site root node, and a global root node;
and a communication module, configured to send IP addresses to each node according to the type of the node and the obtained network topology, and to notify each node to establish a direct connection tunnel according to the received IP address, so that each node forwards a multicast packet through the direct connection tunnel after receiving the multicast packet.
The invention uses the SDN controller to centrally control the switches in the network: the type of each node is set on the SDN controller, and tunnels are established between nodes of different levels according to their types, thereby building a tree structure containing multi-level nodes in the virtual network. Multicast packets are replicated and forwarded level by level through the nodes of the different levels, which reduces the replication of multicast traffic in the Overlay network, saves the CPU resources of head-end devices, and saves network bandwidth inside the data center.
Drawings
FIG. 1 is a typical architecture diagram of an Overlay network in an embodiment of the present application;
fig. 2 is a flowchart of a multicast replication method in an Overlay network according to an embodiment of the present application;
figs. 2a-2d are path diagrams of multicast packets transmitted by the switches in the network architecture shown in fig. 1;
fig. 2e is a network topology diagram after a tunnel is generated under the network architecture shown in fig. 1;
FIG. 3a is an architecture diagram of an Overlay network in an example of the application of the present application;
fig. 3b is a flowchart of a multicast replication method in an Overlay network in an application example of the present application;
fig. 4 is a hardware architecture diagram of a device for multicast replication in an Overlay network according to an embodiment of the present application;
fig. 5 is a software logic block diagram of an apparatus for multicast replication in an Overlay network in an application example of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to a determination", depending on the context.
The Overlay network may be implemented based on VXLAN technology. VXLAN is a technology that encapsulates layer-2 packets with a layer-3 protocol, allowing layer-2 network packets to be extended across a layer-3 network.
VXLAN can be applied inside a data center, so that virtual machines can migrate within an interconnected layer-3 network range without changing their IP addresses and MAC addresses, ensuring continuity of services. VXLAN uses a 24-bit network identifier, so a user can create 16M mutually isolated virtual networks, breaking the limit of 4K isolated networks that VLAN technology can express and providing sufficient virtual network partition resources in a large-scale multi-tenant cloud environment.
VXLAN isolates the virtual network from the physical network by placing VTEPs (VXLAN Tunnel End Points) at the edge of the physical network. Tunnels are established between the VTEPs, and data frames of the virtual network are transmitted over the physical network; the physical network is unaware of the virtual network.
Fig. 1 is a block diagram of a typical VXLAN network to which the present application is applicable. Two VXLAN networks are included in the figure, each VXLAN network representing a data center site. Each VXLAN network typically includes physical servers 101, a gateway 102, and a router 103 connected in sequence.
The virtual machines (virtual machines 1-16) and switches (switches 1-8) are deployed on the physical servers 101. A switch and the gateway 102 can act as VTEP devices of the VXLAN network; by establishing tunnels, packets of a virtual machine in the virtual network are sent to other virtual machines over the physical network.
As shown in fig. 1, switch 1 and switch 2 are located on the same physical server 101, and both switches have the same network segment and different IP addresses. In the present application, switches with the same network segment and different addresses in the same site are referred to as switches in the same subnet.
In the figure, the switches in subnet 1 (switch 1 and switch 2) and the switches in subnet 2 (switch 3 and switch 4) are located in the same VXLAN network, and the switches of the two subnets have the same VXLAN scope and different network segments. In the present application, switches with the same VXLAN scope and different network segments are referred to as switches of the same site.
In the figure, the switches in VXLAN 1 (switches 1, 2, 3, 4) and the switches in VXLAN 2 (switches 5, 6, 7, 8) are located in different VXLAN networks, and the VXLAN scopes of the switches in the two networks are different. Switches with different VXLAN scopes are referred to herein as switches of different sites.
In the application, an SDN controller 104 is deployed in an Overlay network, and switches of various stations are centrally controlled by the SDN controller. In one example, a user may pre-configure a switch device of an Overlay network to establish a connection with an SDN controller, and the SDN controller communicates with the switch via an Openflow or Netconf standard, although other standards are also possible. The SDN controller will establish unicast VXLAN tunnels between all VTEP devices, carrying all VXLAN traffic between VTEPs.
Fig. 2 illustrates the operation principle of the SDN controller in the present application. Hereinafter, a virtual machine is referred to as a multicast source or a multicast member, and a switch is referred to as a node.
S201, determining the type of a node, where the type of the node includes a root node and a non-root node, and the root node includes at least one of the following: a subnet root node, a site root node, and a global root node;
S202, according to the type of the node and the obtained network topology, sending IP addresses to each node and notifying each node to establish a direct connection tunnel according to the received IP address, so that each node forwards a multicast packet through the direct connection tunnel after receiving the multicast packet.
For S201, in an embodiment, to determine the type of a node, the SDN controller may obtain the processing capability parameters and location information of each node. The processing capability parameters may include whether multicast replication is supported, and the like; the location information includes the site parameter, subnet parameter, and network segment parameter of the switch. The connection relations between nodes can be calculated from the location information of the nodes, so that the nodes traversed on the path from the multicast source to each multicast member can be selected. As an example, the Overlay network of the present application may further include a virtualization platform (see the virtualization platform in fig. 3a), and a communication channel is established between the SDN controller and the virtualization platform for information interaction. The processing capability parameters and location information of each node acquired by the SDN controller may come from the virtualization platform.
The virtualization platform is used to manage the virtual machines. The user can pre-configure the virtualization platform to establish management connections with the physical servers where the switches are located, and control the life cycle of the virtual machines on the physical servers through standard message interfaces, including creation, deletion, start, stop, and the like. In addition, the virtualization platform also manages the hardware resources of all physical servers, including CPU and memory information, as well as location information.
The virtualization platform notifies the SDN controller of the processing capability parameters (e.g., CPU capability and number) and location information of each physical server, so that the SDN controller maintains and stores a database of the multicast replication capability of each switch, the location of its physical server, and the VTEP IP address and network segment of the switch deployed on the physical server. In addition, when the virtualization platform deploys a virtual machine in the Overlay network, the virtual machine is placed on a physical server and connected to the external network through a switch on that server, and a VXLAN, MAC address, and IP address are associated with each virtual network card of the virtual machine. The virtualization platform may advertise this information to the SDN controller, so that the SDN controller knows which switch the virtual machine is connected to.
In the present application, a multicast packet is replicated by the root nodes of each level to the root nodes of the next level, so the policy for selecting which nodes act as root nodes can refer to the processing capability parameters of the nodes. A processing capability parameter may be whether the node has multicast replication capability, the level of its multicast replication capability, and so on. In one embodiment, when selecting the root node of each level, the node with the highest multicast replication capability may be preferred; if two or more nodes have the same highest multicast replication capability, the one with the smaller VTEP IP address may be selected.
Specifically, when determining the global root node, the nodes of all sites in the Overlay network may be taken as candidates and the node with the highest multicast replication capability may be configured as the global root node; if there are several nodes with the same multicast replication capability, the one with the smaller VTEP IP address may be configured as the global root node. When determining the site root node of a site, all nodes in the site may be taken as candidates and the node with the highest multicast replication capability may be selected; if there are several nodes with the same multicast replication capability, the one with the smaller VTEP IP address may be used as the site root node. When determining a subnet root node, all nodes in the subnet may be taken as candidates, and the node with the highest multicast replication capability may be configured as the subnet root node.
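As a non-limiting illustration only, the selection rule described above (prefer the node with the highest multicast replication capability, break ties with the smaller VTEP IP address) could be sketched roughly as follows; the Node structure, field names, and function names are assumptions introduced for this sketch and are not part of the patent.

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class Node:
    name: str               # e.g. "switch 1" (illustrative)
    vtep_ip: str            # VTEP IP address of the switch
    site: str               # site the switch belongs to
    subnet: str             # subnet the switch belongs to
    mcast_capability: int   # 0 = no multicast replication support, larger = stronger

def pick_root(candidates):
    """Highest multicast replication capability wins; ties go to the smaller VTEP IP."""
    capable = [n for n in candidates if n.mcast_capability > 0]
    if not capable:
        return None
    return min(capable,
               key=lambda n: (-n.mcast_capability, ipaddress.ip_address(n.vtep_ip)))

def select_roots(nodes):
    """Return the global root, one site root per site, and one subnet root per subnet."""
    by_site, by_subnet = {}, {}
    for n in nodes:
        by_site.setdefault(n.site, []).append(n)
        by_subnet.setdefault((n.site, n.subnet), []).append(n)
    site_roots = {site: pick_root(members) for site, members in by_site.items()}
    subnet_roots = {key: pick_root(members) for key, members in by_subnet.items()}
    return pick_root(nodes), site_roots, subnet_roots
```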
Taking fig. 1 as an example, when switch 3 is a multicast member of switch 1, because switch 3 and switch 1 are in different subnets, it is necessary to determine the subnet root node of each subnet and the site root node connecting the two subnets. Assuming that switches 1, 2, 3, and 4 all have multicast replication capability and that switch 3 and switch 4 have the same multicast replication capability, when selecting the subnet root node, the one of switches 3 and 4 with the smaller VTEP IP address may be selected as the subnet root node of the subnet in which switch 3 and switch 4 are located.
Table 1 lists examples of processing capability parameters and location information of nodes stored by an SDN controller.
TABLE 1
(Table 1 is reproduced as an image in the original publication.)
Table 2 is an example of a root node calculated according to table 1 as each level.
TABLE 2
(Table 2 is reproduced as an image in the original publication.)
In one embodiment, the policy for selecting which nodes act as root nodes may further take into account the root-node roles that a node currently holds, so that the current CPU load of the node is considered when selecting root nodes. The recorded information may include the VXLAN network identifier and a list of the IP addresses of the nodes at both ends of the corresponding tunnel in that VXLAN network.
For S202, since packets in the VXLAN network need to be transmitted over the physical network through tunnels, after determining the nodes at each level according to the method disclosed in S201, the SDN controller needs to notify the nodes that require tunnels but have not yet established them to establish the tunnels. The specific process may be:
the SDN controller traverses all selected root nodes and receivers,
judging whether the node is a subnet root node and a tunnel with the subnet root node is not established, if so, informing the node of establishing the tunnel with the subnet root node; the path between the receiver and the subnet root node is the tunnel between the receiver and the subnet root node.
If the node is the subnet root node and the tunnel from the node to the site root node is not established, the node is informed to establish the tunnel with the site root node; the paths between the recipients of different subnets are both tunnels to the respective subnet root node and tunnels from the respective subnet root node to the site root node.
If the node is the site root node and a tunnel from the node to the global root node is not established, the node is informed of establishing the tunnel with the global root node; the paths between the recipients at different sites are the tunnel of both to the respective subnet root node, the tunnel of the respective subnet root node to the site root node, and the tunnel of the site root node to the global root node.
After the tunnel is established, two nodes of the tunnel are traversed, and if the corresponding tunnel on the node does not join the VXLAN, the tunnels are added into the corresponding VXLAN.
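The following sketch is purely an illustration of the traversal just described; the parameter names and the notify_establish_tunnel callback are hypothetical stand-ins for the controller-to-switch notification and do not name any real API.

```python
def build_tunnels(nodes, subnet_roots, site_roots, global_root,
                  subnet_root_of, site_root_of,
                  established, notify_establish_tunnel):
    """Traverse all selected root nodes and receivers and ask each node to set up
    the missing tunnel towards the next level, mirroring the process of S202.

    nodes            : identifiers of all selected root nodes and receivers
    subnet_roots     : set of nodes acting as subnet root nodes
    site_roots       : set of nodes acting as site root nodes
    subnet_root_of   : maps a node to the subnet root node of its subnet
    site_root_of     : maps a node to the site root node of its site
    established      : set of frozenset({a, b}) pairs that already have a tunnel
    """
    def ensure(a, b):
        # Notify only when the two ends differ and no tunnel exists yet.
        if a != b and frozenset((a, b)) not in established:
            notify_establish_tunnel(a, b)
            established.add(frozenset((a, b)))

    for node in nodes:
        if node not in subnet_roots:
            # Receiver (non-root node): tunnel to the subnet root of its subnet.
            ensure(node, subnet_root_of[node])
        if node in subnet_roots:
            # Subnet root node: tunnel up to the site root of its site.
            ensure(node, site_root_of[node])
        if node in site_roots:
            # Site root node: tunnel up to the global root node.
            ensure(node, global_root)
    return established
```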
The nodes at the various levels in the network include the root nodes on the path from the multicast source to each multicast member, and the non-root nodes (hereinafter referred to as receivers) in the same subnet as a subnet root node. The number of tunnels included in a multicast forwarding path may differ depending on the location of the multicast member. By executing steps S201 and S202, the nodes are built into a network structure interconnected in a tree structure. It can be seen that, in the established structure, a subnet root node is responsible for replicating a multicast packet according to the number of nodes in the same subnet and forwarding it to the other nodes in the same subnet, and/or forwarding it to the site root node; a site root node is responsible for replicating the multicast packet according to the number of subnet root nodes in the same site and forwarding it to the subnet root nodes in the same site, and/or forwarding it to the global root node; and the global root node is responsible for replicating the multicast packet according to the number of site root nodes and forwarding it to the site root nodes.
For example, when the multicast source and the multicast member are in the same subnet, the root node may be a subnet root node. In this case the packet of the multicast source needs to be sent to the subnet root node, and the multicast forwarding path is that the subnet root node forwards the packet to each node in the subnet.
When the multicast source and the multicast member are in different subnets of the same site, the root nodes may be subnet root nodes and a site root node. In this case the packet of the multicast source needs to be sent to the subnet root node of subnet 1, and the multicast forwarding path is that the subnet root node of subnet 1 forwards the multicast packet to the site root node, and the site root node forwards it to the subnet root node of subnet 2 where the multicast member is located.
When the multicast source and the multicast member are in different sites, the root nodes may include the global root node, site root nodes, and subnet root nodes. In this case the packet of the multicast source needs to be forwarded by the subnet root node of site 1 to the site root node of site 1, and the multicast forwarding path is that the site root node of site 1 forwards the multicast packet to the global root node, the global root node forwards it to the site root node of site 2 where the multicast member is located, and the site root node of site 2 then forwards it to the subnet root node of subnet 2 where the multicast member is located. The method and the device can thus solve the prior-art problem of wasted network bandwidth between data centers caused by multiple copies being sent across data centers.
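For illustration only, the path determination in the three cases above could be sketched as follows; the dictionaries and names are assumptions of this sketch, not structures defined in the patent.

```python
def multicast_path(src_node, dst_node, subnet_of, site_of,
                   subnet_root_of, site_root_of, global_root):
    """Return the ordered nodes a multicast packet traverses from the node attached
    to the multicast source to the node attached to a multicast member, following
    the subnet-root / site-root / global-root hierarchy."""
    path = [src_node]

    def append(n):
        if path[-1] != n:     # skip duplicates when one node plays several roles
            path.append(n)

    append(subnet_root_of[src_node])          # up to the source's subnet root
    if subnet_of[src_node] == subnet_of[dst_node]:
        append(dst_node)                      # same subnet: forward directly
        return path
    append(site_root_of[src_node])            # up to the source's site root
    if site_of[src_node] != site_of[dst_node]:
        append(global_root)                   # across sites: via the global root
        append(site_root_of[dst_node])        # down to the member's site root
    append(subnet_root_of[dst_node])          # down to the member's subnet root
    append(dst_node)
    return path
```

Under the roles of Table 2 and the topology of fig. 1, such a sketch would reproduce the paths of figs. 2a-2d, e.g. switch 2 - switch 1 - switch 3 - switch 4 for a member behind switch 4.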
Still taking the data in Table 1 and Table 2 as an example, fig. 2e shows the network topology after the SDN controller performs root node selection and tunnel establishment; it can be seen that a multicast tree rooted at the global root node is formed in the network.
As an embodiment, in the tunnel update process, when a new multicast member joins or an existing member exits the multicast group, the virtualization platform notifies the SDN controller of the IP address and VXLAN of the virtual machine and the information of the physical server where it resides, from which the SDN controller obtains the switch to which this virtual machine is connected.
In one example, for the case of joining a new multicast member, if the switch directly connected to the multicast member is an existing switch in an Overlay network, a path of the new multicast member to be joined may be obtained according to a previously established tunnel; if the switch directly connected with the multicast member is a switch which does not exist in the Overlay network, the switch is used as a receiver of the subnet where the switch is located, and the switch is informed to establish a tunnel between the switch and a subnet root node in the subnet, so that a path of a newly added multicast member is obtained.
For the case of revoking a virtual machine, if the switch directly connected to the virtual machine is still connected to other virtual machines, the multicast forwarding tree does not need to be updated. If the switch is no longer connected to other virtual machines in the same site, pruning is performed: if the switch is no longer connected to any other virtual machine of the VXLAN and has only one tunnel located in the site, that tunnel is deleted from the site; the device at the opposite end of the tunnel performs the same processing.
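A rough illustration of the join and prune handling described above follows; the function names and data structures are invented for this sketch and are assumptions only.

```python
def on_member_join(switch, subnet, known_switches, subnet_root_of,
                   notify_establish_tunnel):
    """A newly joined member reuses existing tunnels if its switch is already in the
    Overlay network; otherwise the switch becomes a receiver and is told to build a
    tunnel to the subnet root node of its subnet."""
    if switch not in known_switches:
        known_switches.add(switch)
        notify_establish_tunnel(switch, subnet_root_of[subnet])

def on_vm_revoked(switch, vxlan, vms_on_switch, tunnels_in_site,
                  notify_delete_tunnel):
    """Prune only if the switch no longer serves any virtual machine of this VXLAN
    and has exactly one tunnel located in the site; the peer end of that tunnel is
    then subject to the same check."""
    if vms_on_switch.get((switch, vxlan)):
        return  # other VMs of this VXLAN still attached: keep the tree unchanged
    site_tunnels = tunnels_in_site.get(switch, [])
    if len(site_tunnels) == 1:
        peer = site_tunnels[0]
        notify_delete_tunnel(switch, peer)
        tunnels_in_site[switch] = []
        # The controller would repeat the same pruning check on 'peer'.
```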
Fig. 2a to fig. 2d are schematic diagrams of paths for multicast forwarding according to the tunnel established in steps S201 and S202 when the multicast source is virtual machine 2 and the multicast members are virtual machines 1, 3, 4, 5, 6, 7, and 8.
As shown in fig. 2a, since the switch 1 and the switch 2 belong to nodes in the same subnet, the nodes through which the multicast packet of the virtual machine 2 reaches the virtual machine 1 are: switch 2 (node directly connected to multicast source) -switch 1 (subnet root node 1);
as shown in fig. 2b, since the switch 3 and the switch 2 belong to nodes of different subnets, the nodes through which the multicast packet of the virtual machine 2 reaches the virtual machine 3 are: switch 2-switch 1 (subnet root 1 of subnet 1.1.1.0 and root of site 1) -switch 3 (subnet root 2 of subnet 1.1.2.0);
as shown in fig. 2c, since the switch 4 and the switch 2 belong to nodes of different subnets, the nodes through which the multicast packet of the virtual machine 2 reaches the virtual machine 4 are: switch 2-switch 1 (subnet root node 1 of subnet 1.1.1.0 and site root node of site 1) -switch 3 (subnet root node 2 of subnet 1.1.2.0) -switch 4;
as shown in fig. 2d, since the switch 5 and the switch 2 belong to nodes of different sites, the multicast packet of the virtual machine 2 reaches the node in the site 1 experienced by the virtual machine 5: switch 2 — switch 1 (subnet root node of subnet 1.1.1.0, site root node of site 1, global root node); the nodes in site 2 are as shown by number 1 in the figure: switch 6 (site root node for site 2 and subnet root node 3 for 2.2.2.0) — switch 5 (receiver);
since the switch 6 and the switch 2 belong to nodes of different sites, the nodes through which the multicast packet of the virtual machine 2 reaches the virtual machine 6 are: switch 2 — switch 1 (subnet root node for subnet 1.1.1.0 and site root node, global root node for site 1); the nodes in site 2 are as shown by number 2 in the figure: switch 6 (site root node for site 2 and subnet root node 3 for 2.2.2.0);
since the switch 7 and the switch 2 belong to nodes of different sites, the nodes through which the multicast packet of the virtual machine 2 reaches the virtual machine 7 are: switch 2 — switch 1 (subnet root node for subnet 1.1.1.0 and site root node, global root node for site 1); the nodes in site 2 are as shown in figure number 3: switch 6 (site root node for site 2) -switch 8 (subnet root node 4 for 3.3.3.0) -switch 7 (receiver);
since the switch 8 and the switch 2 belong to nodes of different sites, the nodes through which the multicast packet of the virtual machine 2 reaches the virtual machine 8 are: switch 2 — switch 1 (subnet root node of subnet 1.1.1.0, site root node of site 1, global root node); the nodes in site 2 are shown as number 4 in the figure: switch 6 (site root node for site 2) -switch 8 (subnet root node 4 for 3.3.3.0).
Fig. 3a is a network architecture diagram in a specific application scenario of the present application. Two VXLAN networks are included in the Overlay network, as well as an SDN controller and a virtualization platform. The SDN controller and the virtualization platform manage physical servers where the switches are located through a management network, wherein the SDN controller manages the switches, and the virtualization platform manages virtual machines.
Using the centralized control of the switches by the SDN controller, a user enters multicast replication policy information on the SDN controller. When virtual machines are deployed on the physical servers where the switches are located, the VXLAN scope of each switch is identified from the VXLAN of the virtual machines, whether the VXLAN spans data centers is identified from the physical servers where the virtual machines are located, and whether the VXLAN spans different network segments is identified from the IP addresses of the switches. The SDN controller obtains the CPU multicast replication capability of each switch from the virtualization platform and comprehensively determines the nodes responsible for multicast replication.
The process of implementing the multicast replication scheme of the present application in this example is as follows:
referring to fig. 3b, S301, a user pre-configures a network;
A. The user pre-configures the switches of the Overlay network to establish connections with the SDN controller; the communication protocol with the SDN controller is Openflow.
B. The user pre-configures the virtualization platform to establish management connection with a physical server where the switch is located, so that the virtualization platform can communicate through a related standard message interface to control the life cycle of the virtual machine on the physical server.
C. And a communication connection channel is established between the SDN controller and the virtualization platform and used for interacting information related to the virtual machine.
S302, the virtualization platform exchanges information related to the virtual machines with the SDN controller. The virtualization platform sends to the SDN controller the hardware resources of the physical servers where all the switches are located, including the multicast replication capability parameters, VTEP IP addresses, and network segment information of the switches deployed on those physical servers.
S303, the SDN controller selects a root node and a receiver at each level.
According to the information about the virtual machines obtained in S302, the SDN controller selects the root nodes and receivers on the paths from the multicast source to the multicast members, with reference to the multicast replication capability of each switch.
Assuming that the information about each switch stored in the multicast replication node decision database of the SDN controller is as shown in Table 1, the selected root nodes may be as shown in Table 2.
S304, notifying each node to establish the tunnel connections between the nodes at each level. The notification may carry the IP addresses of the nodes at both ends of the tunnel. The tree structure formed by the established tunnels is shown in fig. 2e.
After selecting the nodes traversed between the multicast source and the multicast members, all related nodes are traversed: if a node is not a subnet root node and its tunnel to the subnet root node has not been established, the node is notified to establish a tunnel to the subnet root node; if a node is a subnet root node and its tunnel to the site root node has not been established, the node is notified to establish a tunnel to the site root node; and if a node is a site root node and its tunnel to the global root node has not been established, the node is notified to establish a tunnel to the global root node.
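As an aside, the content of such a tunnel-establishment notification (per S304 it carries the IP addresses of the nodes at both ends, and the tunnel is associated with a VXLAN identifier) might be modeled as in the sketch below; the class and field names are assumptions of this sketch and do not describe any real controller or switch API.

```python
from dataclasses import dataclass

@dataclass
class TunnelSetupNotification:
    vni: int             # VXLAN network identifier the tunnel is added to
    local_vtep_ip: str   # VTEP IP of the node receiving the notification
    remote_vtep_ip: str  # VTEP IP of the node at the other end of the tunnel

def notifications_for_tunnel(vni, vtep_a, vtep_b):
    """One notification per tunnel end, so each node learns its peer's VTEP IP."""
    return [TunnelSetupNotification(vni, vtep_a, vtep_b),
            TunnelSetupNotification(vni, vtep_b, vtep_a)]
```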
S305: when the exchanger 1 is used as the multicast source to send the multicast to the exchangers 2, 3, 4, 5, 6 and 7, each node copies and forwards the multicast message according to the established tunnel. The method comprises the following specific steps:
Switch 1 acts as a subnet root node and has a tunnel to switch 2, so switch 1 makes one copy of the multicast packet and forwards it to switch 2;
because switch 1 is the global root node, after completing the multicast replication within its subnet it does not need to forward the packet upstream and directly starts downstream forwarding. Since the site root nodes attached to it are switch 1 itself and switch 6, only one copy of the multicast packet needs to be made and sent to switch 6;
for site 1, switch 1 acts as the site root node, and the subnet root nodes attached to it are switch 1 itself and switch 3, so switch 1 makes one more copy of the multicast packet and sends it to switch 3; switch 3, as a subnet root node, makes one copy of the multicast packet and sends it to switch 4.
For site 2, switch 6 acts as the site root node and makes one copy of the multicast packet and sends it to switch 8; switch 6, as a subnet root node, makes one copy of the multicast packet and sends it to the receiver switch 5 in its subnet; switch 8, as a subnet root node, makes one copy of the multicast packet and sends it to the receiver switch 7 in its subnet.
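To illustrate the layer-by-layer replication just walked through, the fan-out at each node can be modeled by the small sketch below; the tree is taken from the forwarding relationships described above, and the names and callback are invented for this sketch.

```python
def fanout(node, packet, children_of, send):
    """Copy 'packet' once per next-level node directly attached to 'node', forward
    each copy over the corresponding tunnel, and recurse so the packet is replicated
    level by level down the multicast tree."""
    for child in children_of.get(node, []):
        copy = dict(packet)          # one head-end copy per direct-connection tunnel
        send(node, child, copy)
        fanout(child, copy, children_of, send)

# Tree corresponding to the example of S305 (switch 1 is subnet, site and global root):
children_of = {
    "switch 1": ["switch 2", "switch 3", "switch 6"],
    "switch 3": ["switch 4"],
    "switch 6": ["switch 5", "switch 8"],
    "switch 8": ["switch 7"],
}
fanout("switch 1", {"group": "example"}, children_of,
       lambda a, b, p: print(f"{a} -> {b}"))
```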
Corresponding to the foregoing embodiment of the method for multicast replication in an Overlay network, the present application also provides an embodiment of a device for multicast replication in an Overlay network.
The embodiment of the device for multicast replication in the Overlay network can be applied to an SDN controller. The device embodiments may be implemented by software, by hardware, or by a combination of hardware and software. Taking software implementation as an example, as a device in a logical sense, the device is formed by the processor of the SDN controller where it is located reading the corresponding computer program instructions from non-volatile memory into memory and running them. In terms of hardware, fig. 4 shows a hardware structure diagram of the SDN controller where the device for multicast replication in an Overlay network is located; besides the processor, memory, network interface, and non-volatile memory shown in fig. 4, the SDN controller where the device is located may, in an embodiment, also include other hardware according to the actual functions of the SDN controller, which is not described again.
Referring to fig. 5, an apparatus 500 for multicast replication in an Overlay network, located on an SDN controller, includes:
a role decision module 501, configured to determine types of nodes, where the types of nodes include a root node and a non-root node, and the root node includes at least one of the following: a subnet root node, a site root node, and a global root node;
and a communication module 502, configured to send IP addresses to each node according to the type of the node and the obtained network topology, and to notify each node to establish a direct connection tunnel according to the received IP address, so that each node forwards a multicast packet through the direct connection tunnel after receiving the multicast packet.
As an embodiment, the determining, by the role decision module 501, the node type specifically includes:
configuring the node with the strongest multicast replication capability in the same subnet or the node which supports the multicast replication capability and has the highest CPU capability and the smallest address in the same subnet as a subnet root node;
configuring a node with the strongest multicast replication capability in the same site or a node which supports the multicast replication capability and has the highest CPU capability and the smallest address in the same site as a site root node;
and configuring the nodes which support the multicast replication capability and have the strongest multicast replication capability in all the sites, or the switch which supports the multicast replication capability and has the highest CPU capability and the smallest address in all the sites as a global root node.
The notifying, by the communication module 502, each node of establishing a direct connection tunnel may specifically include:
judging whether the node is not a subnet root node and a tunnel to the subnet root node is not established, and if so, informing the node of establishing the tunnel between the node and the subnet root node;
judging whether the node is a subnet root node and a tunnel from the node to a site root node is not established, and if so, informing the node of establishing the tunnel between the node and the site root node;
and if the node is the site root node and the tunnel from the node to the global root node is not established, the node is informed to establish the tunnel between the node and the global root node.
The communication module 502 may also be configured to:
when a multicast source or multicast member is revoked, if the directly connected node is no longer connected to any other multicast source or multicast member of the same site and has only one tunnel located in the site, the node is notified to delete that tunnel in the site; and if the peer node at the other end of the tunnel is likewise no longer connected to any other multicast source or multicast member of the same site and has only one tunnel located in the site, the peer node is notified to delete that tunnel in the site.
The communication module 502 may also be configured to:
when a multicast member is added, if the node directly connected with the multicast member is a node which does not exist in the network, the node is informed to establish a tunnel between the node and a subnet root node in the subnet.
The role decision module 501 may also be configured to obtain, from the virtualization platform, the processing capability parameters and location information of each node in the Overlay network, where the processing capability parameters include whether multicast replication is supported and the multicast replication capability level, and the location information includes the site parameter, subnet parameter, and network segment parameter of the switch; and to store the processing capability parameters and the location information.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (8)

1. A method for multicast replication in an Overlay network is characterized in that the method is applied to a node connected with a multicast source in the Overlay network; the Overlay network comprises at least one site, and each site comprises at least one subnet; after receiving a notification for establishing a direct connection tunnel issued by an SDN controller based on the node types and a network topology structure, each node in the Overlay network is connected in a tree structure to form a multicast tree, wherein the multicast tree is as follows: the root node of the multicast tree is a global root node, and the global root node establishes a direct connection tunnel with the site root node of each site respectively; each site root node and a subnet root node of each subnet included by the site establish a direct connection tunnel; each subnet root node and a non-root node in the subnet establish a direct connection tunnel;
when the multicast source is located in a first subnet of a first site and the multicast member is located in a second subnet of the first site, the method comprises:
receiving a multicast message sent by a multicast source;
determining a multicast forwarding path for forwarding the multicast message according to the multicast tree;
and forwarding the multicast message based on the multicast forwarding path.
2. The method of claim 1, wherein the determining a multicast forwarding path for forwarding the multicast packet according to the multicast tree comprises:
determining a first subnet root node of the first subnet, a first site root node which is connected with the first subnet root node and belongs to a first site, a second subnet root node which is connected with the first site root node and belongs to a second subnet, and a node which is directly connected with a multicast member according to the multicast tree;
determining that the nodes on the multicast forwarding path are sequentially: the first subnet root node, the first site root node, the second subnet root node, a node directly connected with a multicast member, and the multicast member.
3. The method of claim 1, wherein the multicast tree in the Overlay network is established by:
an SDN controller in the Overlay network determines the type of each node, and if the node is not a subnet root node and a subnet root node tunnel is not established, the SDN controller informs the node of establishing the tunnel between the node and the subnet root node; judging whether the node is a subnet root node and a tunnel from the node to a site root node is not established, and informing the node of establishing the tunnel between the node and the site root node; and if the node is the site root node and the tunnel from the node to the global root node is not established, the node is informed to establish the tunnel between the node and the global root node.
4. The method of claim 1, wherein the node type of the node is a non-root node;
or,
the node type of the node comprises one or more of the following combinations:
a subnet root node of a subnet where the node is located;
a site root node of a site where the node is located;
a global root node.
5. The device for multicast replication in the Overlay network is characterized in that the device is applied to a node connected with a multicast source in the Overlay network; the Overlay network comprises at least one site, and each site comprises at least one subnet; after receiving a notification for establishing a direct connection tunnel issued by an SDN controller based on the node types and a network topology structure, each node in the Overlay network is connected in a tree structure to form a multicast tree, wherein the multicast tree is as follows: the root node of the multicast tree is a global root node, and the global root node establishes a direct connection tunnel with the site root node of each site respectively; each site root node and a subnet root node of each subnet included by the site establish a direct connection tunnel; each subnet root node and a non-root node in the subnet establish a direct connection tunnel;
when the multicast source is located in a first subnet of a first site and the multicast member is located in a second subnet of the first site, the apparatus includes:
a receiving unit, configured to receive a multicast packet sent by a multicast source;
a determining unit, configured to determine, according to the multicast tree, a multicast forwarding path for forwarding the multicast packet;
and the forwarding unit is used for forwarding the multicast message based on the multicast forwarding path.
6. The apparatus according to claim 5, wherein the determining unit is specifically configured to determine, according to the multicast tree, a first subnet root node of the first subnet, a first site root node that is connected to the first subnet root node and belongs to a first site, a second subnet root node that is connected to the first site root node and belongs to a second subnet, and a node that is directly connected to a multicast member;
determining that the nodes on the multicast forwarding path are sequentially: the first subnet root node, the first site root node, the second subnet root node, a node directly connected with a multicast member, and the multicast member.
7. The apparatus of claim 5, wherein the multicast tree in the Overlay network is established by:
an SDN controller in the Overlay network determines the type of each node, and if the node is not a subnet root node and a subnet root node tunnel is not established, the SDN controller informs the node of establishing the tunnel between the node and the subnet root node; judging whether the node is a subnet root node and a tunnel from the node to a site root node is not established, and informing the node of establishing the tunnel between the node and the site root node; and if the node is the site root node and the tunnel from the node to the global root node is not established, the node is informed to establish the tunnel between the node and the global root node.
8. The apparatus of claim 5, wherein the node type of the node is a non-root node;
or,
the node type of the node comprises one or more of the following combinations:
a subnet root node of a subnet where the node is located;
a site root node of a site where the node is located;
a global root node.
CN201811591207.5A 2015-09-28 2015-09-28 Method and device for multicast replication in Overlay network Active CN109660442B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811591207.5A CN109660442B (en) 2015-09-28 2015-09-28 Method and device for multicast replication in Overlay network

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811591207.5A CN109660442B (en) 2015-09-28 2015-09-28 Method and device for multicast replication in Overlay network
CN201510628198.2A CN105162704B (en) 2015-09-28 2015-09-28 The method and device of multicast replication in Overlay network

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201510628198.2A Division CN105162704B (en) 2015-09-28 2015-09-28 The method and device of multicast replication in Overlay network

Publications (2)

Publication Number Publication Date
CN109660442A CN109660442A (en) 2019-04-19
CN109660442B true CN109660442B (en) 2021-04-27

Family

ID=54803463

Family Applications (4)

Application Number Title Priority Date Filing Date
CN201510628198.2A Active CN105162704B (en) 2015-09-28 2015-09-28 The method and device of multicast replication in Overlay network
CN201811591183.3A Active CN109660441B (en) 2015-09-28 2015-09-28 Method and device for multicast replication in Overlay network
CN201811591207.5A Active CN109660442B (en) 2015-09-28 2015-09-28 Method and device for multicast replication in Overlay network
CN201811590077.3A Active CN109561033B (en) 2015-09-28 2015-09-28 Method and device for multicast replication in Overlay network

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CN201510628198.2A Active CN105162704B (en) 2015-09-28 2015-09-28 The method and device of multicast replication in Overlay network
CN201811591183.3A Active CN109660441B (en) 2015-09-28 2015-09-28 Method and device for multicast replication in Overlay network

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201811590077.3A Active CN109561033B (en) 2015-09-28 2015-09-28 Method and device for multicast replication in Overlay network

Country Status (1)

Country Link
CN (4) CN105162704B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106301941B (en) * 2016-08-29 2019-08-27 广州西麦科技股份有限公司 Mono- key dispositions method of Overlay and system
CN106411732B (en) * 2016-09-07 2020-11-20 新华三技术有限公司 Message forwarding method and device
CN108011800B (en) * 2016-10-31 2020-12-08 中国电信股份有限公司 Virtual extensible local area network VXLAN deployment method and VXLAN gateway
CN109327375B (en) * 2017-08-01 2021-04-30 中国电信股份有限公司 Method, device and system for establishing VXLAN tunnel
CN107872385B (en) * 2017-10-11 2020-10-23 中国电子科技集团公司第三十研究所 SDN network routing calculation and control method
CN109412978A (en) * 2018-10-17 2019-03-01 郑州云海信息技术有限公司 A kind of unicast method, virtual switch, SDN controller and storage medium
CN110445889B (en) * 2019-09-20 2020-06-02 中国海洋大学 Method and system for managing IP address of switch under Ethernet environment
CN112468612A (en) * 2020-11-30 2021-03-09 蔡俊龙 NAT penetration control method and system
CN113452551B (en) * 2021-06-11 2022-07-08 烽火通信科技股份有限公司 VXLAN tunnel topology monitoring method, device, equipment and storage medium
CN113365253B (en) * 2021-06-15 2023-10-03 上海高仙自动化科技发展有限公司 Node communication method, device, equipment, system and storage medium in network
CN114285679A (en) * 2021-12-09 2022-04-05 武汉船舶通信研究所(中国船舶重工集团公司第七二二研究所) Method and system for realizing heterogeneous network multicast based on centralized control
CN115051890A (en) * 2022-05-20 2022-09-13 中国电信股份有限公司 Message processing method, system, device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104115453A (en) * 2013-12-31 2014-10-22 华为技术有限公司 Method and device for achieving virtual machine communication
CN104253698A (en) * 2013-06-29 2014-12-31 华为技术有限公司 Message multicast processing method and message multicast processing equipment
CN104702476A (en) * 2013-12-05 2015-06-10 华为技术有限公司 Distributed gateway, message processing method and message processing device based on distributed gateway
CN104871495A (en) * 2012-09-26 2015-08-26 华为技术有限公司 Overlay virtual gateway for overlay networks

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100379226C (en) * 2004-12-14 2008-04-02 华为技术有限公司 Virtual special network multicast method by virtual router mode
CN1622551A (en) * 2004-12-15 2005-06-01 中国科学院计算机网络信息中心 Internal service system of layered type switching network and management control method thereof
CN100421407C (en) * 2005-11-22 2008-09-24 中国科学院计算机网络信息中心 Separating and merging IPv6 address space of switching network in hierarchy mode
CN1996931B (en) * 2005-12-31 2010-09-22 迈普通信技术股份有限公司 Network multicast method
CN101237393B (en) * 2007-01-30 2012-08-22 华为技术有限公司 A method and device and system for realizing quick multicast service switch
CN102195855B (en) * 2010-03-17 2014-10-08 华为技术有限公司 Business routing method and business network
CN102739501B (en) * 2011-04-01 2017-12-12 中兴通讯股份有限公司 Message forwarding method and system in two three layer virtual private networks
CN102316030B (en) * 2011-09-01 2014-04-09 杭州华三通信技术有限公司 Method for realizing two-layer internetworking of data center and device
CN102571616B (en) * 2012-03-16 2017-09-19 中兴通讯股份有限公司 Merging, dividing method, tunnel-associated device and the router in tunnel
US10135687B2 (en) * 2014-01-06 2018-11-20 Lenovo Enterprise Solutions (Singapore) Pte. Ltd. Virtual group policy based filtering within an overlay network
CN104301251B (en) * 2014-09-22 2018-04-27 新华三技术有限公司 A kind of QoS processing methods, system and equipment
CN104702480B (en) * 2015-03-24 2018-10-02 华为技术有限公司 The method and apparatus that protecting tunnel group is established in next-generation multicasting virtual private network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104871495A (en) * 2012-09-26 2015-08-26 华为技术有限公司 Overlay virtual gateway for overlay networks
CN104253698A (en) * 2013-06-29 2014-12-31 华为技术有限公司 Message multicast processing method and message multicast processing equipment
CN104702476A (en) * 2013-12-05 2015-06-10 华为技术有限公司 Distributed gateway, message processing method and message processing device based on distributed gateway
CN104115453A (en) * 2013-12-31 2014-10-22 华为技术有限公司 Method and device for achieving virtual machine communication

Also Published As

Publication number Publication date
CN109660441B (en) 2021-05-28
CN109561033B (en) 2021-08-24
CN109660441A (en) 2019-04-19
CN109660442A (en) 2019-04-19
CN105162704B (en) 2019-01-25
CN109561033A (en) 2019-04-02
CN105162704A (en) 2015-12-16

Similar Documents

Publication Publication Date Title
CN109660442B (en) Method and device for multicast replication in Overlay network
EP3210348B1 (en) Multicast traffic management in an overlay network
US10263808B2 (en) Deployment of virtual extensible local area network
US20200396162A1 (en) Service function chain sfc-based communication method, and apparatus
CN105471744B (en) A kind of virtual machine migration method and device
CN108964940B (en) Message sending method and device and storage medium
CN110324165B (en) Network equipment management method, device and system
US20170264496A1 (en) Method and device for information processing
CN105577502B (en) Service transmission method and device
CN102055665B (en) OSPF point-to-multipoint over broadcast or NBMA mode
US9590824B1 (en) Signaling host move in dynamic fabric automation using multiprotocol BGP
EP3069471B1 (en) Optimized multicast routing in a clos-like network
CN107317768B (en) Traffic scheduling method and device
EP3402130B1 (en) Information transmission method and device
CN102970231B (en) Multicast data flow forwards implementation method and route-bridge(RB)
JP2018536345A (en) Firewall cluster
CN107040441B (en) Cross-data-center data transmission method, device and system
US11362954B2 (en) Tunneling inter-domain stateless internet protocol multicast packets
CN108632147B (en) Message multicast processing method and device
WO2018068588A1 (en) Method and software-defined networking (sdn) controller for providing multicast service
US20190215191A1 (en) Deployment Of Virtual Extensible Local Area Network
US9438475B1 (en) Supporting relay functionality with a distributed layer 3 gateway
WO2015081785A1 (en) Method and device for virtualized access
CN113037883B (en) Method and device for updating MAC address table entries
CN112953832A (en) Method and device for processing MAC address table items

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant