WO2016155589A1 - Forwarding multicast packets - Google Patents

Forwarding multicast packets

Info

Publication number
WO2016155589A1
Authority
WO
WIPO (PCT)
Prior art keywords
entry, packet, multicast, umgid, tenant
Application number
PCT/CN2016/077480
Other languages
French (fr)
Inventor
Ju Wang
Dehan Yan
Original Assignee
Hangzhou H3C Technologies Co., Ltd.
Application filed by Hangzhou H3C Technologies Co., Ltd.
Publication of WO2016155589A1

Classifications

    • H04L 12/4641 Virtual LANs, VLANs, e.g. virtual private networks [VPN]
    • H04L 41/0813 Configuration setting characterised by the conditions triggering a change of settings
    • H04L 41/0895 Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
    • H04L 12/1886 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast with traffic restrictions for efficiency improvement, e.g. involving subnets or subdomains
    • H04L 45/16 Multipoint routing


Abstract

In an example, after receiving a first multicast packet from an underlay network, a virtual switch (vSwitch) determines a first virtual machine (VM) corresponding to a first underlay network multicast group identity and a first overlay network multicast group identity in the first multicast packet by using a category-1 entry, performs VXLAN decapsulation on the first multicast packet and sends the decapsulated first multicast packet to the first VM. After receiving a second multicast packet from a second VM, the vSwitch determines a second underlay network multicast group identity corresponding to a tenant ID of a tenant to which the second VM belongs and a second overlay network multicast group identity in the second multicast packet by using a category-2 entry, performs VXLAN encapsulation on the second multicast packet using the second underlay network multicast group identity, and forwards the encapsulated second multicast packet to the underlay network.

Description

FORWARDING MULTICAST PACKETS

Background
Virtual eXtensible Local Area Network (VXLAN) is a layer 2 VPN technique that runs over an IP network and uses “MAC in UDP” encapsulation. VXLAN enables a layer 2 virtual network to overlay a layer 3 physical network. The layer 2 virtual network is referred to as an overlay network, and the layer 3 physical network is referred to as an underlay network.
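For orientation, the following is a minimal sketch of the “MAC in UDP” idea: it packs the 8-byte VXLAN header (flags byte 0x08, a 24-bit VNI) defined by RFC 7348. The frame bytes and VNI value are illustrative, and a real VTEP would additionally build the outer Ethernet, IP and UDP headers.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned UDP port for VXLAN (RFC 7348)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags byte 0x08 (VNI valid),
    3 reserved bytes, a 24-bit VNI, and 1 reserved byte."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!B3s3sB", 0x08, b"\x00" * 3, vni.to_bytes(3, "big"), 0x00)

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the VXLAN header to an inner layer 2 frame; the result is
    carried as the payload of a UDP datagram to port 4789."""
    return vxlan_header(vni) + inner_frame

# Illustrative use: wrap a dummy 14-byte Ethernet header plus payload for VNI 100.
encapsulated = vxlan_encapsulate(b"\x00" * 14 + b"payload", vni=100)
```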
Brief Description of the Drawings
Features of the present disclosure are illustrated by way of example and not limited in the following figures.
FIG. 1 is a schematic diagram illustrating a network in accordance with examples of the present disclosure;
FIG. 2 is a schematic diagram illustrating a device in accordance with examples of the present disclosure;
Fig. 3 is a flowchart illustrating a method of forwarding multicast packets in accordance with examples of the present disclosure;
FIG. 4 is a flowchart illustrating a method of a vSwitch obtaining a category-1 entry in accordance with examples of the present disclosure;
FIG. 5 is a flowchart illustrating a method of a vSwitch deleting a category-1 entry in accordance with examples of the present disclosure;
FIG. 6 is a flowchart illustrating a method of a vSwitch deleting a category-2 entry in accordance with examples of the present disclosure;
FIG. 7 is a flowchart illustrating a method of joining a multicast group in accordance with examples of the present disclosure;
FIG. 8 is a flowchart illustrating a method of leaving a multicast group in accordance with examples of the present disclosure; and
FIG. 9 is a schematic diagram illustrating an apparatus of forwarding multicast packets in accordance with examples of the present disclosure.
Detailed Descriptions
Various examples of the present disclosure may implement multicast in an overlay network by using a Software Defined Network (SDN) and an underlay network.
SDN separates the control plane of network devices from the data plane. An SDN may include an SDN controller and switching devices. According to some examples, the switching device may be a virtual switch (vSwitch).
FIG. 1 is a schematic diagram illustrating a network 10 in accordance with examples of the present disclosure. The network 10 may include an overlay network and an underlay network.
According to various examples, the overlay network may be implemented using SDN. The overlay network may include an SDN controller and plural vSwitches, such as vSwitch 1, vSwitch 2 and vSwitch n as shown in FIG. 1. In an example, the vSwitches may be configured to be VXLAN tunnel end points (VTEPs). In various examples, each vSwitch may store a set of category-1 entries (C1 entries) and a set of category-2 entries (C2 entries) for forwarding multicast packets. In an example, a C1 entry or a C2 entry may comprise a tenant identity, an overlay network multicast group identity, an underlay network multicast group identity and an egress port.
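As a minimal sketch (not the patent's flow-table encoding), the shared shape of a C1 or C2 entry described above could be modeled as follows; the field and value names are illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MulticastEntry:
    """Common shape of a C1 or C2 entry as described above."""
    tenant_id: str    # tenant identity, e.g. a VPCID
    omgid: str        # overlay network multicast group identity
    umgid: str        # underlay network multicast group identity
    egress_port: str  # C1: vport towards a local VM; C2: port towards the underlay

# Illustrative values matching the tenant/group names used later in the text.
c1_example = MulticastEntry("VPCID1", "G1", "G1'", "vport-to-VM11")
c2_example = MulticastEntry("VPCID1", "G1", "G1'", "uplink-to-underlay")
```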
FIG. 2 is a schematic diagram illustrating a device 20 in accordance with examples of the present disclosure. The device 20 may implement functions of a vSwitch. As shown in FIG. 2, the device 20 may include a processor 202, a non-transitory storage medium, e.g., a memory 204, a network communication interface 208, and an internal bus 210 connecting the components. The memory 204 may include logical instructions for controlling forwarding of multicast packets. The logical instructions may cause the processor 202 to:
after receiving a first multicast packet from an underlay network, determine a first virtual machine (VM) corresponding to a first underlay network multicast group identity (simply referred to as uMGID) and a first overlay network multicast group identity (simply referred to as oMGID) in the first multicast packet by using a category-1 entry, perform VXLAN decapsulation on the first multicast packet and forward the decapsulated first multicast packet to the first VM;
after receiving a second multicast packet from a second VM, determine a second uMGID corresponding to a tenant ID of a tenant to which the second VM belongs and a second oMGID in the second multicast packet by using a category-2 entry, perform VXLAN encapsulation on the second multicast packet using the second uMGID, and forward the encapsulated second multicast packet to the underlay network.
According to various examples, a vSwitch can forward multicast packets using a category-1 entry and a category-2 entry, thereby implementing multicast in an overlay network by using an underlay network. FIG. 3 is a flowchart illustrating a method 30 of forwarding multicast packets in accordance with examples of the present disclosure. The method 30 may include the following procedures.
At block 31, a vSwitch may receive a first multicast packet from an underlay network, determine a first VM corresponding to a first uMGID and a first oMGID in the first multicast packet by using a category-1 multicast entry (simply referred to as C1 entry), perform VXLAN decapsulation on the first multicast packet and send the decapsulated first multicast packet to the first VM.
At block 32, the vSwitch may receive a second multicast packet from a second VM, determine a second uMGID corresponding to a tenant ID of a tenant to which the second VM belongs and a second oMGID in the second multicast packet by using a category-2 multicast entry (simply referred to as C2 entry), perform VXLAN encapsulation on the second multicast packet using the second uMGID, and forward the encapsulated second multicast packet to the underlay network.
According to some examples, after receiving the second multicast packet from the second VM, the vSwitch may also determine a third VM corresponding to the tenant ID of the tenant of the second VM and the second oMGID in the second multicast packet by using a C1 entry, and forward the second multicast packet to the third VM if the third VM is not the second VM. For example, the vSwitch may search for a C1 entry which includes: the tenant ID of the tenant of the second VM and the second oMGID, and an egress port which is not the port from which the second multicast packet is received.
In an example, a C1 entry may include: a tenant ID, an oMGID, an uMGID, and an egress port. The vSwitch may search for a C1 entry which includes the first uMGID and the first oMGID, and determine the first VM using the egress port in the C1 entry found. In an example, the egress port is a port of the vSwitch which connects the vSwitch to the first VM.
According to some examples, a C2 entry may include: a tenant ID, an oMGID, an uMGID, and an egress port. The vSwitch may search for a C2 entry which includes the tenant ID of the tenant of the second VM and the second oMGID, determine the second uMGID to be the uMGID in the C2 entry found, and forward the second multicast packet which was processed through VXLAN encapsulation to the underlay network through the egress port in the C2 entry.
In an example, the vSwitch may obtain from the second multicast packet the VM ID of the second VM that sent the second multicast packet. According to some examples, static configuration in a vSwitch may include information of VMs connected to the vSwitch, e.g., a port of the vSwitch which connects the vSwitch to a VM, a VM ID of the VM, a tenant ID of a tenant to which the VM belongs, a VXLAN ID of a VXLAN to which the VM belongs, or the like. Therefore, after receiving a packet from a VM, the vSwitch may identify the VM using the port from which the packet is received, and determine the tenant ID of the tenant to which the VM belongs.
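Combining blocks 31 and 32 with the C1/C2 lookups just described, the two forwarding paths might be sketched as below. This is a sketch only: the packet attributes, the port_config mapping and the vxlan_decapsulate/vxlan_encapsulate_with_umgid helpers are assumptions, and entries are taken to be MulticastEntry objects as sketched earlier.

```python
def on_packet_from_underlay(vswitch, pkt):
    """Block 31: match the packet's uMGID and oMGID against C1 entries,
    decapsulate, and deliver to the VM behind the egress port."""
    for entry in vswitch.c1_entries:
        if entry.umgid == pkt.umgid and entry.omgid == pkt.omgid:
            inner = vxlan_decapsulate(pkt)                  # hypothetical helper
            vswitch.send(entry.egress_port, inner)

def on_packet_from_vm(vswitch, pkt, ingress_port):
    """Block 32, plus local replication through C1 entries."""
    # Static configuration maps the ingress port to the sending VM's tenant.
    tenant_id = vswitch.port_config[ingress_port].tenant_id
    # Forward to other local VMs of the same tenant and overlay group.
    for entry in vswitch.c1_entries:
        if (entry.tenant_id == tenant_id and entry.omgid == pkt.omgid
                and entry.egress_port != ingress_port):
            vswitch.send(entry.egress_port, pkt)
    # Encapsulate towards the underlay multicast group in the matching C2 entry.
    for entry in vswitch.c2_entries:
        if entry.tenant_id == tenant_id and entry.omgid == pkt.omgid:
            outer = vxlan_encapsulate_with_umgid(pkt, entry.umgid)  # hypothetical
            vswitch.send(entry.egress_port, outer)
```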
The C1 entries and the C2 entries may be obtained through different methods.
According to various examples, the vSwitch may obtain the C1 entries from the SDN controller. FIG. 4 is a flowchart illustrating a method of a vSwitch obtaining a C1 entry in accordance with examples of the present disclosure. As shown in FIG. 4, the method 40 may include the following procedures.
At block 41, a first multicast group join packet may be received from a first VM, and provided to the SDN controller which may search for a first uMGID corresponding to a tenant ID of a tenant to which the first VM belongs and a first oMGID in the first multicast group join packet and generate a C1 entry. In an example, the multicast group join packet may include a group join message conforming to a multicast protocol, e.g., an Internet Group Management Protocol (IGMP) join message.
At block 42, the C1 entry sent by the SDN controller may be received and stored.
According to an example, after receiving the first multicast group join packet from the first VM, the vSwitch may create a second multicast group join packet which may include the first uMGID, and send the second multicast group join packet through a port connected to the underlay network so as to join the underlay network multicast group.
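A minimal sketch of this join path (blocks 41 and 42 plus the second join packet) follows; the controller round-trip API, the underlay_port attribute and the build_igmp_join helper are assumptions.

```python
def on_group_join_from_vm(vswitch, join_pkt):
    """Blocks 41-42: relay the join to the SDN controller, store the returned
    C1 entry, then join the corresponding underlay multicast group."""
    vswitch.controller.send_packet_in(join_pkt)       # hypothetical controller API
    c1_entry = vswitch.controller.receive_c1_entry()  # C1 entry pushed back
    vswitch.c1_entries.append(c1_entry)
    # Second join packet carrying the allocated uMGID, sent towards the underlay.
    underlay_join = build_igmp_join(c1_entry.umgid)   # hypothetical helper
    vswitch.send(vswitch.underlay_port, underlay_join)
```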
According to various examples, the vSwitch may obtain the C2 entries from the SDN controller. The vSwitch may receive a C2 entry sent by the SDN controller. The C2 entry may be created by the SDN controller in response to a multicast group join packet sent by another vSwitch. That is, after receiving a multicast group join packet, the SDN controller may generate a relation which associates a tenant ID and an oMGID with an uMGID. The relation may be sent to the vSwitch that sent the multicast group join packet in the form of a C1 entry, and be synchronized into other vSwitches in the form of a C2 entry.
According to various examples, the vSwitch may delete a C1 entry after receiving a multicast group leave packet from a VM. FIG. 5 is a flowchart illustrating a method of a vSwitch deleting a C1 entry in accordance with examples of the present disclosure. As shown in FIG. 5, the method 50 may include the following procedures. In an example, the multicast group leave packet may include a group leave message conforming to a multicast protocol, e.g., an Internet Group Management Protocol (IGMP) leave message.
At block 51, a first multicast group leave packet may be received from a first VM, and provided to the SDN controller which may search for a first uMGID corresponding to a tenant ID of a tenant to which the first VM belongs and a first oMGID in the first multicast group leave packet and generate a C1 entry for deletion.
At block 52, the first C1 entry for deletion sent by the SDN controller may be received.
At block 53, the first C1 entry for deletion may be searched for in stored C1 entries, and the C1 entry found may be deleted.
In some examples, after deleting the C1 entry, the vSwitch may also search in remaining C1 entries for a C1 entry which includes the first uMGID in the deleted C1 entry. If no C1 entry is found, the vSwitch may create and send a second multicast group leave packet to leave the underlay network multicast group corresponding to the first uMGID.
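Blocks 52 and 53, together with the follow-up check just described, might be sketched as follows; the build_igmp_leave helper and the vSwitch attributes are assumptions.

```python
def on_c1_delete_from_controller(vswitch, c1_for_deletion):
    """Blocks 52-53 plus the follow-up check: delete the matching C1 entry;
    if no remaining C1 entry carries the same uMGID, leave the underlay group."""
    vswitch.c1_entries = [e for e in vswitch.c1_entries if e != c1_for_deletion]
    if not any(e.umgid == c1_for_deletion.umgid for e in vswitch.c1_entries):
        underlay_leave = build_igmp_leave(c1_for_deletion.umgid)  # hypothetical
        vswitch.send(vswitch.underlay_port, underlay_leave)
```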
According to various examples, the vSwitch may delete a C2 entry according to an instruction from the SDN controller. FIG. 6 is a flowchart illustrating a method of a vSwitch deleting a C2 entry in accordance with examples of the present disclosure. As shown in FIG. 6, the method 60 may include the following procedures.
At block 61, a C2 entry for deletion sent by the SDN controller may be received. The C2 entry for deletion may be created by the SDN controller in response to a multicast group leave packet sent by another vSwitch.
At block 62, the C2 entry for deletion may be searched for in stored C2 entries, and the C2 entry found may be deleted.
That is, after receiving a multicast group leave packet, the SDN controller may generate a relation for deletion which associates a tenant ID and an oMGID with an uMGID. The relation for deletion may be sent to the vSwitch that sent the multicast group leave packet in the form of a C1 entry for deletion, and may also be synchronized into other vSwitches in the form of a C2 entry for deletion.
The process of obtaining the C1 entry and the C2 entry may be described in the following with reference to a multicast group join process.
FIG. 7 is a flowchart illustrating a method of joining a multicast group in accordance with examples of the present disclosure. As shown in FIG. 7, the method 70 may include the following procedures.
At block 71, a VM in an overlay network, e.g., VM 11 in FIG. 1, may send a multicast group join packet which includes an identity of an overlay network multicast group which the VM wants to join, and the identity may be denoted as G1.
At block 72, a vSwitch, e.g., vSwitch 1 as shown in FIG. 1, may receive the multicast group join packet sent by VM 11, and send the multicast group join packet to the SDN controller.
At block 73, the SDN controller may receive the multicast group join packet sent by vSwitch 1, search for a relation which matches a tenant identity of a tenant to which VM 11 belongs and the oMGID in the multicast group join packet, perform the procedure in block 74 if the relation is not found, and perform the procedure in block 75 if the relation is found.
The SDN controller may manage VMs connected to each vSwitch, e.g., allocate a tenant ID, a VXLAN identity (VNI) or the like for each VM connected to each vSwitch. Therefore, in block 73, after receiving the multicast group join packet sent by vSwitch 1, the SDN controller may determine the tenant ID (e.g., virtual private cloud identity (VPCID) 1) of the tenant to which VM 11 belongs according to the VM ID in the multicast group join packet. Then the SDN controller may search in stored relations for a relation using the VPCID1 and G1 as keywords, perform the procedure in block 74 if the relation is not found, and perform the procedure in block 75 if the relation is found.
At block 74, the SDN controller may create a relation which includes the tenant ID and the oMGID.
In an example, the relation created in block 74 may include: VPCID1, G1, an uMGID (denoted as G1') corresponding to G1, and a counting value. The counting value may be set to be a pre-determined value (e.g., the pre-determined value may be 1 according to an example).
Table 1 shows an example of the relation created in block 74.
tenant ID | oMGID | uMGID | counting value
VPCID1    | G1    | G1'   | pre-determined value
Table 1
According to various examples, the tables illustrated herein are merely examples, and may include other information according to practical needs.
In various examples, there may be various applications running on the SDN controller. One of the applications, which is generally referred to as broadcast/multicast (BM) application, may implement centralized management of multicast groups in the overlay network and multicast groups in the underlay network. In the procedure of block 74, the SDN controller may allocate an uMGID to an oMGID in the multicast group join packet using the BM application.
The SDN controller may select an unoccupied uMGID from pre-configured uMGIDs and allocate the selected uMGID to the oMGID.
For example, oMGIDs pre-configured in the SDN controller may range from 1 to 100, and uMGIDs pre-configured in the SDN controller may range from 101 to 200. Supposing the oMGID in the multicast group join packet is 10, an unoccupied uMGID may be selected from the range 101 to 200, e.g., 110, and allocated to oMGID 10.
According to an example, the SDN controller may abide by the following rules when allocating the uMGID to the oMGID.
Rule 1: For multicast group join packets sent by VMs of different tenants, different uMGIDs may be allocated even if the packets include the same oMGID.
For example, the SDN controller may receive a multicast group join packet sent by VM 11 of tenant 1, and the packet includes multicast group 1 as an oMGID. The SDN controller may allocate multicast group 101 to the multicast group 1 of tenant 1. After that, the SDN controller may receive a multicast group join packet sent by VM 12 of tenant 2, and the packet includes multicast group 1 as an oMGID. The SDN controller may allocate a different uMGID to multicast group 1 in the multicast group join packet sent by VM 12, and may allocate multicast group 102 or the like to multicast group 1 of tenant 2. As such, it can be ensured that different uMGIDs are allocated for multicast group join packets sent by VMs from different tenants even if the multicast group join packets include the same oMGID.
Rule 2: The same uMGID may be allocated for multicast group join packets which include the same oMGID sent by different VMs of the same tenant even if the VMs have different VNIs.
For example, the SDN controller may receive a multicast group join packet sent by VM 1 of tenant 1, and the packet includes an oMGID of multicast group 1. Supposing the VNI of VM 1 is VNI 1, the SDN controller may allocate multicast group 101 as the uMGID for the multicast group 1. After that, the SDN controller may receive a multicast group join packet sent by VM 21 of tenant 1, where the packet includes an oMGID of multicast group 1 and the VNI of VM 21 is VNI 2. The SDN controller may again allocate multicast group 101 as the uMGID for multicast group 1 in the multicast group join packet sent by VM 21.
According to the above two rules, the SDN controller may allocate an uMGID to an oMGID in a multicast group join packet.
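The two rules amount to keying the allocation on the (tenant ID, oMGID) pair while ignoring the VNI. Below is a self-contained sketch under that assumption, with an illustrative pre-configured uMGID pool of 101 to 200; the release method anticipates the unoccupied-marking described later for the leave process.

```python
class UmgidAllocator:
    """BM-application-style allocation: one uMGID per (tenant ID, oMGID) pair,
    drawn from a pre-configured pool (illustratively 101 to 200)."""

    def __init__(self, pool=range(101, 201)):
        self.free = list(pool)
        self.allocated = {}  # (tenant_id, omgid) -> umgid

    def allocate(self, tenant_id, omgid, vni=None):
        key = (tenant_id, omgid)           # Rule 2: the VNI plays no role
        if key not in self.allocated:      # Rule 1: per-tenant separation
            self.allocated[key] = self.free.pop(0)
        return self.allocated[key]

    def release(self, umgid):
        """Mark the uMGID unoccupied so it may serve another oMGID."""
        for key, value in list(self.allocated.items()):
            if value == umgid:
                del self.allocated[key]
        self.free.append(umgid)

alloc = UmgidAllocator()
assert alloc.allocate("tenant 1", 1) == 101               # tenant 1, group 1
assert alloc.allocate("tenant 2", 1) == 102               # same oMGID, new tenant
assert alloc.allocate("tenant 1", 1, vni="VNI 2") == 101  # same tenant, any VNI
```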
At block 75, the counting value in the found relation is increased by 1.
The SDN controller may make use of stored relations instead of creating a new relation which is the same as a stored relation.
At block 76, subsequent to block 74 or 75, the SDN controller may create a C1 entry using VPCID 1, G1 and G1' in the relation, with the virtual port connecting vSwitch 1 to VM 11 serving as the egress port in the C1 entry, and send the C1 entry to vSwitch 1. For each vSwitch other than vSwitch 1, the SDN controller may create a C2 entry using VPCID 1, G1 and G1', with a port connecting the vSwitch to the underlay network serving as the egress port in the C2 entry, and send the C2 entry to the vSwitch.
The SDN controller may send the C1 entry to the vSwitch 1 and a C2 entry to each vSwitch other than vSwitch 1 as flow table entries.
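Blocks 73 to 76 on the controller side might then be sketched as follows; the relation store, the per-vSwitch handles and the install methods are assumptions, and MulticastEntry/UmgidAllocator are the sketches above.

```python
def controller_handle_join(ctrl, vswitch1, join_pkt):
    """Blocks 73-76: find or create the relation, then distribute a C1 entry
    to the requesting vSwitch and C2 entries to every other vSwitch."""
    tenant_id = ctrl.tenant_of_vm(join_pkt.vm_id)         # e.g. VPCID 1
    key = (tenant_id, join_pkt.omgid)                     # (VPCID 1, G1)
    rel = ctrl.relations.get(key)
    if rel is None:                                       # block 74: new relation
        umgid = ctrl.allocator.allocate(tenant_id, join_pkt.omgid)
        rel = ctrl.relations[key] = {"umgid": umgid, "count": 1}
    else:                                                 # block 75: reuse it
        rel["count"] += 1
    c1 = MulticastEntry(tenant_id, join_pkt.omgid, rel["umgid"],
                        vswitch1.vport_of_vm(join_pkt.vm_id))
    vswitch1.install_c1(c1)                               # flow table entry
    for sw in ctrl.vswitches:
        if sw is not vswitch1:
            sw.install_c2(MulticastEntry(tenant_id, join_pkt.omgid,
                                         rel["umgid"], sw.underlay_port))
```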
At block 77, subsequent to block 76, vSwitch 1 may receive and store the C1 entry sent by the SDN controller.
The C1 entry may include the information listed in block 76, e.g., an egress port, a tenant ID, an oMGID, and an uMGID. Table 2 is an example of a C1 entry.
egress port    | tenant ID | oMGID | uMGID
vport to VM 11 | VPCID1    | G1    | G1'
Table 2
At block 78, subsequent to block 76, a vSwitch other than vSwitch 1 may receive and store a C2 entry sent by the SDN controller, and set a counting value corresponding to the C2 entry. The counting value may be set to be a pre-determined value.
The C2 entry may include the information listed in block 76, e.g., a tenant ID, an oMGID, an uMGID and an egress port. Table 3 illustrates an example of a C2 entry.
tenant ID | oMGID | uMGID | egress port
VPCID1    | G1    | G1'   | port to the underlay network
Table 3
In an example, in order to avoid creating an entry which is the same as a stored entry, before storing the C2 entry in block 78, the method may also include the following (a sketch follows the list):
searching for a stored C2 entry which is the same as the received C2 entry;
increasing the counting value corresponding to the found C2 entry by 1 if the C2 entry is found; and
storing the received C2 entry and setting a counting value corresponding to the C2 entry if the C2 entry is not found. The initial value of the counting value may be a pre-determined value.
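A minimal sketch of that de-duplication, keeping the counting value in a dictionary keyed by the (hashable) C2 entry; the c2_counts attribute name is an assumption.

```python
def store_c2_entry(vswitch, c2_entry, predetermined_value=1):
    """Block 78 with de-duplication: refcount an identical stored C2 entry
    instead of storing a duplicate."""
    if c2_entry in vswitch.c2_counts:        # identical entry already stored
        vswitch.c2_counts[c2_entry] += 1
    else:                                    # first occurrence: store and set count
        vswitch.c2_counts[c2_entry] = predetermined_value
```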
The above are examples of how a vSwitch in an SDN may obtain a C1 entry and a C2 entry. After receiving multicast packets, each vSwitch may forward the multicast packets using respective C1 entries and C2 entries (block 79). The forwarding process may be as shown in FIG. 3, and is not repeated herein.
The process of deleting C1 entries and C2 entries may be described in the following with reference to a multicast group leave process.
FIG. 8 is a flowchart illustrating a method of leaving a multicast group in accordance with examples of the present disclosure. As shown in FIG. 8, the method 80 may include the following procedures.
At block 81, a VM in an overlay network, e.g., VM 21 in FIG. 1, may send a multicast group leave packet which includes an identity of an overlay network multicast group which the VM wants to leave, and the identity may be denoted as G2.
At block 82, vSwitch 2 connected with VM 21 may receive the multicast group leave packet and send the multicast group leave packet to the SDN controller. 
At block 83, the SDN controller may receive the multicast group leave packet sent by vSwitch 2, and search in stored relations for a relation which includes a tenant ID of the tenant of VM 21 (e.g., VPCID 2) and G2. The relation may also include an uMGID (denoted by G2') corresponding to G2.
At block 84, the SDN controller may create a C1 entry for deletion using VPCID 2, G2, G2’ and a vport connecting vSwitch 2 to VM 21 which serves as an egress port, and send the C1 entry for deletion to vSwitch 2. For each vSwitch other than vSwitch 2, the SDN controller may create a C2 entry for deletion using VPCID 2, G2, G2’ and a port connecting the vSwitch to the underlay network which serves as an egress port, and send the C2 entry for deletion to the vSwitch. The SDN controller may decrease a counting value in the relation by 1, judge whether the decreased counting value equals the pre-determined value, and delete the relation if the decreased counting value equals the pre-determined value.
According to an example, before deleting the relation at block 84, the SDN controller may mark the uMGID in the relation as unoccupied, i.e., making the uMGID available for other oMGID.
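The controller side of blocks 83 and 84 might be sketched as follows, reusing the relation store and allocator sketched earlier; the delete_c1/delete_c2 distribution calls are assumptions, and the counting-value comparison follows the text as written.

```python
def controller_handle_leave(ctrl, vswitch2, leave_pkt, predetermined_value=1):
    """Blocks 83-84: look up the relation, distribute entries for deletion,
    and maintain the relation's counting value."""
    tenant_id = ctrl.tenant_of_vm(leave_pkt.vm_id)        # e.g. VPCID 2
    key = (tenant_id, leave_pkt.omgid)                    # (VPCID 2, G2)
    rel = ctrl.relations[key]                             # holds G2' and a count
    c1 = MulticastEntry(tenant_id, leave_pkt.omgid, rel["umgid"],
                        vswitch2.vport_of_vm(leave_pkt.vm_id))
    vswitch2.delete_c1(c1)                                # C1 entry for deletion
    for sw in ctrl.vswitches:
        if sw is not vswitch2:
            sw.delete_c2(MulticastEntry(tenant_id, leave_pkt.omgid,
                                        rel["umgid"], sw.underlay_port))
    rel["count"] -= 1
    if rel["count"] == predetermined_value:               # rule as stated in block 84
        ctrl.allocator.release(rel["umgid"])              # mark uMGID unoccupied
        del ctrl.relations[key]
```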
At block 85, subsequent to block 84, vSwitch 2 may receive the C1 entry for deletion sent by the SDN controller, search for a C1 entry identical to the C1 entry for deletion, and delete the C1 entry found. After that, vSwitch 2 may check whether it stores a C1 entry which includes the uMGID in the C1 entry for deletion; vSwitch 2 may terminate the process if such a C1 entry is found, or create a multicast group leave packet which includes the uMGID in the C1 entry for deletion to leave the underlay network multicast group if no such C1 entry is stored.
At block 86, subsequent to block 84, a vSwitch other than vSwitch 2 may receive the C2 entry for deletion sent by the SDN controller and search for a C2 entry identical to the C2 entry for deletion. If a C2 entry identical to the C2 entry for deletion is found, the vSwitch may decrease a counting value corresponding to the found C2 entry by 1, and check whether the counting value equals the pre-determined value after 1 is deducted from the counting value. If the counting value equals the pre-determined value, the vSwitch may delete the C2 entry found. If the counting value does not equal the pre-determined value, the vSwitch may terminate the process.
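The vSwitch side of block 86, mirroring the counting logic of the store sketch above (the equality test follows the text as written; c2_counts is the assumed dictionary from earlier):

```python
def on_c2_delete_from_controller(vswitch, c2_for_deletion, predetermined_value=1):
    """Block 86: decrement the counting value of the matching C2 entry and
    delete the entry when the decremented count equals the pre-determined value."""
    if c2_for_deletion not in vswitch.c2_counts:
        return                                   # no identical entry stored
    vswitch.c2_counts[c2_for_deletion] -= 1
    if vswitch.c2_counts[c2_for_deletion] == predetermined_value:
        del vswitch.c2_counts[c2_for_deletion]   # count back at its initial value
    # otherwise terminate: the entry is still referenced
```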
Hence, the process as shown in FIG. 8 is completed.
FIG. 9 is a schematic diagram illustrating an apparatus 90 of forwarding multicast packets in accordance with examples of the present disclosure. As shown in FIG. 9, the apparatus 90 may include: a sending unit 91, a C1 entry unit 92, a C2 entry unit 93, and a packet unit 94.
The sending unit 91 may send a multicast group join packet from a VM to an SDN controller.
The C1 entry unit 92 may receive and store a C1 entry sent by the SDN controller in response to the multicast group join packet. The C1 entry may include: an egress port, a VNI of a VXLAN that the egress port belongs to, a tenant ID, an oMGID, and an uMGID. The C1 entry unit 92 may create a multicast group join packet corresponding to the uMGID to join a multicast group corresponding to the uMGID.
The C2 entry unit 93 may receive and store a C2 entry sent by the SDN controller in response to a multicast group join packet sent by another vSwitch, and set a counting value corresponding to the C2 entry. The counting value is set to be a pre-determined value. The C2 entry may include a tenant ID, an oMGID, an uMGID, and an egress port.
The packet unit 94 may receive a multicast packet from a VM, search in stored C1 entries and C2 entries for an entry for forwarding the multicast packet, and forward the multicast packet using an entry found. The packet unit 94 may also receive a multicast packet from the underlay network, search in the stored C1 entries for an entry for forwarding the multicast packet, and forward the multicast packet using an entry found.
In an example, before storing the C2 entry and setting the counting value corresponding to the C2 entry, the C2 entry unit 93 may search stored C2 entries for an entry identical to the received C2 entry, increase a counting value corresponding to the found entry by 1 if the entry identical to the received C2 entry is found, or perform the procedure of storing the C2 entry and setting the counting value corresponding to the C2 entry if the entry identical to the received C2 entry is not found.
In an example, the sending unit 91 may send a multicast group leave packet received from a VM to the SDN controller.
The C1 entry unit 92 may also receive a C1 entry for deletion sent by the SDN controller in response to a multicast group leave packet sent by the vSwitch, search in stored C1 entries for a C1 entry identical to the received C1 entry for deletion, and delete the C1 entry found. The C1 entry unit 92 may also check whether remaining C1 entries include a C1 entry which includes the uMGID in the received C1 entry for deletion.
If the remaining C1 entries include a C1 entry which includes the uMGID, the C1 entry unit 92 may terminate the process. If the remaining C1 entries do not include a C1 entry which includes the uMGID, the C1 entry unit 92 may create a multicast group leave packet which includes the uMGID in the C1 entry for deletion to leave the multicast group in the underlay network corresponding to the uMGID in the C1 entry for deletion.
According to examples, the C2 entry unit 93 may receive a C2 entry for deletion sent by the SDN controller in response to a multicast group leave packet sent by another vSwitch. The C2 entry unit 93 may search in stored C2 entries for a C2 entry identical to the received C2 entry for deletion. The C2 entry unit 93 may decrease a counting value corresponding to the found C2 entry by 1, and check whether the counting value equals the pre-determined value after 1 is deducted from the counting value. If the counting value equals the pre-determined value, the C2 entry unit 93 may delete the C2 entry found. If the counting value does not equal the pre-determined value, the C2 entry unit 93 may terminate the process.
According to some examples, the packet unit 94 may search in stored C1 entries and C2 entries for an entry for forwarding a multicast packet, and forward the multicast packet using the found entry. The process of forwarding the multicast packet may include the following procedures.
The packet unit 94 may search in the stored C1 entries for an entry which includes a tenant ID corresponding to the multicast packet and an oMGID in the multicast packet. If the C1 entry is found and an egress port in the C1 entry is not the port through which the multicast packet is received, the packet unit 94 may forward the multicast packet according to a VNI and the egress port in the C1 entry.
The packet unit 94 may search in stored C2 entries for a C2 entry which includes a tenant ID corresponding to the multicast packet and an oMGID in the multicast packet. If the C2 entry is found, the packet unit 94 may forward the multicast packet using an uMGID and an egress port in the C2 entry.
According to some examples, the packet unit 94 may search in the stored C1 entries for an entry which includes an uMGID in a multicast packet received from the underlay network, and forward the multicast packet using an oMGID, a VNI and an egress port in the found C1 entry if the C1 entry is found.
It should be understood that in the above processes and structures, not all of the procedures and modules are necessary. Certain procedures or modules may be omitted in some implementations according to the needs of the user. The order of the procedures is not fixed and can be adjusted. The modules are defined merely by function to facilitate description. In implementation, a module may be implemented by multiple modules, and the functions of multiple modules may be implemented by the same module. The modules may reside in the same device or be distributed across different devices. The terms “first” and “second” in the above descriptions merely distinguish two similar objects and carry no substantive meaning.
The scope of the claims should not be limited by the embodiments set forth in the examples, but should be given the broadest interpretation consistent with the description as a whole.

Claims (14)

  1. A method of forwarding a multicast packet, comprising:
    after receiving a first multicast packet from an underlay network, determining, by a virtual switch (vSwitch), a first virtual machine (VM) corresponding to a first underlay network multicast group identity (uMGID) and a first overlay network multicast group identity (oMGID) in the first multicast packet by using a category-1 entry (C1 entry), performing VXLAN decapsulation on the first multicast packet and sending the decapsulated first multicast packet to the first VM;
    after receiving a second multicast packet from a second VM, determining a second uMGID corresponding to a tenant ID of a tenant to which the second VM belongs and a second oMGID in the second multicast packet by using a category-2 entry (C2 entry), performing VXLAN encapsulation on the second multicast packet using the second uMGID, and forwarding the encapsulated second multicast packet to the underlay network.
  2. The method of claim 1, wherein the C1 entry comprises: a tenant ID, an oMGID, an uMGID and an egress port;
    wherein determining the first VM by looking up the C1 entry comprises: searching for a C1 entry which matches the first uMGID and the first oMGID, and determining the first VM using the egress port in the C1 entry.
  3. The method of claim 1, wherein the C2 entry comprises: a tenant ID, an oMGID, an uMGID and an egress port;
    wherein determining the second uMGID by using the C2 entry comprises: searching for a C2 entry which matches the tenant ID of the tenant to which the second VM belongs and the second oMGID, determining the uMGID in the C2 entry as the second uMGID, performing VXLAN encapsulation on the second multicast packet, and forwarding the encapsulated second multicast packet to the underlay network via the egress port in the C2 entry.
  4. The method of claim 2, further comprising: obtaining the C1 entry; wherein obtaining the C1 entry comprises:
    receiving a first multicast group join packet from the first VM, and providing the first multicast group join packet to an SDN controller which searches for the first uMGID corresponding to a tenant ID of a tenant to which the first VM belongs and the first oMGID in the first multicast group join packet and generates a C1 entry; and
    receiving and storing the C1 entry sent by the SDN controller.
  5. The method of claim 3, further comprising: obtaining the C2 entry; wherein obtaining the C2 entry comprises:
    receiving the C2 entry sent by the SDN controller;
    wherein the C2 entry is created by the SDN controller in response to a multicast group join packet sent by another vSwitch.
  6. The method of claim 2, further comprising: deleting the C1 entry; wherein deleting the C1 entry comprises:
    receiving a first multicast group leave packet from the first VM, and providing the first multicast group leave packet to an SDN controller which searches for the first uMGID corresponding to a tenant ID of a tenant to which the first VM belongs and the first oMGID in the first multicast group leave packet and generates a C1 entry for deletion; and
    receiving the C1 entry for deletion sent by the SDN controller; and
    searching in stored C1 entries for a C1 entry identical to the C1 entry for deletion, and deleting the C1 entry found.
  7. The method of claim 3, further comprising: deleting the C2 entry; wherein deleting the C2 entry comprises:
    receiving a C2 entry for deletion sent by the SDN controller;
    searching in stored C2 entries for a C2 entry identical to the C2 entry for deletion, and deleting the C2 entry found;
    wherein the C2 entry for deletion is created by the SDN controller in response to a multicast group leave packet sent by another vSwitch.
  8. A device, comprising a processor and a non-transitory storage medium which stores logic instructions for forwarding a multicast packet, wherein the logic instructions are executable by the processor to:
    after receiving a first multicast packet from an underlay network, determine a first virtual machine (VM) corresponding to a first underlay network multicast group identity (uMGID) and a first overlay network multicast group identity (oMGID) in the first multicast packet by using a category-1 entry (C1 entry), perform VXLAN decapsulation on the first multicast packet and send the decapsulated first multicast packet to the first VM;
    after receiving a second multicast packet from a second VM, determine a second uMGID corresponding to a tenant ID of a tenant to which the second VM belongs and a second oMGID in the second multicast packet by using a category-2 entry (C2 entry), perform VXLAN encapsulation on the second multicast packet using the second uMGID, and forward the encapsulated second multicast packet to the underlay network.
  9. The device of claim 8, wherein
    the C1 entry comprises: a tenant ID, an oMGID, an uMGID, and an egress port;
    the logic instructions are executable by the processor to:
    search for a C1 entry which matches the first uMGID and the first oMGID, and determine the first VM using the egress port in the C1 entry found.
  10. The device of claim 8, wherein
    the C2 entry comprises: a tenant ID, an oMGID, an uMGID, and an egress port;
    the logic instructions are executable by the processor to:
    search for a C2 entry which matches the tenant ID of the tenant to which the second VM belongs and the second oMGID, determine the uMGID in the C2 entry found as the second uMGID, and forward the VXLAN-encapsulated second multicast packet to the underlay network through the egress port in the C2 entry.
  11. The device of claim 9, wherein the logic instructions are executable by the processor to obtain the C1 entry by:
    receiving a first multicast group join packet from the first VM, and providing the first multicast group join packet to an SDN controller which searches for the first uMGID corresponding to a tenant ID of a tenant to which the first VM belongs and the first oMGID in the first multicast group join packet and generates a C1 entry; and
    receiving and storing the C1 entry sent by the SDN controller.
  12. The device of claim 10, wherein the logic instructions are executable by the processor to obtain the C2 entry by:
    receiving the C2 entry sent by the SDN controller;
    wherein the C2 entry is created by the SDN controller in response to a multicast group join packet sent by a second vSwitch.
  13. The device of claim 9, wherein the logic instructions are executable by the processor to delete the C1 entry by:
    receiving a first multicast group leave packet from the first VM, and providing the first multicast group leave packet to an SDN controller which searches for the first uMGID corresponding to a tenant ID of a tenant to which the first VM belongs and the first oMGID in the first multicast group leave packet and generates a C1 entry for deletion;
    receiving the C1 entry for deletion sent by the SDN controller; and
    searching in stored C1 entries for a C1 entry identical to the C1 entry for deletion, and deleting the C1 entry found.
  14. The device of claim 10, wherein the logic instructions are executable by the processor to delete the C2 entry by:
    receiving a C2 entry for deletion sent by the SDN controller;
    wherein the C2 entry for deletion is created by the SDN controller in response to a multicast group leave packet sent by a third vSwitch.
PCT/CN2016/077480 2015-03-27 2016-03-28 Forwarding multicast packets WO2016155589A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510137621.9A CN106161259B (en) 2015-03-27 2015-03-27 The multicast data packet forwarding method and apparatus of virtual extended local area network VXLAN
CN201510137621.9 2015-03-27

Publications (1)

Publication Number Publication Date
WO2016155589A1 true WO2016155589A1 (en) 2016-10-06

Family

ID=57003870

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/077480 WO2016155589A1 (en) 2015-03-27 2016-03-28 Forwarding multicast packets

Country Status (2)

Country Link
CN (1) CN106161259B (en)
WO (1) WO2016155589A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106789529B (en) * 2016-12-16 2020-04-14 平安科技(深圳)有限公司 Method and terminal for implementing OVERLAY network
CN108234230B (en) * 2016-12-21 2019-10-18 中国移动通信有限公司研究院 A kind of path following method, apparatus and system
CN108512671A (en) * 2017-02-24 2018-09-07 华为技术有限公司 A kind of outer layer multicast ip address distribution method and device
CN108494691B (en) * 2018-06-22 2021-02-26 新华三技术有限公司 Multicast forwarding method and device and tunnel endpoint equipment
CN109167731B (en) * 2018-08-30 2021-06-08 新华三技术有限公司 Message sending method and device
CN112995005B (en) * 2019-12-17 2022-02-25 北京百度网讯科技有限公司 Virtual network data exchange method and device
CN113507425B (en) * 2021-06-22 2023-11-07 新华三大数据技术有限公司 Overlay multicast method, device and equipment
CN115242708B (en) * 2022-07-21 2023-10-20 迈普通信技术股份有限公司 Multicast table item processing method and device, electronic equipment and storage medium
CN115665070A (en) * 2022-10-17 2023-01-31 浪潮思科网络科技有限公司 Message sending method, device, equipment and medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101115006B (en) * 2007-08-21 2010-08-25 杭州华三通信技术有限公司 Three-layer packet forwarding method and routing device and two-layer switch module
CN102970227B (en) * 2012-11-12 2016-03-02 盛科网络(苏州)有限公司 The method and apparatus of VXLAN message repeating is realized in ASIC
US9325636B2 (en) * 2013-06-14 2016-04-26 Cisco Technology, Inc. Scaling interconnected IP fabric data centers
CN104468394B (en) * 2014-12-04 2018-02-09 新华三技术有限公司 Message forwarding method and device in a kind of VXLAN networks

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140016501A1 (en) * 2012-07-16 2014-01-16 International Business Machines Corporation Flow based overlay network
US20140092907A1 (en) * 2012-08-14 2014-04-03 Vmware, Inc. Method and system for virtual and physical network integration
CN103684966A (en) * 2013-12-10 2014-03-26 华为技术有限公司 Method and device for processing dynamic host configuration protocol messages
CN104350714A (en) * 2014-05-29 2015-02-11 华为技术有限公司 Packet forwarding method and VxLAN gateway

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10965497B1 (en) 2019-10-10 2021-03-30 Metaswitch Networks Ltd. Processing traffic in a virtualised environment
GB2588161A (en) * 2019-10-10 2021-04-21 Metaswitch Networks Ltd Processing traffic in a virtualised environment
GB2588161B (en) * 2019-10-10 2021-12-22 Metaswitch Networks Ltd Processing traffic in a virtualised environment

Also Published As

Publication number Publication date
CN106161259B (en) 2019-02-12
CN106161259A (en) 2016-11-23

Similar Documents

Publication Publication Date Title
WO2016155589A1 (en) Forwarding multicast packets
US10003571B2 (en) Method and apparatus for implementing communication between virtual machines
US11190435B2 (en) Control apparatus, communication system, tunnel endpoint control method, and program
US9871721B2 (en) Multicasting a data message in a multi-site network
US10305801B2 (en) Forwarding data packets
US10205657B2 (en) Packet forwarding in data center network
US9231863B2 (en) Systems and methods for a data center architecture facilitating layer 2 over layer 3 communication
DE102012220834B4 (en) Method and device for implementing a flexible virtual local network
US9973419B2 (en) Routing management method, routing method, network controller, and router
US10367717B2 (en) Processing a flow entry in VXLAN
WO2016041521A1 (en) Migration of virtual machines
EP3313025A2 (en) Data packet forwarding
AU2014399458A1 (en) Flow Entry Configuration Method, Apparatus, and System
US10313154B2 (en) Packet forwarding
CN107645431B (en) Message forwarding method and device
US9413635B2 (en) Method for controlling generation of routing information, method for generating routing information and apparatuses thereof
CN108259304B (en) Forwarding table item synchronization method and device
WO2016177316A1 (en) Multicast data packet forwarding
WO2015135499A1 (en) Network virtualization
WO2016115698A1 (en) Data packet forwarding method, apparatus and device
US10313275B2 (en) Packet forwarding
EP3086512B1 (en) Implementation method and apparatus for vlan to access vf network and fcf
US10313274B2 (en) Packet forwarding
WO2018010519A1 (en) Method and apparatus for establishing multicast tunnel
CN110401726B (en) Method, device and equipment for processing address resolution protocol message and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16771344

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16771344

Country of ref document: EP

Kind code of ref document: A1