WO2018010626A1 - Cloud data multicast method, system and computer device - Google Patents

Cloud data multicast method, system and computer device

Info

Publication number
WO2018010626A1
Authority
WO
WIPO (PCT)
Prior art keywords
multicast
address
routing tree
filtering
list
Prior art date
Application number
PCT/CN2017/092432
Other languages
English (en)
French (fr)
Inventor
张怡文
陈康
刘尧甫
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority to EP17826962.7A (EP3487131B1)
Publication of WO2018010626A1
Priority to US16/240,252 (US10958723B2)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/06 Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/104 Peer-to-peer [P2P] networks
    • H04L 67/1044 Group management mechanisms
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/02 Details
    • H04L 12/16 Arrangements for providing special services to substations
    • H04L 12/18 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L 12/185 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast, with management of multicast group membership
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/46 Interconnection of networks
    • H04L 12/4633 Interconnection of networks using encapsulation techniques, e.g. tunneling
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/46 Interconnection of networks
    • H04L 12/4641 Virtual LANs, VLANs, e.g. virtual private networks [VPN]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/02 Topology update or discovery
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/16 Multipoint routing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/48 Routing tree calculation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/70 Admission control; Resource allocation
    • H04L 47/80 Actions related to the user profile or the type of traffic
    • H04L 47/806 Broadcast or multicast traffic
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/20 Support for services
    • H04L 49/201 Multicast operation; Broadcast operation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 61/00 Network arrangements, protocols or services for addressing or naming
    • H04L 61/50 Address allocation
    • H04L 61/5007 Internet protocol [IP] addresses
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 61/00 Network arrangements, protocols or services for addressing or naming
    • H04L 61/50 Address allocation
    • H04L 61/5069 Address allocation for group communication, multicast communication or broadcast communication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/02 Details
    • H04L 12/16 Arrangements for providing special services to substations
    • H04L 12/18 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L 12/1863 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast, comprising mechanisms for improved reliability, e.g. status reports
    • H04L 12/1877 Measures taken prior to transmission

Definitions

  • the present application relates to the field of cloud computing technologies, and in particular, to a cloud data multicast method, system, and computer device.
  • Multicast technology is one of the ways of data transmission in IP networks.
  • the multicast technology effectively solves the problem of single-point transmission and multi-point reception, and realizes efficient data transmission from single point to multiple points in the IP network.
  • In cloud computing, multiple tenants can share a single data center through tenant isolation.
  • In traditional multicast technology, however, there is no concept of tenant isolation: any device in the network can join any multicast group and receive data, so the security of the data cannot be effectively guaranteed. How to implement secure and effective multicast in the cloud has become a technical problem that needs to be solved.
  • a cloud data multicast method, system, and computer device are provided.
  • a cloud data multicast method comprising:
  • the multicast packet carries a tenant identifier, a destination address, and a source address.
  • the multicast packet is encapsulated, and the encapsulated multicast packet is delivered to the multicast member that needs the multicast packet according to the member address list and the routing tree.
  • a cloud data multicast system comprising:
  • the multicast gateway cluster is configured to obtain a multicast packet in the cloud, where the multicast packet carries a tenant identifier, a destination address, and a source address, and to search for a corresponding multicast group according to the tenant identifier and the destination address, where the multicast group includes multiple multicast members;
  • a central controller configured to calculate a route corresponding to each multicast member, and generate a routing tree according to the routes corresponding to multiple multicast members;
  • the multicast gateway cluster is further configured to obtain a member address corresponding to each multicast member, perform address filtering according to the source address and the member addresses to obtain a list of member addresses that need the multicast packet, encapsulate the multicast packet, and send the encapsulated multicast packet to the hosts corresponding to the member addresses according to the member address list and the routing tree;
  • the host device is configured to receive the encapsulated multicast packet, and deliver the encapsulated multicast packet to the multicast member that needs the multicast packet.
  • a computer device comprising a memory and a processor, the memory storing computer readable instructions, the instructions being executed by the processor, causing the processor to perform the following steps:
  • the multicast packet carries a tenant identifier, a destination address, and a source address;
  • Encapsulating the multicast packet, and delivering the encapsulated multicast packet to the multicast member that needs the multicast packet according to the member address list and the routing tree.
  • FIG. 1 is a hardware environment diagram of a cloud data multicast method in an embodiment
  • FIG. 2 is a block diagram of the cloud in FIG. 1;
  • FIG. 3 is a flowchart of a cloud data multicast method in an embodiment
  • Figure 5 is a block diagram of a data plane in an embodiment
  • FIG. 6 is a data structure diagram of a multicast group routing tree in an embodiment
  • FIG. 7 is a data structure diagram of a host routing tree in an embodiment
  • Figure 8 is a block diagram of a computer device in one embodiment
  • Figure 9 is a block diagram of a computer device in another embodiment.
  • Figure 10 is a block diagram of a cloud data multicast system in one embodiment.
  • Terms such as "first" and "second" may be used to describe various elements, but these elements are not limited by these terms; the terms are only used to distinguish one element from another.
  • For example, a first client may be referred to as a second client, and similarly, a second client may be referred to as a first client. Both the first client and the second client are clients, but they are not the same client.
  • the cloud data multicast method provided in the embodiment of the present application can be applied to the hardware environment shown in FIG. 1.
  • the terminal 102 is connected to the cloud 104 through a network, and the cloud 104 is connected to the host 106 through the network.
  • the tenant creates a multicast application on the cloud 104 via the terminal 102, and uploads the source data to the cloud 104 through the multicast application.
  • the cloud 104 acquires source data and performs multicast in the form of a multicast packet.
  • the multicast packet carries the tenant ID, destination address, and source address.
  • the cloud 104 searches for a corresponding multicast group according to the tenant ID and the destination address, and calculates a routing tree corresponding to multiple multicast members in the multicast group.
  • the cloud 104 obtains the member addresses corresponding to the multiple multicast members, performs address filtering according to the source address and the member addresses to obtain a list of member addresses that require the multicast packet, encapsulates the multicast packet, and sends the encapsulated multicast packet to the hosts 106 corresponding to the member addresses according to the member address list and the routing tree.
  • the host 106 receives the encapsulated multicast packet and delivers the encapsulated multicast packet to the multicast member that needs the multicast packet. This enables secure and efficient data multicasting in the cloud.
  • the cloud can be implemented by a standalone server or a server cluster composed of multiple servers.
  • the cloud is a standalone server, and its block diagram is shown in FIG. 2.
  • the server includes a processor, a non-volatile storage medium, an internal memory, and a network interface that are coupled through a system bus.
  • the cloud non-volatile storage medium stores an operating system and computer readable instructions that are executed by the processor to implement a cloud data multicasting method.
  • the cloud's processor is used to provide computing and control capabilities to support the entire cloud.
  • the cloud's internal memory provides an environment for the operation of operating systems and computer readable instructions in a non-volatile storage medium.
  • the network interface of the cloud is used for communicating with an external terminal through a network connection, for example, receiving source data sent by the terminal and delivering multicast messages by the multicast member.
  • a cloud data multicasting method is provided.
  • the method is applied to the cloud as an example, and specifically includes:
  • Step 302 Obtain a multicast packet in the cloud, where the multicast packet carries a tenant identifier, a destination address, and a source address.
  • the cloud includes public clouds and private clouds.
  • a public cloud is usually a cloud that a third-party provider makes available to users.
  • a private cloud refers to a cloud built for a single tenant.
  • the tenant uses the multicast application running in the terminal to upload the source data to the data center of the cloud.
  • the cloud multicasts the source data in the form of multicast packets.
  • Multicast packets can be encapsulated in Overlay mode.
  • the Overlay network extends the physical network into the cloud and virtualization layers, so that the resource utilization of the cloud is decoupled from the various limitations of the physical network.
  • the multicast packet carries the tenant ID, destination address, and source address.
  • the cloud can determine the tenant to which the multicast packet belongs according to the tenant ID, so that the corresponding VPC (Virtual Private Cloud, a network logical isolation partition service) can be determined.
  • using the Overlay processing logic, the cloud can effectively isolate the IP (Internet Protocol) address spaces of different tenants. This can effectively improve the security of multicast data.
  • Step 304 Search for a corresponding multicast group according to the tenant ID and the destination address, where the multicast group includes multiple multicast members.
  • the cloud can find the corresponding multicast group based on the tenant ID and destination address.
  • a multicast group consists of multiple multicast members.
  • a multicast member can be a virtual machine on a cloud virtualization platform.
  • the virtual machine runs on the host machine.
  • the virtual machines running on the host can be distributed, and different virtual machines can join different multicast groups.
  • the virtual machine receives multicast packets through the host.
  • the virtual machine (ie, multicast member) at this time may also be referred to as a data receiver.
  • Multicast applications include multimedia applications, data warehousing, and financial applications. Taking stock data push in financial applications as an example: after receiving the stock data from the stock center, the cloud delivers the stock data to the multicast members through multicast.
  • Multicast members can also act as data senders.
  • the multicast member uses the multicast application to upload the source data to be sent to the data center of the cloud.
  • the cloud delivers the source data to other multicast members in the multicast group in a multicast manner as described above.
  • the data sender can also be a non-multicast member.
  • the non-multicast member uses the multicast application to upload the source data to be sent to the data center of the cloud.
  • Step 306 Obtain a route corresponding to each multicast member, and generate a routing tree according to the routes corresponding to the multiple multicast members.
  • the central controller of the cloud calculates the corresponding route according to the report message fed back by the multicast member.
  • the route corresponding to multiple multicast members on the same host can generate the host route tree.
  • Different hosts can generate different host routing trees.
  • virtual machines running on multiple hosts can join the same multicast group. Therefore, the multiple host routing trees corresponding to the multiple hosts in the same multicast group can form a multicast group routing tree.
  • the routing tree calculated by the central controller includes a host routing tree and a multicast group routing tree.
  • the central controller needs to obtain the change information of the multicast members in real time, so that the host routing tree corresponding to the multicast members can be recalculated, and the multicast group routing tree updated, as the membership changes.
  • Step 308 Obtain a member address corresponding to the multicast member, perform address filtering according to the source address and the member address, and obtain a list of member addresses that require multicast packets.
  • When a multicast member joins a multicast group, it sends a corresponding report message.
  • the header of the report message includes the host IP address, and the packet body carries the source addresses specified for filtering. Since the multicast member is a virtual machine running on the host, the host IP address can be regarded as the member address corresponding to the multicast member.
  • the source address filtering mode and source address corresponding to the multicast member can be determined according to the source address of the specified filtering carried in the packet header. Among them, source address filtering methods include INCLUDE, EXCLUDE, and GENERAL.
  • the INCLUDE indicates that only multicast packets sent from the specified multicast source to the multicast group are received.
  • EXCLUDE indicates that multicast packets sent from the specified multicast sources are not received; packets from all other sources are accepted.
  • GENERAL represents a regular query for querying joined multicast members and/or outgoing multicast members in a multicast group.
  • INCLUDE can be called the first filtering method
  • EXCLUDE can be called the second filtering method
  • GENERAL can be called the third filtering method.
  • the multicast gateway cluster in the cloud filters the multicast members based on the source address, member addresses, and source address filtering modes to obtain a list of member addresses that require the multicast packet.
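The filtering step above can be sketched as follows. This is a minimal illustrative sketch, not code from the patent; the names `FilterMode`, `MulticastMember`, and `filter_member_addresses` are assumptions.

```python
from dataclasses import dataclass
from enum import Enum

class FilterMode(Enum):
    INCLUDE = 1   # accept only the listed sources (first filtering method)
    EXCLUDE = 2   # reject the listed sources, accept all others (second method)
    GENERAL = 3   # general query record; no per-source restriction (third method)

@dataclass
class MulticastMember:
    host_ip: str          # member address: the IP of the host the VM runs on
    mode: FilterMode      # source-address filtering mode from the report message
    sources: frozenset    # source addresses named in the report message

def filter_member_addresses(src_addr, members):
    """Return the member address list of hosts that need a packet from src_addr."""
    wanted = []
    for m in members:
        if m.mode is FilterMode.INCLUDE and src_addr not in m.sources:
            continue  # member only accepts the explicitly listed sources
        if m.mode is FilterMode.EXCLUDE and src_addr in m.sources:
            continue  # member rejects the explicitly listed sources
        wanted.append(m.host_ip)
    return wanted
```

The returned list is what the text calls the member address list: the hosts to which the gateway cluster must forward copies of this packet.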
  • Step 310 Encapsulate the multicast packet, and deliver the encapsulated multicast packet to the multicast member that needs the multicast packet according to the member address list and the routing tree.
  • the multicast gateway cluster supports Layer 2 multicast forwarding for multicast packets and encapsulates the multicast packets in the Overlay format.
  • the Overlay format includes the GRE (Generic Routing Encapsulation) encapsulation format, the VXLAN (Virtual Extensible Local Area Network) encapsulation format, the NVGRE (Network Virtualization using Generic Routing Encapsulation) encapsulation format, and so on.
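As a toy illustration of Overlay encapsulation, the sketch below wraps the inner multicast packet in an outer header that carries the tenant identifier, loosely modelled on a GRE key / VXLAN VNI. The field layout is an assumption made for illustration, not the patent's (or any standard's) actual wire format.

```python
import struct

def encapsulate(inner_packet: bytes, tenant_id: int,
                outer_src: bytes, outer_dst: bytes) -> bytes:
    # Outer header: 4-byte tenant id (akin to a GRE key or VXLAN VNI),
    # then 4-byte outer source and destination addresses, then the inner packet.
    return struct.pack("!I4s4s", tenant_id, outer_src, outer_dst) + inner_packet

def decapsulate(frame: bytes):
    # Strip the 12-byte outer header and recover the tenant id and inner packet.
    tenant_id, outer_src, outer_dst = struct.unpack("!I4s4s", frame[:12])
    return tenant_id, outer_src, outer_dst, frame[12:]
```

Because the tenant identifier travels in the outer header, the receiving host can demultiplex the inner packet into the right tenant's VPC before handing it to any virtual machine.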
  • the multicast gateway cluster obtains the number of hosts in the multicast group based on the member address list, and determines the number of copies that the multicast packet needs to be copied according to the number of hosts.
  • the multicast gateway cluster copies the multicast packet according to the determined number of copies, and delivers the encapsulated multicast packets to the hosts corresponding to the multicast members that need the multicast packet according to the member address list and the routing tree.
  • the host receives the encapsulated multicast packet and parses the encapsulated multicast packet to obtain the parsed multicast packet.
  • the host passes the parsed multicast packet to the virtual machines in the multicast group running on the host.
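The replicate-and-deliver step can be sketched as below: the gateway derives the copy count from the distinct hosts in the member address list, and each host then hands the parsed packet to its local multicast-member VMs. All names are illustrative assumptions, and real decapsulation is stubbed out.

```python
def replicate_for_hosts(packet: bytes, member_addresses):
    """One copy of the encapsulated packet per distinct host in the member list."""
    hosts = sorted(set(member_addresses))   # number of hosts = number of copies
    return {host: packet for host in hosts}

def deliver_on_host(packet: bytes, local_members):
    """On the host: parse the encapsulated packet and pass it to each local VM."""
    parsed = packet  # stand-in for real Overlay decapsulation
    return {vm: parsed for vm in local_members}
```

Deduplicating by host first keeps the copy count equal to the number of hosts, not the number of member VMs, which matches the text's "number of hosts determines the number of copies".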
  • the multicast packet in the cloud carries the tenant ID, the destination address, and the source address, so that the corresponding multicast group can be accurately found according to the tenant ID and the destination address.
  • the IP address space of different tenants is effectively isolated.
  • address filtering according to the source address and the member addresses yields the member address list for the multicast packet.
  • a routing tree is generated from the routes calculated for the multiple multicast members in the multicast group, and the multicast packet is encapsulated, so that the encapsulated multicast packet forwarded according to the member address list and the routing tree can be accurately delivered to the multicast members that need it, ensuring the accuracy of multicast data delivery. This enables secure and efficient data multicasting in the cloud.
  • the step of calculating a route corresponding to the multicast member and generating a routing tree according to the routes corresponding to the multiple multicast members includes: obtaining report messages returned by the multiple multicast members; calculating the route corresponding to each multicast member according to its report message; generating a routing tree according to the routes corresponding to the multiple multicast members; obtaining the topology of the routing tree used by the data plane; and writing the routing tree corresponding to the multiple multicast members into the routing-tree data structure used by the data plane according to that topology.
  • traditional IP multicast technology controls and manages the operation of network protocols through the control plane, and uses the data plane to process and forward data on different interfaces.
  • when the control plane and the data plane are not separated and run on the same device, the complexity of the single device increases, reliability decreases, and operation, maintenance, and troubleshooting become more difficult.
  • the cloud completely separates the control plane and the data plane, and the two run on different devices.
  • the control plane runs on the central controller, and the data plane runs on the multicast gateway cluster.
  • the architecture diagram of the control plane is shown in Figure 4.
  • the virtual machine and multicast agent are running on the host.
  • the central controller establishes communication with the multicast proxy over the network.
  • the multicast gateway agent and kernel are running in the multicast gateway cluster.
  • the multicast gateway cluster is responsible for data forwarding of the data plane. Specifically, the multicast gateway cluster can implement data forwarding of the data plane through custom Overlay processing logic in the general server linux kernel.
  • the central controller establishes communication with the multicast gateway agent through the network.
  • the multicast agent follows the IGMP (Internet Group Management Protocol), periodically sending general queries to all virtual machines on the host; the virtual machines that have joined a multicast group respond with report messages, which the multicast agent collects.
  • the application scope of the IGMP protocol does not extend beyond the host, and the multicast agent obtains the report messages before the host's routes are calculated.
  • the distributed multicast agent sends the report messages corresponding to the multicast members on the host to the central controller; the central controller can calculate the route corresponding to each multicast member according to the report messages, and then generate a host routing tree and a multicast group routing tree according to the routes of the multiple multicast members.
  • the multicast gateway agent running on the multicast gateway cluster reads the host routing tree and the multicast group routing tree, and writes them into the routing-tree data structure used by the data plane according to the topology of the routing tree used by the data plane. Since a virtual machine running on the host can join a multicast group as a multicast member at any time, a multicast member can also leave the multicast group at any time.
  • the multicast proxy sends the change information of the multicast member to the central controller in real time.
  • the central controller updates and calculates the host routing tree and the multicast group routing tree according to the change information of the multicast member.
  • the multicast gateway agent running on the multicast gateway cluster reads the updated host routing tree and the updated multicast group routing tree, and writes them into the routing-tree data structure used by the data plane according to the topology of the routing tree used by the data plane, thereby completing a routing update.
  • the host routing tree and the multicast group routing tree are updated in real time as multicast members change, so the multicast gateway cluster can accurately forward multicast packets.
  • communication between the multicast agent and the central controller can also be established through the middleware.
  • Communication between the central controller and the multicast gateway agent can also be established through the middleware.
  • the multicast gateway cluster can also use a dedicated network development kit to implement data forwarding in the data plane.
  • the middleware between the multicast proxy and the central controller is the message queue
  • the middleware between the central controller and the multicast gateway agent is Zookeeper (a distributed, open-source distributed application coordination service). The following supplements the description of the routing update process.
  • the distributed multicast agent obtains report messages replied by multiple virtual machines, and can learn the changes of multiple multicast members on the host according to the report messages.
  • the distributed multicast proxy sends the change information of the multicast members on the host to the message queue middleware.
  • the independently deployed central controller reads the change information of the multicast members from the message-queue middleware, calculates the updated host routing tree and the updated multicast group routing tree according to the change information, and writes the updated host routing tree and the updated multicast group routing tree to the Zookeeper middleware.
  • the multicast gateway agent reads the updated host routing tree and the updated multicast group routing tree from Zookeeper, and writes them into the routing-tree data structure used by the data plane according to the topology of the routing tree used by the data plane, thereby completing a routing update.
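Assuming a plain queue and dictionary can stand in for the message-queue middleware and Zookeeper, the update pipeline above might be sketched as follows; all names are illustrative, not from the patent or from any real middleware API.

```python
from collections import deque

message_queue = deque()   # stands in for the message-queue middleware
zk_store = {}             # stands in for the Zookeeper middleware
data_plane_routes = {}    # routing-tree data structure used by the data plane

def agent_report_change(host_ip, group, joined):
    # Multicast agent: publish a member join/leave to the message queue.
    message_queue.append({"host": host_ip, "group": group, "joined": joined})

def controller_process_changes(routing_tree):
    # Central controller: consume changes, recompute the tree, write to "Zookeeper".
    while message_queue:
        change = message_queue.popleft()
        members = routing_tree.setdefault(change["group"], set())
        (members.add if change["joined"] else members.discard)(change["host"])
    zk_store["routing_tree"] = {g: sorted(m) for g, m in routing_tree.items()}

def gateway_agent_sync():
    # Multicast gateway agent: pull the updated tree into the data plane.
    data_plane_routes.clear()
    data_plane_routes.update(zk_store.get("routing_tree", {}))
```

One pass of `agent_report_change` → `controller_process_changes` → `gateway_agent_sync` corresponds to what the text calls "completing a routing update".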
  • the architecture of the data plane is shown in Figure 5.
  • the data receiving device 502 is configured to receive a multicast packet uploaded by the tenant through the multicast application.
  • the multicast gateway cluster 504 is configured to copy and forward the multicast packet of the data plane to the host 506 corresponding to the multicast member that needs the multicast packet.
  • the multicast gateway cluster 504 can use multiple general-purpose servers to implement multicast packet forwarding.
  • a data center uses a group of multicast gateways (MCGWs, which are devices that copy and forward multicast data packets).
  • the multicast gateway cluster includes multiple forwarding devices.
  • the forwarding device can use a general-purpose server. The forwarding devices are fully equivalent peers in the forwarding logic described above, and each multicast packet obtains the same processing result on any one of the forwarding devices.
  • the forwarding device uses a tree-shaped routing data structure when forwarding. Specifically, the forwarding device may forward the multicast data by using the multicast group routing tree.
  • control plane and the data plane run on different devices and are completely separated. This reduces the complexity of the single-point device, improves its reliability, and reduces the difficulty of operation and maintenance and troubleshooting.
  • the routing tree includes a host routing tree and a multicast group routing tree. After the step of calculating a route corresponding to each multicast member and generating a routing tree according to the routes corresponding to the multiple multicast members, the method further includes: traversing the nodes of the host routing tree to obtain the incremental change messages of the host routing tree; and updating the multicast group routing tree according to the incremental change messages of the host routing tree.
  • the central controller calculates the route of the multicast member through one or more processes.
  • the route change corresponding to the same tenant ID is calculated by the same process.
  • a process can process routes corresponding to multiple tenant IDs.
  • a route corresponding to multiple multicast members can be represented by a topology diagram of a tree data structure. Routes corresponding to multiple multicast members on the same host can form a host route tree. Multiple host routing trees corresponding to multiple hosts in the same multicast group can form a multicast group routing tree.
  • the data structure of the multicast group routing tree is shown in Figure 6.
  • the root node of the multicast group routing tree corresponds to the VPC, and the corresponding root node can be located according to the multicast packet.
  • the leaf node corresponds to the host to which the multicast message needs to be sent. Since the transmission of multicast packets needs to be implemented through a physical network, the leaf nodes can only correspond to the host, and do not need to correspond to virtual machines running on the host.
  • Each multicast group has a corresponding multicast address.
  • VPCID indicates a VPC identity (ie, a tenant identity)
  • GROUP indicates a multicast address
  • IP indicates a source address
  • HOSTIP indicates a member address (the host IP of a multicast member).
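The multicast group routing tree of Figure 6 can be sketched as nested mappings: keyed by VPCID (tenant), then GROUP (multicast address), then the filtering record, then source IP, with member host addresses as leaves, mirroring node paths like MCGroup/$vpcid/$group/INCLUDE/$ip/$hostip described later. This is a simplified assumption for illustration, not the patent's exact in-memory layout (in particular, GENERAL records may omit the source-IP level).

```python
def add_member(tree, vpcid, group, mode, ip, hostip):
    """Insert one leaf host along the VPCID/GROUP/mode/ip path."""
    (tree.setdefault(vpcid, {})
         .setdefault(group, {})
         .setdefault(mode, {})
         .setdefault(ip, set())
         .add(hostip))

def hosts_for_packet(tree, vpcid, group, mode, ip):
    """Locate the leaf hosts for a packet by walking down from the VPC root."""
    return tree.get(vpcid, {}).get(group, {}).get(mode, {}).get(ip, set())
```

The lookup walks from the VPC root to host leaves, matching the text: the root is located from the packet's tenant identifier, and the leaves are the hosts the packet must reach, not individual virtual machines.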
  • the central controller can use the host routing tree to represent the relationship between multiple multicast members in the same multicast group.
  • the data structure of the host routing tree is shown in Figure 7.
  • the root node of the host routing tree corresponds to the VPC
  • the first-level child node is the host address (that is, the member address)
  • the second-level child node is the multicast address
  • the third-level child node is the source address. Address filtering is performed on the source address with the corresponding filtering mode, ensuring that each multicast member obtains only the multicast packets it needs.
  • the central controller traverses the nodes of the host routing tree, obtains the incremental change messages of the host routing tree, and updates the multicast group routing tree according to these incremental change messages.
  • the central controller traverses the nodes of the host routing tree through one or more processes.
  • the memory image corresponding to the host routing tree is obtained.
  • the data structure corresponding to the host routing tree is stored in the memory image corresponding to the host routing tree, and whether a member address has been added or deleted is determined according to the stored data structure of the host routing tree. If so, the member address in the memory image is updated.
  • the central controller determines whether a member address child node is added or deleted. Specifically, the central controller determines, through one or more processes, whether there are new or deleted multicast groups according to the memory image corresponding to the multicast group. When there is a new multicast group, it is judged whether a GENERAL child node exists, and if not, a V2 record group is added, where V2 refers to version 2 of the IGMP protocol. A traversal of the EXCLUDE list and a traversal of the INCLUDE list are then executed, and the multicast member count of the multicast group is incremented by one. This completes one traversal of the newly added multicast group.
  • the new-multicast-group function is invoked. It returns directly if the number of multicast members in the multicast group has reached the upper limit.
  • the MCGroup indicates the multicast group
  • the vpcid indicates the tenant ID
  • the group indicates the multicast address
  • the ip indicates the source address
  • the hostip indicates the member address.
  • the source address in the INCLUDE list in the memory images corresponding to all host routing trees is deleted, and the member address child node is deleted, where the deleted member address child node can be described as: MCGroup/$vpcid/$group/INCLUDE/$ip/$hostip node.
  • the source address in the EXCLUDE list in the memory images corresponding to all host routing trees is deleted, and the member address child node is deleted, where the deleted member address child node can be described as: MCGroup/$vpcid/$group/EXCLUDE/$ip/$hostip node.
  • the member address node under the GENERAL sub-node is deleted.
  • the deleted member address child node can be described as: MCGroup/$vpcid/$group/GENERAL/$hostip node.
  • the source address in the INCLUDE list in the image is deleted, and the member sub-address node under the INCLUDE node is deleted.
  • the deleted member address child node can be described as: MCGroup/$vpcid/$group/INCLUDE/$ip/$hostip node.
  • the deleted member address sub-node can be described as: MCGroup/$vpcid/$group/EXCLUDE/$ip/$hostip node. The data in the memory image corresponding to the host routing tree and the data in the memory image corresponding to the multicast group routing tree are deleted, and the count of the multicast group is decremented by one.
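The `MCGroup/...` node paths quoted above follow one pattern; a small helper like the following (hypothetical, for illustration only) captures it. GENERAL nodes omit the source-address segment, while INCLUDE/EXCLUDE nodes key on it:

```python
def member_node_path(vpcid, group, mode, hostip, ip=None):
    """Build a node path such as MCGroup/$vpcid/$group/INCLUDE/$ip/$hostip.

    All names here are illustrative; the patent only shows the resulting
    path strings, not how they are constructed.
    """
    parts = ["MCGroup", vpcid, group, mode]
    if mode in ("INCLUDE", "EXCLUDE"):   # these modes key on a source address
        parts.append(ip)
    parts.append(hostip)                  # GENERAL paths omit the source
    return "/".join(parts)
```

Deleting a member then amounts to removing the node at the path this helper yields and pruning any now-empty parents.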
  • by traversing the nodes of the host routing tree, the incremental change messages of the host routing tree are obtained, and the multicast group routing tree is recalculated and updated according to those messages. An accurate routing tree can thereby be obtained, ensuring the accuracy of multicast data delivery.
  • the method further includes: receiving a multicast group operation instruction sent by the terminal through the multicast application, where the operation instruction carries a tenant identifier, a filtering mode, a filtering address set, and a multicast address; and creating a new multicast group or deleting a multicast group according to the tenant identifier, filtering mode, filtering address set, and multicast address.
  • the tenant can send a multicast operation instruction to the cloud through the multicast application running in the terminal, and the multicast operation instruction includes creating a new multicast group or deleting a multicast group.
  • the multicast group operation command received by the cloud is a new multicast group command
  • the new multicast group command carries the tenant ID, filtering mode, filtering address set, and multicast address.
  • the filter address set includes a source address corresponding to the filter type.
  • Filter types include INCLUDE, EXCLUDE, and GENERAL.
  • the delete-multicast-group command carries the tenant ID, filtering mode, filtering address set, and multicast address. The filtering address set is traversed according to the filtering mode, and the source address child nodes corresponding to the filtering mode are deleted according to the tenant ID and the multicast address. The child nodes under the multicast address node in the memory image corresponding to the host routing tree, together with the multicast address node itself, are deleted. At the same time, the multicast address count corresponding to the tenant ID is decremented by one. This completes the deletion of a multicast group.
  • the cloud triggers operations such as creating or deleting a multicast group by receiving a multicast group operation command sent by the terminal. Therefore, it is convenient for the tenant to appropriately adjust the corresponding multicast group according to their own needs, which provides convenience for the tenant.
  • the step of performing address filtering according to the source address and the member address to obtain the list of member addresses that require the multicast packet includes: obtaining the report message corresponding to the multicast member, where the report message carries the member address and the source address; and
  • filtering the addresses of the multiple multicast members in the filtering mode according to the source address and member address to obtain the list of member addresses that require multicast packets.
  • when a multicast member joins a multicast group, it sends a corresponding report message.
  • the header of the report message includes the host IP address, and the body includes the specified source address and filtering mode.
  • when the multicast gateway cluster finds the corresponding multicast group based on the tenant ID and destination address, it creates a temporary list.
  • the temporary list includes a first temporary list and a second temporary list.
  • the multicast gateway cluster filters the addresses of multiple multicast members in the filtering mode based on the source address and the member address to obtain a list of member addresses that require multicast packets.
  • the filtering of the addresses of the multiple multicast members according to the source address and the member address includes: obtaining the first filtering list corresponding to the first filtering mode, traversing the first filtering list, determining whether the member address in the first filtering list is the same as the source address, and if so, copying the next-level multicast member to the first temporary list; obtaining the second filtering list corresponding to the second filtering mode, traversing the second filtering list, determining whether the member address in the second filtering list is the same as the source address, and if so, deleting the host corresponding to the next-level multicast member from the second temporary list; and merging the first temporary list with the second temporary list to obtain the list of member addresses that require multicast packets.
  • the multicast gateway cluster generates an INCLUDE list from the member addresses of the multicast members whose filtering mode is INCLUDE, generates an EXCLUDE list from the member addresses of the multicast members whose filtering mode is EXCLUDE, and generates a GENERAL hash list from the member addresses of the multicast members whose filtering mode is GENERAL.
  • INCLUDE can be called the first filtering method
  • EXCLUDE can be called the second filtering method
  • GENERAL can be called the third filtering method.
  • the corresponding INCLUDE list may be referred to as a first filter list
  • the EXCLUDE list may be referred to as a second filter list
  • the GENERAL hash list may be referred to as a third filter list.
  • when the multicast gateway cluster traverses the INCLUDE list, it determines whether the member address in the INCLUDE list is the same as the source address, and if so, copies the next-level host member to the first temporary list. The host members in the GENERAL hash list are copied to the second temporary list.
  • the multicast gateway cluster traverses the EXCLUDE list to determine whether the EXCLUDE member address is the same as the source address. If yes, the next-level host member is deleted from the second temporary list. After the EXCLUDE list is traversed, the second temporary list is connected to the end of the first temporary list to generate a list of member addresses that require multicast messages.
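The two items above describe a two-pass merge. A minimal sketch follows; the function name and the dict-of-lists layout are assumptions for illustration. Matching INCLUDE entries seed the first temporary list, the GENERAL hash list seeds the second, matching EXCLUDE entries are removed from the second, and the second list is appended to the end of the first:

```python
def build_member_list(src, include_list, exclude_list, general_hosts):
    """Return the member addresses that should receive a packet from src.

    include_list / exclude_list map a source address to the host members
    registered under it; general_hosts holds the GENERAL-mode members.
    """
    first_tmp = list(include_list.get(src, []))   # INCLUDE pass: copy matches
    second_tmp = list(general_hosts)              # seed from the GENERAL list
    for host in exclude_list.get(src, []):        # EXCLUDE pass: drop matches
        if host in second_tmp:
            second_tmp.remove(host)
    return first_tmp + second_tmp                 # second appended to first
```

For example, a host that registered EXCLUDE for this source drops out of the GENERAL-derived list, while INCLUDE hosts for this source are always kept.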
  • the memory image corresponding to the host is obtained. If the general flag of the multicast group in the memory image corresponding to the host routing tree is true, the child node is unchanged and may not be processed.
  • if the multicast address node in the memory image corresponding to the multicast group does not exist, a new multicast group needs to be created.
  • the multicast address node can be described as: MCGroup/$vpcid/$group.
  • the INCLUDE subnode can be described as: MCHost/$vpcid/$hostip/$group/INCLUDE subnode.
  • the member address child node can be described as: MCGroup/$vpcid/$group/EXCLUDE/$ip/$hostip node.
  • the data in the memory image corresponding to the multicast group routing tree is updated, and the corresponding member address child node under the EXCLUDE sub-node is deleted; the member address child node can be described as: MCGroup/$vpcid/$group/EXCLUDE/$ip/$hostip node. If the member address node is empty, the delete-multicast-group function is called, and the deleted source address is passed in, thereby deleting the source address.
  • when the multicast gateway cluster traverses the EXCLUDE list, if the general flag of the multicast group in the memory image corresponding to the host routing tree is true, it is not processed. If the multicast address node in the memory image corresponding to the multicast group does not exist, a new multicast group needs to be created.
  • the multicast address node can be described as: MCGroup/$vpcid/$group. Call the new multicast group function and enter all EXCLUDE sub-nodes into the address filtering list.
  • the EXCLUDE subnode can be described as: MCHost/$vpcid/$hostip/$group/EXCLUDE subnode.
  • the member address child node can be described as: MCGroup/$vpcid/$group/GENERAL/$hostip node.
  • if the EXCLUDE sub-node is empty in the memory image corresponding to the multicast group routing tree, a member address sub-node is created under the EXCLUDE sub-node, which can be described as: MCGroup/$vpcid/$group/EXCLUDE/$ip/$hostip node. Otherwise, the member address child nodes corresponding to the other source addresses under the EXCLUDE sub-node are deleted; a deleted member address child node can be described as: MCGroup/$vpcid/$group/EXCLUDE/other source address/$hostip node.
  • the data in the memory image corresponding to the multicast group routing tree is updated, and the corresponding member address child node under the EXCLUDE sub-node is deleted; the member address child node can be described as: MCGroup/$vpcid/$group/EXCLUDE/$ip/$hostip node. If the member address child node is empty, the delete-multicast-group function is called, and the deleted source address is passed in, thereby deleting the source address.
  • the addition and removal of child nodes in the host routing tree is propagated in time, ensuring the accuracy of the address filtering and thereby improving the accuracy of multicast packet delivery.
  • the central controller controls the size of a multicast group. Specifically, the central controller obtains the number of leaf nodes in the multicast group routing tree and determines whether that number is less than or equal to the number of leaf nodes allowed for the tenant identifier. If not, no further multicast members are allowed to join the group. In a public cloud, if the number of leaf nodes corresponding to some tenants grows too large, those tenants occupy too many resources, resulting in unbalanced resource utilization.
  • the central controller may also limit the frequent joins or exits of multicast members. Specifically, the central controller obtains the number of times the multicast member joins the multicast group or exits the multicast group within a preset time. If the number of times exceeds the set number of times, the multicast member is restricted from joining the multicast group or exiting the multicast group. Thereby effectively reducing resource consumption.
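Both safeguards above (the per-tenant leaf-node cap and the join/exit frequency limit within a preset time) can be sketched as follows; the class name, limit values, and sliding-window bookkeeping are illustrative assumptions:

```python
import time
from collections import deque

class MembershipLimiter:
    """Illustrative central-controller limits: cap members per group and
    restrict how often a member may join/exit within a preset window."""

    def __init__(self, max_leaves, max_ops, window_seconds):
        self.max_leaves = max_leaves
        self.max_ops = max_ops
        self.window = window_seconds
        self.ops = {}   # member -> deque of recent join/exit timestamps

    def allow_join(self, current_leaf_count):
        # Reject joins once the tenant's leaf-node quota is exhausted.
        return current_leaf_count < self.max_leaves

    def allow_op(self, member, now=None):
        # Allow a join/exit only if the member has not exceeded the
        # permitted number of operations inside the sliding window.
        now = time.time() if now is None else now
        q = self.ops.setdefault(member, deque())
        while q and now - q[0] > self.window:   # drop entries outside window
            q.popleft()
        if len(q) >= self.max_ops:
            return False                         # too frequent: restrict
        q.append(now)
        return True
```

A member that churns repeatedly is refused until its earlier operations age out of the window, which matches the stated goal of reducing resource consumption.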
  • the present application further provides a computer device, the internal structure of which may correspond to the structure shown in FIG. 2, and each of the following modules may be wholly or partially implemented by software, hardware or a combination thereof.
  • the computer device includes an acquisition module 802, a lookup module 804, a calculation module 806, a filtering module 808, and a sending module 810, wherein:
  • the obtaining module 802 is configured to obtain a multicast packet in the cloud, where the multicast packet carries a tenant identifier, a destination address, and a source address.
  • the searching module 804 is configured to search for a corresponding multicast group according to the tenant ID and the destination address, where the multicast group includes multiple multicast members.
  • the calculating module 806 is configured to calculate the routes corresponding to the multicast members and generate a routing tree according to the routes corresponding to the multiple multicast members.
  • the filtering module 808 is further configured to obtain a member address corresponding to the multicast member, perform address filtering according to the source address and the member address, and obtain a list of member addresses that require multicast packets.
  • the sending module 810 is configured to encapsulate the multicast packet, and deliver the encapsulated multicast packet to the multicast member that needs the multicast packet according to the member address list and the routing tree.
  • the calculating module 806 is further configured to obtain the report messages returned by the multiple multicast members in the host, calculate the routes corresponding to the multicast members according to the report messages, and generate a routing tree according to the routes corresponding to the multiple multicast members; and to obtain the topology of the routing tree used by the data plane and write the routing tree corresponding to the multiple multicast members into the routing tree data structure used by the data plane according to that topology.
  • the filtering module 808 is further configured to obtain a report message corresponding to the multicast member, where the report message carries the member address, the source address, and the filtering mode; and to filter the addresses of the multiple multicast members in the filtering mode according to the source address and member address, obtaining a list of member addresses that require multicast packets.
  • the filtering mode includes a first filtering mode and a second filtering mode
  • the filtering module 808 is further configured to obtain a first filtering list corresponding to the first filtering mode, traverse the first filtering list, and determine the first filtering list. Whether the member address is the same as the source address, if yes, copy the next-level multicast member to the first temporary list; obtain the second filtering list corresponding to the second filtering mode, traverse the second filtering list, and determine the second filtering list. Whether the member address is the same as the source address, and if so, the host corresponding to the next-level multicast member is deleted from the second temporary list; the first temporary list is merged with the second temporary list to obtain the required multicast packet. List of member addresses.
  • the routing tree includes a host routing tree and a multicast group routing tree
  • the calculating module 806 is further configured to traverse the nodes of the host routing tree to obtain an incremental change message of the host routing tree, and to update the multicast group routing tree according to the incremental change message.
  • the computer device further includes: a receiving module 812 and a response module 814, wherein:
  • the receiving module 812 is configured to receive a multicast group operation instruction sent by the terminal through the multicast application, where the instruction carries the tenant ID, filtering mode, filtering address set, and multicast address.
  • the response module 814 is configured to create a multicast group or delete a multicast group according to the tenant ID, the filtering mode, the filtering address set, and the multicast address.
  • a cloud data multicast system including: a multicast gateway cluster 1002, a central controller 1004, and a host 1006, wherein:
  • the multicast gateway cluster 1002 is configured to obtain the multicast packet in the cloud, and the multicast packet carries the tenant ID, the destination address, and the source address, and searches for the corresponding multicast group according to the tenant ID and the destination address.
  • the multicast group includes multiple multicast members.
  • the central controller 1004 is configured to calculate the routes corresponding to the multicast members and generate the routing tree according to the routes corresponding to the multiple multicast members.
  • the multicast gateway cluster 1002 is further configured to acquire a topology of a routing tree used by the data plane, and write a routing tree corresponding to multiple multicast members to a routing tree data structure used by the data plane according to the routing tree topology; and obtain a multicast member.
  • the corresponding member address is filtered by the source address and the member address to obtain a list of member addresses that require multicast packets.
  • the multicast packet is encapsulated, and the encapsulated multicast packet is sent, according to the member address list and the routing tree, to the host corresponding to each member address.
  • the host 1006 is configured to receive the encapsulated multicast packet and deliver the encapsulated multicast packet to the multicast member that needs the multicast packet.
  • the routing tree includes a host routing tree and a multicast group routing tree
  • the central controller 1004 is further configured to traverse the nodes of the host routing tree to obtain an incremental change message of the host routing tree, and to update the multicast group routing tree according to the incremental change message.
  • the multicast gateway cluster 1002 is further configured to obtain a report message corresponding to the multicast member, where the report message carries the member address, the source address, and the filtering mode; and to filter the addresses of the multiple multicast members in the filtering mode according to the source address and member address to obtain a list of member addresses that require multicast packets.
  • the multicast gateway cluster 1002 is further configured to obtain a first filtering list corresponding to the first filtering manner, traverse the first filtering list, and determine whether the member address in the first filtering list is Same as the source address, if yes, copy the next-level multicast member to the first temporary list; obtain the second filtering list corresponding to the second filtering mode, traverse the second filtering list, and determine the member address in the second filtering list Whether it is the same as the source address, if yes, the host corresponding to the next-level multicast member is deleted from the second temporary list; the first temporary list is merged with the second temporary list to obtain a list of member addresses requiring multicast packets. .
  • a computer device includes a memory and a processor.
  • the memory stores computer readable instructions.
  • the processor executes the following steps: obtaining a multicast packet, where the multicast packet carries the tenant ID, the destination address, and the source address;
  • searching for the corresponding multicast group according to the tenant ID and the destination address, where the multicast group includes multiple multicast members;
  • obtaining the routes corresponding to the multicast members and generating a routing tree;
  • obtaining the member addresses corresponding to the multicast members, performing address filtering based on the source address and member addresses, and obtaining a list of member addresses that require multicast packets; and
  • encapsulating the multicast packet and delivering the encapsulated multicast packet, according to the member address list and the routing tree, to the multicast members that require it.
  • the processor is further configured to: obtain a report message returned by the multiple multicast members; calculate a route corresponding to the multicast member according to the report message; and generate a route tree according to the route corresponding to the multiple multicast members; And obtaining the topology of the routing tree used by the data plane, and writing the routing tree corresponding to the multiple multicast members to the routing tree data structure used by the data plane according to the routing tree topology.
  • the routing tree includes a host routing tree and a multicast group routing tree
  • the processor is further configured to: traverse the nodes of the host routing tree to obtain an incremental change message of the host routing tree; and update the multicast group routing tree according to the incremental change message of the host routing tree.
  • the processor is further configured to: obtain a routing tree topology used by the data plane, and write the host routing tree and the multicast group routing tree into the routing tree data structure used by the data plane according to the routing tree topology; The updated multicast group routing tree is read; and the updated multicast group routing tree is written into the routing tree data structure used by the data plane to complete a routing update.
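The update cycle above (write the trees into the data-plane structure, then overwrite with each recomputed multicast group routing tree) can be read as a publish/lookup pair. The double-buffer swap below is an assumed implementation detail, not something the text specifies; it simply keeps lookups consistent while a new tree is being written:

```python
class DataPlaneRoutes:
    """Illustrative holder for the routing tree data structure the data
    plane reads; publish() completes one routing update."""

    def __init__(self):
        self.buffers = [{}, {}]
        self.active = 0                    # index the data plane reads from

    def publish(self, updated_group_tree):
        standby = 1 - self.active
        self.buffers[standby] = dict(updated_group_tree)  # write standby copy
        self.active = standby              # switch completes the update

    def lookup(self):
        return self.buffers[self.active]
```

Each control-plane recomputation then becomes a `publish` call, and packet forwarding only ever sees a fully written tree.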
  • the processor is further configured to: obtain a report message corresponding to the multicast member, where the report message carries the member address, the source address, and the filtering mode; and filter the addresses of the multiple multicast members in the filtering mode according to the source address and member address to obtain the list of member addresses that require multicast packets.
  • the filtering method includes a first filtering manner and a second filtering manner
  • the processor is further configured to: obtain a first filtering list corresponding to the first filtering mode, traverse the first filtering list, and determine whether the member address in the first filtering list is the same as the source address, and if so, copy the next-level multicast member to the first temporary list; obtain a second filtering list corresponding to the second filtering mode, traverse the second filtering list, and determine whether the member address in the second filtering list is the same as the source address, and if so, delete the host corresponding to the next-level multicast member from the second temporary list; and merge the first temporary list with the second temporary list to obtain the list of member addresses that require the multicast packet.
  • the processor is further configured to: receive a multicast group operation instruction sent by the terminal, where the operation instruction carries a tenant identifier, a filtering mode, a filtering address set, and a multicast address; and according to the tenant identification, filtering mode, Filter the address set and multicast address to create a new multicast group or delete a multicast group.
  • one or more computer readable non-volatile storage media storing computer readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps: obtaining a multicast packet, where the multicast packet carries the tenant ID, the destination address, and the source address;
  • searching for the corresponding multicast group according to the tenant ID and the destination address, where the multicast group includes multiple multicast members;
  • obtaining the routes corresponding to the multicast members and generating a routing tree based on the routes corresponding to the multiple multicast members; obtaining the member addresses corresponding to the multicast members, performing address filtering based on the source address and member addresses, and obtaining a list of member addresses that require multicast packets; and
  • encapsulating the multicast packet and delivering the encapsulated multicast packet, according to the member address list and the routing tree, to the multicast members that require it.
  • the one or more processors are further configured to: obtain the report messages returned by the multiple multicast members; calculate the routes corresponding to the multicast members according to the report messages; generate the routing tree according to the routes corresponding to the multiple multicast members; and obtain the topology of the routing tree used by the data plane and write the routing tree corresponding to the multiple multicast members into the routing tree data structure used by the data plane according to that topology.
  • the routing tree includes a host routing tree and a multicast group routing tree
  • the one or more processors are further configured to: traverse the nodes of the host routing tree to obtain an incremental change message of the host routing tree, and update the multicast group routing tree according to the incremental change message of the host routing tree.
  • the one or more processors are further configured to: obtain the routing tree topology used by the data plane, and write the host routing tree and the multicast group routing tree into the routing tree data structure used by the data plane according to that topology; read the updated multicast group routing tree; and write the updated multicast group routing tree into the routing tree data structure used by the data plane to complete a routing update.
  • the one or more processors are further configured to: obtain a report message corresponding to the multicast member, where the report message carries the member address, the source address, and the filtering mode; and filter the addresses of the multiple multicast members in the filtering mode according to the source address and member address to obtain a list of member addresses that require multicast packets.
  • the filtering mode includes a first filtering mode and a second filtering mode
  • the one or more processors are further configured to: obtain a first filtering list corresponding to the first filtering mode, traverse the first filtering list, and determine Whether the member address in the first filtering list is the same as the source address, and if so, copying the next-level multicast member to the first temporary list; obtaining the second filtering list corresponding to the second filtering mode, traversing the second filtering list, Determining whether the member address in the second filter list is the same as the source address, and if so, deleting the host corresponding to the next-level multicast member from the second temporary list; and merging the first temporary list with the second temporary list, Obtain a list of member addresses that require multicast packets.
  • the one or more processors are further configured to: receive a multicast group operation instruction sent by the terminal, where the operation instruction carries a tenant identity, a filtering mode, a filtering address set, and a multicast address; and create a new multicast group or delete a multicast group according to the tenant identifier, filtering mode, filtering address set, and multicast address.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or the like.


Abstract

A cloud data multicast method, comprising: obtaining a multicast packet, where the multicast packet carries a tenant identifier, a destination address, and a source address; searching for the corresponding multicast group according to the tenant identifier and the destination address, where the multicast group includes multiple multicast members; obtaining the routes corresponding to the multicast members, and generating a routing tree according to the routes corresponding to the multiple multicast members; obtaining the member addresses corresponding to the multicast members, and performing address filtering according to the source address and the member addresses to obtain a list of member addresses that require the multicast packet; and encapsulating the multicast packet, and delivering the encapsulated multicast packet, according to the member address list and the routing tree, to the multicast members that require the multicast packet.

Description

Cloud data multicast method, system, and computer device
This application claims priority to Chinese Patent Application No. 201610552915.2, filed with the Chinese Patent Office on July 13, 2016 and entitled "Cloud data multicast method, apparatus and system", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of cloud computing technology, and in particular to a cloud data multicast method, system, and computer device.
Background
Multicast is one of the data transmission modes of IP networks. Multicast effectively solves the problem of single-point sending with multi-point receiving, achieving efficient point-to-multipoint data transmission in IP networks. In cloud computing, multiple tenants can share one data center by isolating the tenants from one another. Traditional multicast, however, has no concept of tenant isolation: any device in the network can join any multicast group to receive data, so data security cannot be effectively guaranteed. How to implement secure and effective multicast in the cloud is thus a technical problem that needs to be solved.
Summary
According to various embodiments of the present application, a cloud data multicast method, system, and computer device are provided.
A cloud data multicast method, the method comprising:
obtaining a multicast packet in the cloud, where the multicast packet carries a tenant identifier, a destination address, and a source address;
searching for the corresponding multicast group according to the tenant identifier and the destination address, where the multicast group includes multiple multicast members;
calculating the routes corresponding to the multicast members, and generating a routing tree according to the routes corresponding to the multiple multicast members;
obtaining the member addresses corresponding to the multicast members, and performing address filtering according to the source address and the member addresses to obtain a list of member addresses that require the multicast packet; and
encapsulating the multicast packet, and delivering the encapsulated multicast packet, according to the member address list and the routing tree, to the multicast members that require the multicast packet.
A cloud data multicast system, the system comprising:
a multicast gateway cluster, configured to obtain a multicast packet in the cloud, where the multicast packet carries a tenant identifier, a destination address, and a source address, and to search for the corresponding multicast group according to the tenant identifier and the destination address, where the multicast group includes multiple multicast members;
a central controller, configured to calculate the routes corresponding to the multicast members and generate a routing tree according to the routes corresponding to the multiple multicast members;
the multicast gateway cluster being further configured to obtain the member addresses corresponding to the multicast members, perform address filtering according to the source address and the member addresses to obtain a list of member addresses that require the multicast packet, encapsulate the multicast packet, and send the encapsulated multicast packet, according to the member address list and the routing tree, to the hosts corresponding to the member addresses; and
a host, configured to receive the encapsulated multicast packet and deliver it to the multicast members that require the multicast packet.
A computer device, comprising a memory and a processor, the memory storing computer readable instructions that, when executed by the processor, cause the processor to perform the following steps:
obtaining a multicast packet, where the multicast packet carries a tenant identifier, a destination address, and a source address;
searching for the corresponding multicast group according to the tenant identifier and the destination address, where the multicast group includes multiple multicast members;
obtaining the routes corresponding to the multicast members, and generating a routing tree according to the routes corresponding to the multiple multicast members;
obtaining the member addresses corresponding to the multicast members, and performing address filtering according to the source address and the member addresses to obtain a list of member addresses that require the multicast packet; and
encapsulating the multicast packet, and delivering the encapsulated multicast packet, according to the member address list and the routing tree, to the multicast members that require the multicast packet.
The details of one or more embodiments of the invention are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention will become apparent from the description, the drawings, and the claims.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the drawings described below show only some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a diagram of the hardware environment of a cloud data multicast method in an embodiment;
FIG. 2 is a block diagram of the cloud in FIG. 1;
FIG. 3 is a flowchart of a cloud data multicast method in an embodiment;
FIG. 4 is an architecture diagram of the control plane in an embodiment;
FIG. 5 is an architecture diagram of the data plane in an embodiment;
FIG. 6 is a data structure diagram of a multicast group routing tree in an embodiment;
FIG. 7 is a data structure diagram of a host routing tree in an embodiment;
FIG. 8 is a block diagram of a computer device in an embodiment;
FIG. 9 is a block diagram of a computer device in another embodiment;
FIG. 10 is a block diagram of a cloud data multicast system in an embodiment.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, this application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are merely intended to explain this application and are not intended to limit it.
It can be understood that the terms "first", "second", and the like used in this application may be used herein to describe various elements, but these elements are not limited by the terms. The terms are merely used to distinguish one element from another. For example, without departing from the scope of this application, a first client may be referred to as a second client, and similarly a second client may be referred to as a first client. The first client and the second client are both clients, but they are not the same client.
The cloud data multicast method provided in the embodiments of this application may be applied in the hardware environment shown in FIG. 1. The terminal 102 is connected to the cloud 104 through a network, and the cloud 104 is connected to the host 106 through a network. A tenant builds a multicast application in the cloud 104 through the terminal 102 and uploads source data to the cloud 104 through the multicast application. The cloud 104 obtains the source data and multicasts it in the form of multicast packets. A multicast packet carries a tenant identifier, a destination address, a source address, and so on. The cloud 104 searches for the corresponding multicast group according to the tenant identifier and the destination address, and calculates the routing tree corresponding to the multiple multicast members in the multicast group. The cloud 104 obtains the member addresses corresponding to the multiple multicast members, performs address filtering according to the source address and the member addresses to obtain a list of member addresses that require the multicast packet, encapsulates the multicast packet, and sends the encapsulated multicast packet, according to the member address list and the routing tree, to the hosts 106 corresponding to the member addresses. A host 106 receives the encapsulated multicast packet and delivers it to the multicast members that require it. Secure and effective data multicast in the cloud is thereby achieved.
The cloud may be implemented with an independent server or with a server cluster composed of multiple servers. In one embodiment, the cloud is an independent server whose block diagram is shown in FIG. 2. The server includes a processor, a non-volatile storage medium, an internal memory, and a network interface connected through a system bus. The non-volatile storage medium of the cloud stores an operating system and computer readable instructions that, when executed by the processor, implement a cloud data multicast method. The processor of the cloud provides computing and control capabilities to support the operation of the entire cloud. The internal memory of the cloud provides an environment for running the operating system and the computer readable instructions stored in the non-volatile storage medium. The network interface of the cloud is used to communicate with external terminals through a network connection, for example to receive source data sent by terminals and to deliver multicast packets to multicast members. A person skilled in the art can understand that the structure shown in FIG. 2 is merely a block diagram of the部分 structure related to the solution of this application and does not constitute a limitation on the server to which the solution is applied; a specific server may include more or fewer components than shown, combine certain components, or have a different component arrangement.
In one embodiment, as shown in FIG. 3, a cloud data multicast method is provided. The method is described using its application in the cloud as an example, and specifically includes the following steps.
Step 302: obtain a multicast packet in the cloud, where the multicast packet carries a tenant identifier, a destination address, and a source address.
The cloud includes public clouds and private clouds. A public cloud usually refers to a cloud that a third-party provider makes available to users. A private cloud refers to a cloud built for the sole use of a single tenant. The tenant uploads source data to the cloud data center using the multicast application running in the terminal. The cloud multicasts the source data in the form of multicast packets. Multicast packets may use Overlay (overlay network) encapsulation. An Overlay network is a deep extension of the physical network toward the cloud and virtualization, allowing the resource pooling of the cloud to escape various limitations of the physical network. A multicast packet carries a tenant identifier, a destination address, a source address, and so on. From the tenant identifier, the cloud can determine the tenant to which the multicast packet belongs, and hence the corresponding VPC (Virtual Private Cloud, a logically isolated network partition service). The cloud can multicast using Overlay processing logic, effectively isolating the IP (Internet Protocol) address spaces of different tenants, which can effectively improve the security of multicast data.
Step 304: search for the corresponding multicast group according to the tenant identifier and the destination address, where the multicast group includes multiple multicast members.
The cloud can find the corresponding multicast group according to the tenant identifier and the destination address. The multicast group includes multiple multicast members. A multicast member may be a virtual machine on the cloud virtualization platform. Virtual machines run on hosts. The virtual machines running on hosts may be distributed, and different virtual machines may join different multicast groups. A virtual machine receives multicast packets through its host. The virtual machine (that is, the multicast member) may at this point also be called a data receiver. There can be many kinds of multicast applications, including multimedia applications, data warehouses, and financial applications. Taking stock data push in a financial application as an example, after receiving stock data from the stock center, the cloud delivers the stock data to the multicast members by multicast.
A multicast member can also act as a data sender. When a multicast member acts as a data sender, it uses the multicast application to Overlay-encapsulate the source data to be sent and uploads it to the cloud data center. The cloud delivers the source data by multicast, in the manner described above, to the other multicast members in the multicast group. Further, the data sender may also be a non-member. A non-member uses the multicast application to Overlay-encapsulate the source data to be sent and uploads it to the cloud data center.
步骤306,获取组播成员对应的路由,根据多个组播成员对应的路由生成路由树。
云端的中心控制器根据组播成员反馈的报告报文来计算对应的路由,同一个宿主机上的多个组播成员对应的路由可以生成主机路由树。不同的宿主机可以生成不同的主机路由树。多个宿主机上运行的虚拟机可以加入到同一个组播组中,因此,加入到同一个组播组中的多个宿主机对应的多个主机路由树可以形成组播组路由树。中心控制器计算的路由树包括主机路由树和组播组路由树。
由于宿主机上运行的虚拟机可以随时加入组播组成为组播成员,组播成员也可以随时退出组播组,因此,中心控制器需要实时获取组播成员的变更信息,从而根据组播成员的变化情况实时更新计算组播成员对应的主机路由树,继而更新组播组路由树。
步骤308,获取组播成员对应的成员地址,根据源地址和成员地址进行地址过滤,得到需要组播报文的成员地址清单。
组播成员在加入组播组时,会发送对应的报告报文。报告报文的包头中包括宿主机IP地址,包体中包括指定过滤的源地址。由于组播成员是运行在宿主机的虚拟机,因此宿主机IP地址可以视为组播成员对应的成员地址。根据包体中携带的指定过滤的源地址,可以确定组播成员对应的源地址过滤方式和源地址。其中,源地址过滤方式包括INCLUDE、EXCLUDE和GENERAL。其中,INCLUDE表示只接收从指定组播源发往该组播组的组播报文。EXCLUDE表示只接收从指定组播源之外的组播源发往该组播组的组播报文。GENERAL表示常规查询,用于查询组播组内加入的组播成员和/或离开的组播成员。INCLUDE可以称为第一过滤方式,EXCLUDE可以称为第二过滤方式,GENERAL可以称为第三过滤方式。云端的组播网关集群根据源地址、成员地址和源地址过滤方式对组播成员进行地址过滤,得到需要组播报文的成员地址清单。
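上述三种源地址过滤方式的判定逻辑,可以用下面的Python草图示意。其中的函数名与数据结构均为便于说明而假设的,并非本方案的实际实现:

```python
# 示意:根据过滤方式判断某组播成员是否需要来自 src 的组播报文
# INCLUDE:仅接收指定组播源的报文;EXCLUDE:仅接收指定组播源之外的报文
def member_wants_packet(filter_mode, filtered_sources, src):
    if filter_mode == "INCLUDE":
        return src in filtered_sources
    if filter_mode == "EXCLUDE":
        return src not in filtered_sources
    if filter_mode == "GENERAL":
        return True  # 常规成员(不指定源地址)接收发往该组的所有报文
    raise ValueError("未知的过滤方式: %s" % filter_mode)

# 示意:对组播组内各成员逐一过滤,得到需要该报文的成员地址清单
def filter_members(members, src):
    # members: [(成员地址/宿主机IP, 过滤方式, 指定过滤的源地址列表), ...]
    return [host for host, mode, srcs in members
            if member_wants_packet(mode, set(srcs), src)]
```

例如,同一组内INCLUDE、EXCLUDE、GENERAL三类成员对同一源地址会得到不同的过滤结果。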
步骤310,将组播报文进行封装,根据成员地址清单和路由树将封装后的组播报文投递给需要组播报文的组播成员。
组播网关集群对组播报文支持二层组播转发,并且将组播报文封装为Overlay格式。其中,Overlay格式包括GRE(Generic Routing Encapsulation,通用路由封装协议)封装格式、VXLAN(Virtual Extensible Local Area Network,一种将二层报文用三层协议进行封装的技术)封装格式和NVGRE(Network Virtualization using Generic Routing Encapsulation,基于通用路由封装的网络虚拟化)封装格式等。组播网关集群根据成员地址清单获取组播组内宿主机数量,根据宿主机数量确定组播报文需要复制的份数。组播网关集群根据成员地址清单和路由树将封装后的组播报文依照复制的份数投递至需要组播报文的组播成员对应的宿主机。宿主机接收到封装后的组播报文,对封装后的组播报文进行解析,得到解析后的组播报文。宿主机通过Bridge(网桥)将解析后的组播报文传递给该宿主机上运行的组播组内的虚拟机。
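组播网关"按宿主机数量复制并逐台投递"的过程,可用如下Python草图示意。封装细节以假设的字典字段代替真实的GRE/VXLAN报文头,仅表达流程:

```python
def replicate_and_deliver(packet, member_hosts):
    """按成员地址清单中去重后的宿主机数量复制报文,每台宿主机投递一份(示意)。"""
    hosts = sorted(set(member_hosts))        # 同一宿主机上的多个成员只需一份
    deliveries = []
    for host in hosts:
        encapsulated = {"outer_dst": host,   # 外层目的地址为宿主机IP
                        "overlay": "VXLAN",  # 假设采用VXLAN一类的Overlay封装
                        "inner": packet}     # 原始组播报文作为内层负载
        deliveries.append(encapsulated)
    return deliveries
```

宿主机收到后解封装,再由网桥传递给本机上组播组内的虚拟机。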
传统的IP组播技术中要求所有网络设备都支持组播协议,由此导致网络构建的成本提高。本实施例中,对网络设备无任何要求,不需要对网络设备做升级改造,节省了网络构建的成本。
本实施例中,由于云端的组播报文中携带了租户标识、目的地址和源地址,由此可以根据租户标识和目的地址精确查找到对应的组播组。从而将不同租户的IP地址空间进行了有效隔离。通过获取组播成员对应的成员地址,根据源地址和成员地址进行地址过滤,由此得到需要该组播报文的成员地址清单。通过对组播组内多个组播成员分别计算对应的路由,从而根据多个路由来生成路由树。将组播报文进行封装,由此使得根据成员地址清单和路由树转发的封装后的组播报文能够被准确投递给需要该组播报文的组播成员,确保了组播数据投递的准确性。从而实现了在云端进行安全有效的数据组播。
在一个实施例中,计算组播成员对应的路由,根据多个组播成员对应的路由生成路由树的步骤包括:获取多个组播成员返回的报告报文;根据报告报文计算组播成员对应的路由;根据多个组播成员对应的路由生成路由树;获取数据平面使用的路由树的拓扑结构,根据路由树拓扑结构将多个组播成员对应的路由树写入数据平面使用的路由树数据结构中。
IP组播技术通过控制平面来控制和管理所有网络协议的运行,通过数据平面来处理和转发不同接口上的数据。在传统的IP组播技术中,由于控制平面和数据平面没有分离,运行在相同的设备上,导致单点设备的复杂性提高,可靠性降低,以及运维和排障的难度加大。
本实施例中,云端将控制平面和数据平面完全分离,两者运行在不同的设备上。其中,控制平面运行在中心控制器上,数据平面运行在组播网关集群上。控制平面的架构图如图4所示。宿主机上运行了虚拟机和组播代理等。中心控制器与组播代理通过网络建立通讯。组播网关集群中运行了组播网关代理和内核。组播网关集群负责数据平面的数据转发。具体的,组播网关集群可以通过在通用服务器Linux内核中定制Overlay处理逻辑来实现数据平面的数据转发。中心控制器通过网络与组播网关代理建立通讯。其中,组播代理遵循IGMP(Internet Group Management Protocol,Internet组管理协议)协议,周期性地向宿主机上所有虚拟机发送常规查询,加入了组播的虚拟机会回复报告报文,组播代理获取报告报文。其中,IGMP协议的应用范围不超出宿主机,组播代理获取报告报文在查询宿主机的路由之前进行。
分布式的组播代理将本宿主机中组播成员对应的报告报文发送至中心控制器,中心控制器根据报告报文可以计算组播成员对应的路由,继而根据多个组播成员对应的路由生成主机路由树和组播组路由树。组播网关集群上运行的组播网关代理读取主机路由树和组播组路由树,采用数据平面使用的路由树的拓扑结构,将读取到的主机路由树和组播组路由树写入数据平面使用的路由树数据结构中。由于宿主机上运行的虚拟机可以随时加入组播组成为组播成员,组播成员也可以随时退出组播组,因此,组播代理将实时获取到的组播成员变更信息发送至中心控制器。中心控制器根据组播成员的变更信息对主机路由树和组播组路由树进行更新计算。组播网关集群上运行的组播网关代理读取更新后的主机路由树和更新后的组播组路由树,采用数据平面使用的路由树的拓扑结构,将读取到的更新后的主机路由树和更新后的组播组路由树写入数据平面使用的路由树数据结构中。从而完成一次路由更新。由于主机路由树和组播组路由树是根据组播成员的变化情况实时更新的,因此,能够确保组播网关集群对组播报文进行准确转发。
进一步的,组播代理与中心控制器之间还可以通过中间件来建立通讯。中心控制器与组播网关代理之间也可以通过中间件建立通讯。组播网关集群也可以使用专用的网络开发套件来实现数据平面的数据转发。
以组播代理与中心控制器之间的中间件为消息队列,中心控制器与组播网关代理之间的中间件为Zookeeper(一个分布式的,开放源码的分布式应用程序协调服务)为例,对路由更新的过程进行补充说明。分布式组播代理获取多个虚拟机回复的报告报文,根据报告报文可以得知本宿主机上多个组播成员的变化情况。分布式组播代理将本宿主机上的组播成员的变更信息发送至消息队列中间件。独立部署的中心控制器从消息队列中间件中读取组播成员的变更信息,根据变更信息计算更新后的主机路由树和更新后的组播组路由树,并将更新后的主机路由树和更新后的组播组路由树写入Zookeeper中间件。组播网关代理从Zookeeper读取更新后的主机路由树和更新后的组播组路由树,采用数据平面使用的路由树的拓扑结构,将读取到的更新后的主机路由树和更新后的组播组路由树写入数据平面使用的路由树数据结构中。从而完成一次路由更新。
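中心控制器消费变更消息并据此增量维护两棵路由树的过程,可概括为如下Python草图。路由树在此简化为嵌套的集合结构,事件格式亦为假设,仅用于说明"成员变更如何传导到组播组路由树":

```python
from collections import defaultdict

# 示意:简化的主机路由树与组播组路由树
host_tree = defaultdict(set)    # {(宿主机IP, 组播地址): {虚拟机标识, ...}}
group_tree = defaultdict(set)   # {组播地址: {宿主机IP, ...}}

def apply_change(event):
    """event: ("join" | "leave", 宿主机IP, 组播地址, 虚拟机标识),为假设的消息格式。"""
    action, host, group, vm = event
    key = (host, group)
    if action == "join":
        host_tree[key].add(vm)
        group_tree[group].add(host)      # 宿主机上出现成员,即加入组播组路由树
    elif action == "leave":
        host_tree[key].discard(vm)
        if not host_tree[key]:           # 宿主机上已无成员时,从组播组路由树摘除
            group_tree[group].discard(host)
```

实际方案中,这类更新经消息队列送入中心控制器计算,再经Zookeeper一类中间件下发给组播网关代理。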
数据平面的架构如图5所示。其中,数据接收设备502用于接收租户通过组播应用上传的组播报文。组播网关集群504用于对数据面的组播报文进行复制并转发至需要该组播报文的组播成员对应的宿主机506。其中,组播网关集群504可以采用多个通用服务器来实现组播报文的转发。
一个数据中心使用一组组播网关(Multicast Gateway,简称MCGW,承担组播数据报文的复制和转发的设备)集群。组播网关集群中包括多台转发设备。转发设备可以采用通用服务器。每一台转发设备在逻辑上是完全对等的,每个组播报文在任意一台转发设备上都能得到相同的处理结果。转发设备在进行转发时采用树状的路由数据结构。具体的,转发设备可以采用组播组树对组播数据进行转发。
本实施例中,控制平面和数据平面运行在不同的设备上,被完全分离。由此降低了单点设备的复杂性,提高了其可靠性,同时降低了运维和排障的难度。
在一个实施例中,路由树包括主机路由树和组播组路由树,在计算组播成员对应的路由,根据多个组播成员对应的路由生成路由树的步骤之后,还包括:遍历主机路由树的节点,获取主机路由树的增量变化消息;根据主机路由树的增量变化消息对组播组路由树进行更新计算。
本实施例中,中心控制器通过一个或多个进程对组播成员的路由进行计算。同一个租户标识对应的路由变化由同一个进程计算处理。一个进程可以处理多个租户标识对应的路由。多个组播成员对应的路由可以采用树状数据结构的拓扑图来表示,同一个宿主机上的多个组播成员对应的路由可以组成主机路由树。加入到同一个组播组中的多个宿主机对应的多个主机路由树可以形成组播组路由树。组播组路由树的数据结构图如图6所示,组播组路由树的根节点对应VPC,根据组播报文可以定位查找到对应的根节点。叶子节点对应组播报文需要被发送到的宿主机。由于组播报文的传输需要通过物理网络来实现,因此,叶子节点可以只对应宿主机,无需对应宿主机上运行的虚拟机。每个组播组都具有对应的组播地址。组播组路由树中每个组播地址下设有三种源地址过滤方式,包括INCLUDE、EXCLUDE和GENERAL。根据源地址和成员地址按照源地址过滤方式对多个组播成员的地址进行过滤,得到需要组播报文的成员地址清单。在图6中,VPCID表示VPC标识(即租户标识)、GROUP表示组播地址、IP表示源地址、HOSTIP表示成员地址(即宿主机地址)。
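图6所示的组播组路由树层级(VPCID → GROUP → 过滤方式 → IP → HOSTIP),可用如下嵌套字典的Python草图表示。该结构与函数名只是理解用的假设,并非本方案的实际内存布局:

```python
# 示意:组播组路由树的一种嵌套字典表示
def new_group_tree():
    return {}

def add_member(tree, vpcid, group, mode, hostip, ip=None):
    """在 VPCID -> GROUP -> 过滤方式 -> (IP ->) HOSTIP 的层级上登记一个成员地址。"""
    g = tree.setdefault(vpcid, {}).setdefault(
        group, {"INCLUDE": {}, "EXCLUDE": {}, "GENERAL": set()})
    if mode == "GENERAL":                 # GENERAL 层级下不区分源地址
        g["GENERAL"].add(hostip)
    else:                                 # INCLUDE/EXCLUDE 按源地址分桶
        g[mode].setdefault(ip, set()).add(hostip)
```

叶子层只登记宿主机地址,与正文"叶子节点可以只对应宿主机"一致。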
根据组播代理所上报的组播成员对应的报告报文和查询报文,中心控制器可以将宿主机中加入同一组播组的多个组播成员之间的关系采用主机路由树来表示。主机路由树的数据结构图如图7所示,主机路由树的根节点对应VPC,一级子节点为宿主机地址(即成员地址),二级子节点为组播地址。组播地址下面设有三种源地址过滤方式,包括INCLUDE、EXCLUDE和GENERAL。过滤方式下的子节点为源地址。通过将源地址和成员地址按照对应的过滤方式进行地址过滤,从而确保组播成员得到所需要的组播报文。
当主机路由树中的根节点或子节点发生变化时,中心控制器遍历主机路由树的节点,获取主机路由树的增量变化消息,根据这些增量变化消息对组播组路由树进行更新计算。
具体的,中心控制器通过一个或多个进程遍历主机路由树的节点。获取主机路由树对应的内存映像,主机路由树对应的内存映像中存储了主机路由树的数据结构,根据已存储的主机路由树的数据结构来判断是否有新增或删除的成员地址。若有,则更新内存映像中的成员地址。
中心控制器判断是否有成员地址子节点新增或删除。具体的,中心控制器通过一个或多个进程根据组播组对应的内存映射来判断是否有新增或者删除的组播组。当存在新增的组播组时,判断是否存在GENERAL子节点,若否,则新增V2记录组。其中,V2是指IGMP协议V2版本。执行遍历EXCLUDE清单以及执行遍历INCLUDE清单,并且将组播组的组播成员计数加一。从而对新增的组播组完成一次遍历。
在新增V2记录组时,如果组播组路由树对应的内存映像中组播地址节点不存在,则调用新建组播组函数。如果组播组中的组播成员已达上限,则返回。在中心控制器上或者在Zookeeper中间件上创建成员地址子节点,该成员地址子节点可以描述为MCGroup/$vpcid/$group/INCLUDE/$ip/$hostip节点。其中,MCGroup表示组播组、vpcid表示租户标识、group表示组播地址、ip表示源地址、hostip表示成员地址。对所有主机路由树对应的内存映像中INCLUDE清单中的源地址,删除成员地址子节点,其中,被删除的成员地址子节点可以描述为:MCGroup/$vpcid/$group/INCLUDE/$ip/$hostip节点。并且对所有主机路由树对应的内存映像中EXCLUDE清单中的源地址,删除成员地址子节点,其中,被删除的成员地址子节点可以描述为:MCGroup/$vpcid/$group/EXCLUDE/$ip/$hostip节点。
当存在删除的组播组时,删除GENERAL子节点下的成员地址节点。被删除的成员地址子节点可以描述为:MCGroup/$vpcid/$group/GENERAL/$hostip节点。对所有主机路由树对应的内存映像中INCLUDE清单中的源地址,删除INCLUDE节点下的成员地址子节点,其中,被删除的成员地址子节点可以被描述为:MCGroup/$vpcid/$group/INCLUDE/$ip/$hostip节点。对所有主机路由树对应的内存映像中EXCLUDE清单中的源地址,删除EXCLUDE节点下的成员地址子节点,其中,被删除的成员地址子节点可以被描述为:MCGroup/$vpcid/$group/EXCLUDE/$ip/$hostip节点。删除主机路由树对应的内存映像中的数据以及组播组路由树对应的内存映像中的数据,并且将组播组的计数减一。
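正文反复出现的节点描述(MCGroup/$vpcid/$group/过滤方式/$ip/$hostip)可用如下Python草图拼接出来,便于核对各层级的含义。函数与参数名为说明用的假设:

```python
def group_node_path(vpcid, group, mode, hostip, ip=None):
    """拼出组播组路由树中成员地址子节点的路径字符串(示意)。
    INCLUDE/EXCLUDE 带源地址层级 $ip;GENERAL 层级下无 $ip。"""
    if mode == "GENERAL":
        return "MCGroup/%s/%s/GENERAL/%s" % (vpcid, group, hostip)
    return "MCGroup/%s/%s/%s/%s/%s" % (vpcid, group, mode, ip, hostip)
```

这种路径形式也便于在Zookeeper一类按路径组织的中间件上直接创建和删除对应节点。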
本实施例中,当主机路由树中的根节点或子节点发生变化时,通过遍历主机路由树的节点,获取主机路由树的增量变化消息,根据这些增量变化消息对组播组路由树进行更新计算。从而能够得到准确的路由树,进而确保了组播数据投递的准确性。
在一个实施例中,方法还包括:接收终端通过组播应用发送的组播组操作指令,操作指令中携带了租户标识、过滤方式、过滤地址集合和组播地址;根据租户标识、过滤方式、过滤地址集合以及组播地址新建组播组或者删除组播组。
本实施例中,租户可以通过终端中运行的组播应用向云端发送组播操作指令,组播操作指令包括新建组播组和删除组播组等。当云端接收到的组播组操作指令是新建组播组指令时,新建组播组指令中携带了租户标识、过滤方式、过滤地址集合和组播地址。其中,过滤地址集合中包括与过滤类型对应的源地址。根据新建组播组指令中携带的租户标识和组播地址创建组播地址节点,并配置相应的过滤类型。过滤类型包括INCLUDE、EXCLUDE和GENERAL。创建与过滤类型对应的源地址子节点。根据过滤类型在过滤地址集合中获取对应的源地址,并将与过滤类型对应的源地址配置给源地址子节点。并将租户标识对应的组播地址计数加一。由此完成一个组播组的新建工作。
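新建与删除组播组时"创建/删除组播地址节点、配置过滤类型、计数加减一"的流程,可用如下假设性的Python草图示意:

```python
def create_group(state, vpcid, group, filters):
    """filters: {"INCLUDE": [源地址...], "EXCLUDE": [...], "GENERAL": []}(示意格式)。"""
    tenant = state.setdefault(vpcid, {"groups": {}, "count": 0})
    if group in tenant["groups"]:
        return False                       # 组播地址节点已存在,示意性地直接返回
    tenant["groups"][group] = {mode: set(addrs) for mode, addrs in filters.items()}
    tenant["count"] += 1                   # 租户标识对应的组播地址计数加一
    return True

def delete_group(state, vpcid, group):
    tenant = state.get(vpcid)
    if tenant and group in tenant["groups"]:
        del tenant["groups"][group]        # 删除组播地址节点及其下的子节点
        tenant["count"] -= 1               # 组播地址计数减一
        return True
    return False
```

按租户维护计数也为后文的组播组规模控制提供了依据。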
当云端接收到的组播组操作指令是删除组播组指令时,删除组播组指令中携带了租户标识、过滤方式、过滤地址集合和组播地址。根据过滤方式,遍历过滤地址,根据租户标识和组播地址删除与过滤方式对应的源地址子节点。将主机路由树对应的内存映像中的组播地址节点下的子节点以及组播地址节点删除。同时将租户标识对应的组播地址计数减一。由此完成一个组播组的删除工作。
本实施例中,云端通过接收终端发送的组播组操作指令来触发对组播组的新建或删除等操作。由此方便租户根据自身需要来适当调整相应的组播组,为租户提供了方便。
在一个实施例中,根据源地址和成员地址进行地址过滤,得到需要组播报文的成员地址清单的步骤包括:获取组播成员对应的报告报文,报告报文中携带了成员地址、源地址和过滤方式;根据源地址和成员地址按照过滤方式对多个组播成员的地址进行过滤,得到需要组播报文的成员地址清单。
本实施例中,组播成员在加入组播组时,会发送对应的报告报文。报告报文的包头中包括宿主机IP地址,包体中包括指定过滤的源地址和过滤方式。组播网关集群在根据租户标识和目的地址查找到对应的组播组之后,创建临时清单。其中临时清单包括第一临时清单和第二临时清单。组播网关集群根据源地址和成员地址按照过滤方式对多个组播成员的地址进行过滤,得到需要组播报文的成员地址清单。
在一个实施例中,根据源地址和成员地址按照过滤方式对多个组播成员的地址进行过滤,得到需要组播报文的成员地址清单的步骤包括:获取与第一过滤方式对应的第一过滤清单,遍历第一过滤清单,判断第一过滤清单中的成员地址是否与源地址相同,若是,则将下一级组播成员复制到第一临时清单;获取与第二过滤方式对应的第二过滤清单,遍历第二过滤清单,判断第二过滤清单中的成员地址是否与源地址相同,若是,则将下一级组播成员对应的宿主机从第二临时清单中删除;将第一临时清单与第二临时清单合并,得到需要组播报文的成员地址清单。
组播网关集群根据过滤方式为INCLUDE的多个组播成员对应的成员地址生成INCLUDE清单,根据过滤方式为EXCLUDE的多个组播成员对应的成员地址生成EXCLUDE清单,根据过滤方式为GENERAL的多个组播成员对应的成员地址生成GENERAL哈希清单。INCLUDE可以称为第一过滤方式,EXCLUDE可以称为第二过滤方式,GENERAL可以称为第三过滤方式。相应的,INCLUDE清单可以称为第一过滤清单,EXCLUDE清单可以称为第二过滤清单,GENERAL哈希清单可以称为第三过滤清单。
组播网关集群遍历INCLUDE清单时,判断INCLUDE清单中的成员地址是否与源地址相同,若是,则将下一级宿主机成员复制到第一临时清单中。将GENERAL哈希清单中的宿主机成员复制到第二临时清单中。组播网关集群遍历EXCLUDE清单,判断EXCLUDE清单中的成员地址是否与源地址相同,若是,则将下一级宿主机成员从第二临时清单中删除。在EXCLUDE清单遍历完之后,将第二临时清单连接到第一临时清单的尾部,生成需要组播报文的成员地址清单。
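上述"两个临时清单"的合并逻辑可概括为如下Python草图。清单的数据结构为说明用的假设:

```python
def build_member_list(src, include_map, exclude_map, general_hosts):
    """include_map/exclude_map: {源地址: [下一级宿主机...]};general_hosts: 常规成员宿主机。"""
    # 第一临时清单:INCLUDE 清单中与源地址匹配的下一级宿主机成员
    temp1 = list(include_map.get(src, []))
    # 第二临时清单:先复制 GENERAL 哈希清单中的全部宿主机成员
    temp2 = list(general_hosts)
    # 遍历 EXCLUDE 清单,将与源地址匹配的下一级宿主机从第二临时清单中删除
    for host in exclude_map.get(src, []):
        if host in temp2:
            temp2.remove(host)
    # 将第二临时清单连接到第一临时清单的尾部,得到成员地址清单
    return temp1 + temp2
```

这样得到的清单即组播报文在数据平面上需要投递到的宿主机集合。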
由于组播成员可以随时加入或退出,因此,在遍历INCLUDE清单时和/或遍历EXCLUDE清单时需要获取主机路由树中子节点的变更信息,根据主机路由树中子节点的变更信息进行更新。
具体的,在遍历INCLUDE清单时,获取宿主机对应的内存映像,若主机路由树对应的内存映像中组播组的general标记为真,则表示子节点没有变化,可以不予处理。
获取组播组对应的内存映像,若组播组对应的内存映像中组播地址节点不存在,则需要新建组播组。其中,组播地址节点可以描述为:MCGroup/$vpcid/$group。调用新建组播组函数,将所有INCLUDE子节点输入至地址过滤清单中。其中INCLUDE子节点可以描述为:MCHost/$vpcid/$hostip/$group/INCLUDE子节点。
判断是否有新增或删除的源地址,如果有,则将主机路由树对应的内存映像进行更新。当存在新增的源地址时,如果组播组路由树对应的内存映像的这一层级数据中不存在该新增的源地址,则需要调用新建组播组函数,并在新建的组播组中输入该新增的源地址。如果组播组路由树对应的内存映像中EXCLUDE清单中存在该新增的源地址,并且EXCLUDE清单中存在相同的成员地址,则该成员地址需要按照IGMP协议V2版本进行处理。根据新增的源地址在组播组路由树对应的内存映像进行更新,并且创建相应的成员地址子节点。该成员地址子节点可以被描述为:MCGroup/$vpcid/$group/EXCLUDE/$ip/$hostip节点。
当存在删除的源地址时,更新组播组路由树对应的内存映像中的数据,并且删除EXCLUDE子节点下相应的成员地址子节点,该成员地址子节点可以描述为:MCGroup/$vpcid/$group/EXCLUDE/$ip/$hostip节点。若该成员地址节点为空,则调用删除组播组函数,输入该删除的源地址,以此将该源地址删除。
组播网关集群遍历EXCLUDE清单时,若主机路由树对应的内存映像中组播组的general标记为真,则不予处理。若组播组对应的内存映像中组播地址节点不存在,则需要新建组播组。其中,组播地址节点可以描述为:MCGroup/$vpcid/$group。调用新建组播组函数,将所有EXCLUDE子节点输入至地址过滤清单中。其中EXCLUDE子节点可以描述为:MCHost/$vpcid/$hostip/$group/EXCLUDE子节点。
判断是否有新增或删除的源地址,如果有,则将主机路由树对应的内存映像进行更新。当存在新增的源地址时,则需要调用新建组播组函数,并在新建的组播组中输入该新增的源地址。根据新增的源地址在组播组路由树对应的内存映像进行更新,并且创建相应的成员地址子节点。该成员地址子节点可以被描述为:MCGroup/$vpcid/$group/GENERAL/$hostip节点。如果在组播组路由树对应的内存映像中EXCLUDE子节点为空,则创建EXCLUDE子节点下的成员地址子节点,该成员地址子节点可以描述为:MCGroup/$vpcid/$group/EXCLUDE/$ip/$hostip节点。否则,删除EXCLUDE子节点下的其他源地址对应的成员地址子节点,被删除的成员地址子节点可以描述为:MCGroup/$vpcid/$group/EXCLUDE/其他源地址/$hostip节点。
当存在删除的源地址时,更新组播组路由树对应的内存映像中的数据,并且删除EXCLUDE子节点下相应的成员地址子节点,该成员地址子节点可以描述为:MCGroup/$vpcid/$group/EXCLUDE/$ip/$hostip节点。若该成员地址子节点为空,则调用删除组播组函数,输入该删除的源地址,以此将该源地址删除。
本实施例中,在对INCLUDE清单进行遍历以及在对EXCLUDE清单进行遍历时,通过对主机路由树中子节点的增减状况进行及时更新,确保了地址过滤的准确性,进而提高组播报文投递的准确性。
在一个实施例中,中心控制器对组播组的规模进行控制。具体的,中心控制器获取Group树中叶子节点的数量,判断叶子节点的数量是否小于或等于租户标识对应的叶子节点数量上限。若否,则限制向该组播组中加入组播成员。在公有云中,如果某些租户对应的叶子节点数量过多,则会占用过多的资源,导致资源利用不均衡。
由于组播成员频繁的加入或退出组播组,会造成资源过度消耗。在一个实施例中,中心控制器还可以对组播成员的频繁加入或退出进行限制。具体的,中心控制器获取预设时间内组播成员加入组播组或退出组播组的次数,若超过设定的次数,则限制该组播成员再次加入组播组或退出组播组。从而有效减少资源消耗。
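对组播成员频繁加入/退出的限制,可用一个简单的滑动计数窗口来示意。其中的阈值与时间窗均为假设取值,并非本方案规定的参数:

```python
import time

class ChurnLimiter:
    """示意:预设时间窗内加入/退出次数超过设定值的成员被暂时限制再次操作。"""
    def __init__(self, max_ops=5, window=60.0):
        self.max_ops, self.window = max_ops, window
        self.history = {}                       # {成员标识: [操作时间戳...]}

    def allow(self, member, now=None):
        now = time.time() if now is None else now
        # 仅保留时间窗内的历史操作记录
        ts = [t for t in self.history.get(member, []) if now - t < self.window]
        if len(ts) >= self.max_ops:             # 超过设定次数则限制本次加入/退出
            self.history[member] = ts
            return False
        ts.append(now)
        self.history[member] = ts
        return True
```

窗口期过后,被限制的成员即可恢复正常的加入或退出操作,从而在不影响正常使用的前提下减少资源消耗。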
如图8所示,本申请还提供了一种计算机设备,该计算机设备的内部结构可对应于如图2所示的结构,下述每个模块可全部或部分通过软件、硬件或其组合来实现。在一个实施例中,计算机设备包括:获取模块802、查找模块804、计算模块806、过滤模块808和发送模块810,其中:
获取模块802,用于获取云端的组播报文,组播报文中携带了租户标识、目的地址和源地址。
查找模块804,用于根据租户标识和目的地址查找对应的组播组,组播组包括多个组播成员。
计算模块806,用于计算所述组播成员对应的路由,根据多个组播成员对应的路由生成路由树。
过滤模块808,用于获取组播成员对应的成员地址,根据源地址和成员地址进行地址过滤,得到需要组播报文的成员地址清单。
发送模块810,用于将组播报文进行封装,根据成员地址清单和路由树将封装后的组播报文投递给需要组播报文的组播成员。
在一个实施例中,计算模块806还用于获取宿主机中多个组播成员返回的报告报文;根据报告报文计算组播成员对应的路由;根据多个组播成员对应的路由生成路由树;获取数据平面使用的路由树的拓扑结构,根据路由树拓扑结构将多个组播成员对应的路由树写入数据平面使用的路由树数据结构中。
在一个实施例中,过滤模块808还用于获取组播成员对应的报告报文,报告报文中携带了成员地址、源地址和过滤方式;根据源地址和成员地址按照过滤方式对多个组播成员的地址进行过滤,得到需要组播报文的成员地址清单。
在一个实施例中,过滤方式包括第一过滤方式和第二过滤方式,过滤模块808还用于获取与第一过滤方式对应的第一过滤清单,遍历第一过滤清单,判断第一过滤清单中的成员地址是否与源地址相同,若是,则将下一级组播成员复制到第一临时清单;获取与第二过滤方式对应的第二过滤清单,遍历第二过滤清单,判断第二过滤清单中的成员地址是否与源地址相同,若是,则将下一级组播成员对应的宿主机从第二临时清单中删除;将第一临时清单与第二临时清单合并,得到需要组播报文的成员地址清单。
在一个实施例中,路由树包括主机路由树和组播组路由树,计算模块806还用于遍历主机路由树的节点,获取主机路由树的增量变化消息;根据主机路由树的增量变化消息对组播组路由树进行更新计算。
在一个实施例中,如图9所示,该计算机设备还包括:接收模块812和响应模块814,其中:
接收模块812,用于接收终端通过组播应用发送的组播组操作指令,操作指令中携带了租户标识、过滤方式、过滤地址集合和组播地址。
响应模块814,用于根据租户标识、过滤方式、过滤地址集合以及组播地址新建组播组或者删除组播组。
在一个实施例中,如图10所示,提供了一种云端数据组播系统,包括:组播网关集群1002、中心控制器1004和宿主机1006,其中:
组播网关集群1002,用于获取云端的组播报文,组播报文中携带了租户标识、目的地址和源地址,根据租户标识和目的地址查找对应的组播组,组播组包括多个组播成员。
中心控制器1004,用于计算组播成员对应的路由;根据多个组播成员对应的路由生成路由树。
组播网关集群1002还用于获取数据平面使用的路由树的拓扑结构,根据路由树拓扑结构将多个组播成员对应的路由树写入数据平面使用的路由树数据结构中;获取组播成员对应的成员地址,根据源地址和成员地址进行地址过滤,得到需要组播报文的成员地址清单;将组播报文进行封装,根据成员地址清单和路由树将封装后的组播报文发送至与成员地址对应的宿主机。
宿主机1006,用于接收封装后的组播报文,并将封装后的组播报文传递给需要组播报文的组播成员。
在一个实施例中,路由树包括主机路由树和组播组路由树,中心控制器1004还用于遍历主机路由树的节点,获取主机路由树的增量变化消息;根据主机路由树的增量变化消息对组播组路由树进行更新计算。
在一个实施例中,组播网关集群1002还用于获取组播成员对应的报告报文,报告报文中携带了成员地址、源地址和过滤方式;根据源地址和成员地址按照源地址过滤方式对多个组播成员的地址进行过滤,得到需要组播报文的成员地址清单。
在一个实施例中,组播网关集群1002还用于获取与第一过滤方式对应的第一过滤清单,遍历第一过滤清单,判断第一过滤清单中的成员地址是否与源地址相同,若是,则将下一级组播成员复制到第一临时清单;获取与第二过滤方式对应的第二过滤清单,遍历第二过滤清单,判断第二过滤清单中的成员地址是否与源地址相同,若是,则将下一级组播成员对应的宿主机从第二临时清单中删除;将第一临时清单与第二临时清单合并,得到需要组播报文的成员地址清单。
在一个实施例中,一种计算机设备,包括存储器及处理器,存储器中储存有计算机可读指令,指令被处理器执行时,使得处理器执行以下步骤:获取组播报文,组播报文中携带了租户标识、目的地址和源地址;根据租户标识和目的地址查找对应的组播组,组播组包括多个组播成员;获取组播成员对应的路由,根据多个组播成员对应的路由生成路由树;获取组播成员对应的成员地址,根据源地址和成员地址进行地址过滤,得到需要组播报文的成员地址清单;及将组播报文进行封装,根据成员地址清单和路由树将封装后的组播报文投递给需要组播报文的组播成员。
在一个实施例中,处理器还用于执行:获取多个组播成员返回的报告报文;根据报告报文计算组播成员对应的路由;根据多个组播成员对应的路由生成路由树;及获取数据平面使用的路由树的拓扑结构,根据路由树拓扑结构将多个组播成员对应的路由树写入数据平面使用的路由树数据结构中。
在一个实施例中,路由树包括主机路由树和组播组路由树,处理器还用于执行:遍历主机路由树的节点,获取主机路由树的增量变化消息;及根据主机路由树的增量变化消息对组播组路由树进行更新。
在一个实施例中,处理器还用于执行:获取数据平面使用的路由树拓扑结构,根据路由树拓扑结构将主机路由树和组播组路由树写入数据平面使用的路由树数据结构中;读取更新后的组播组路由树;及将更新后的组播组路由树写入数据平面使用的路由树数据结构中,完成一次路由更新。
在一个实施例中,处理器还用于执行:获取组播成员对应的报告报文,报告报文中携带了成员地址、源地址和过滤方式;及根据源地址和成员地址按照过滤方式对多个组播成员的地址进行过滤,得到需要组播报文的成员地址清单。
在一个实施例中,过滤方式包括第一过滤方式和第二过滤方式,处理器还用于执行:获取与第一过滤方式对应的第一过滤清单,遍历第一过滤清单,判断第一过滤清单中的成员地址是否与源地址相同,若是,则将下一级组播成员复制到第一临时清单;获取与第二过滤方式对应的第二过滤清单,遍历第二过滤清单,判断第二过滤清单中的成员地址是否与源地址相同,若是,则将下一级组播成员对应的宿主机从第二临时清单中删除;及将第一临时清单与第二临时清单合并,得到需要组播报文的成员地址清单。
在一个实施例中,处理器还用于执行:接收终端发送的组播组操作指令,操作指令中携带了租户标识、过滤方式、过滤地址集合和组播地址;及根据租户标识、过滤方式、过滤地址集合以及组播地址新建组播组或者删除组播组。
在一个实施例中,一个或多个存储有计算机可读指令的计算机可读非易失性存储介质,计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行以下步骤:获取组播报文,组播报文中携带了租户标识、目的地址和源地址;根据租户标识和目的地址查找对应的组播组,组播组包括多个组播成员;获取组播成员对应的路由,根据多个组播成员对应的路由生成路由树;获取组播成员对应的成员地址,根据源地址和成员地址进行地址过滤,得到需要组播报文的成员地址清单;及将组播报文进行封装,根据成员地址清单和路由树将封装后的组播报文投递给需要组播报文的组播成员。
在一个实施例中,一个或多个处理器还用于执行:获取多个组播成员返回的报告报文;根据报告报文计算组播成员对应的路由;根据多个组播成员对应的路由生成路由树;及获取数据平面使用的路由树的拓扑结构,根据路由树拓扑结构将多个组播成员对应的路由树写入数据平面使用的路由树数据结构中。
在一个实施例中,路由树包括主机路由树和组播组路由树,一个或多个处理器还用于执行:遍历主机路由树的节点,获取主机路由树的增量变化消息;及根据主机路由树的增量变化消息对组播组路由树进行更新。
在一个实施例中,一个或多个处理器还用于执行:获取数据平面使用的路由树拓扑结构,根据路由树拓扑结构将主机路由树和组播组路由树写入数据平面使用的路由树数据结构中;读取更新后的组播组路由树;及将更新后的组播组路由树写入数据平面使用的路由树数据结构中,完成一次路由更新。
在一个实施例中,一个或多个处理器还用于执行:获取组播成员对应的报告报文,报告报文中携带了成员地址、源地址和过滤方式;及根据源地址和成员地址按照过滤方式对多个组播成员的地址进行过滤,得到需要组播报文的成员地址清单。
在一个实施例中,过滤方式包括第一过滤方式和第二过滤方式,一个或多个处理器还用于执行:获取与第一过滤方式对应的第一过滤清单,遍历第一过滤清单,判断第一过滤清单中的成员地址是否与源地址相同,若是,则将下一级组播成员复制到第一临时清单;获取与第二过滤方式对应的第二过滤清单,遍历第二过滤清单,判断第二过滤清单中的成员地址是否与源地址相同,若是,则将下一级组播成员对应的宿主机从第二临时清单中删除;及将第一临时清单与第二临时清单合并,得到需要组播报文的成员地址清单。
在一个实施例中,一个或多个处理器还用于执行:接收终端发送的组播组操作指令,操作指令中携带了租户标识、过滤方式、过滤地址集合和组播地址;及根据租户标识、过滤方式、过滤地址集合以及组播地址新建组播组或者删除组播组。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的程序可存储于一非易失性计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,存储介质可为磁碟、光盘、只读存储记忆体(Read-Only Memory,ROM)等。
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对本申请专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。

Claims (21)

  1. 一种云端数据组播方法,包括:
    获取组播报文,所述组播报文中携带了租户标识、目的地址和源地址;
    根据所述租户标识和目的地址查找对应的组播组,所述组播组包括多个组播成员;
    获取所述组播成员对应的路由,根据多个组播成员对应的路由生成路由树;
    获取所述组播成员对应的成员地址,根据所述源地址和成员地址进行地址过滤,得到需要所述组播报文的成员地址清单;及
    将所述组播报文进行封装,根据所述成员地址清单和所述路由树将封装后的组播报文投递给需要所述组播报文的组播成员。
  2. 根据权利要求1所述的方法,其特征在于,所述获取所述组播成员对应的路由,根据多个组播成员对应的路由生成路由树包括:
    获取多个组播成员返回的报告报文;
    根据所述报告报文计算组播成员对应的路由;
    根据多个组播成员对应的路由生成路由树;及
    获取数据平面使用的路由树的拓扑结构,根据所述路由树拓扑结构将所述多个组播成员对应的路由树写入数据平面使用的路由树数据结构中。
  3. 根据权利要求1所述的方法,其特征在于,所述路由树包括主机路由树和组播组路由树,在所述获取所述组播成员对应的路由,根据多个组播成员对应的路由生成路由树之后,所述方法还包括:
    遍历主机路由树的节点,获取主机路由树的增量变化消息;及
    根据所述主机路由树的增量变化消息对组播组路由树进行更新。
  4. 根据权利要求3所述的方法,其特征在于,还包括:
    获取数据平面使用的路由树拓扑结构,根据所述路由树拓扑结构将所述主机路由树和组播组路由树写入数据平面使用的路由树数据结构中;
    读取更新后的组播组路由树;及
    将更新后的组播组路由树写入数据平面使用的路由树数据结构中,完成一次路由更新。
  5. 根据权利要求1所述的方法,其特征在于,所述根据所述源地址和成员地址进行地址过滤,得到需要所述组播报文的成员地址清单包括:
    获取所述组播成员对应的报告报文,所述报告报文中携带了成员地址、源地址和过滤方式;及
    根据所述源地址和成员地址按照所述过滤方式对多个组播成员的地址进行过滤,得到需要所述组播报文的成员地址清单。
  6. 根据权利要求5所述的方法,其特征在于,所述过滤方式包括第一过滤方式和第二过滤方式,所述根据所述源地址和成员地址按照所述过滤方式对多个组播成员的地址进行过滤,得到需要所述组播报文的成员地址清单包括:
    获取与所述第一过滤方式对应的第一过滤清单,遍历所述第一过滤清单,判断所述第一过滤清单中的成员地址是否与源地址相同,若是,则将下一级组播成员复制到第一临时清单;
    获取与所述第二过滤方式对应的第二过滤清单,遍历所述第二过滤清单,判断所述第二过滤清单中的成员地址是否与源地址相同,若是,则将下一级组播成员对应的宿主机从第二临时清单中删除;及
    将所述第一临时清单与第二临时清单合并,得到需要所述组播报文的成员地址清单。
  7. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    接收终端发送的组播组操作指令,操作指令中携带了租户标识、过滤方式、过滤地址集合和组播地址;及
    根据所述租户标识、过滤方式、过滤地址集合以及组播地址新建组播组或者删除组播组。
  8. 一种云端数据组播系统,包括:
    组播网关集群,用于获取云端的组播报文,所述组播报文中携带了租户标识、目的地址和源地址,根据所述租户标识和目的地址查找对应的组播组,所述组播组包括多个组播成员;
    中心控制器,用于计算所述组播成员对应的路由,根据多个组播成员对应的路由生成路由树;
    所述组播网关集群还用于获取所述组播成员对应的成员地址,根据所述源地址和成员地址进行地址过滤,得到需要所述组播报文的成员地址清单,将所述组播报文进行封装,根据所述成员地址清单和所述路由树将封装后的组播报文发送至与所述成员地址对应的宿主机;
    所述宿主机,用于接收所述封装后的组播报文,并将所述封装后的组播报文传递给需要所述组播报文的组播成员。
  9. 根据权利要求8所述的系统,其特征在于,所述宿主机还用于获取组播成员对应的报告报文,将所述报告报文发送至中心控制器;所述中心控制器还用于根据所述报告报文计算组播成员对应的路由,根据多个组播成员对应的路由生成路由树;所述组播网关集群还用于获取数据平面使用的路由树的拓扑结构,根据所述路由树拓扑结构将所述多个组播成员对应的路由树写入数据平面使用的路由树数据结构中。
  10. 根据权利要求8所述的系统,其特征在于,所述路由树包括主机路由树和组播组路由树,所述中心控制器还用于遍历主机路由树的节点,获取主机路由树的增量变化消息;根据所述主机路由树的增量变化消息对组播组路由树进行更新。
  11. 根据权利要求10所述的系统,其特征在于,所述组播网关集群还用于获取数据平面使用的路由树拓扑结构,根据所述路由树拓扑结构将所述主机路由树和组播组路由树写入数据平面使用的路由树数据结构中,读取更新后的组播组路由树,将更新后的组播组路由树写入数据平面使用的路由树数据结构中,完成一次路由更新。
  12. 根据权利要求8所述的系统,其特征在于,所述组播网关集群还用于获取所述组播成员对应的报告报文,所述报告报文中携带了成员地址、源地址和过滤方式;根据所述源地址和成员地址按照源地址过滤方式对多个组播成员的地址进行过滤,得到需要所述组播报文的成员地址清单。
  13. 根据权利要求12所述的系统,其特征在于,所述组播网关集群还用于获取与所述第一过滤方式对应的第一过滤清单,遍历所述第一过滤清单,判断所述第一过滤清单中的成员地址是否与源地址相同,若是,则将下一级组播成员复制到第一临时清单;获取与所述第二过滤方式对应的第二过滤清单,遍历所述第二过滤清单,判断所述第二过滤清单中的成员地址是否与源地址相同,若是,则将下一级组播成员对应的宿主机从第二临时清单中删除;将所述第一临时清单与第二临时清单合并,得到需要所述组播报文的成员地址清单。
  14. 根据权利要求8所述的系统,其特征在于,所述组播网关集群还用于接收终端发送的组播组操作指令,操作指令中携带了租户标识、过滤方式、过滤地址集合和组播地址;所述中心控制器还用于根据所述租户标识、过滤方式、过滤地址集合以及组播地址新建组播组或者删除组播组。
  15. 一种计算机设备,包括存储器及处理器,所述存储器中储存有计算机可读指令,所述指令被所述处理器执行时,使得所述处理器执行以下步骤:
    获取组播报文,所述组播报文中携带了租户标识、目的地址和源地址;
    根据所述租户标识和目的地址查找对应的组播组,所述组播组包括多个组播成员;
    获取所述组播成员对应的路由,根据多个组播成员对应的路由生成路由树;
    获取所述组播成员对应的成员地址,根据所述源地址和成员地址进行地址过滤,得到需要所述组播报文的成员地址清单;及
    将所述组播报文进行封装,根据所述成员地址清单和所述路由树将封装后的组播报文投递给需要所述组播报文的组播成员。
  16. 根据权利要求15所述的计算机设备,其特征在于,所述处理器还用于执行:
    获取多个组播成员返回的报告报文;
    根据所述报告报文计算组播成员对应的路由;
    根据多个组播成员对应的路由生成路由树;及
    获取数据平面使用的路由树的拓扑结构,根据所述路由树拓扑结构将所述多个组播成员对应的路由树写入数据平面使用的路由树数据结构中。
  17. 根据权利要求15所述的计算机设备,其特征在于,所述路由树包括主机路由树和组播组路由树,所述处理器还用于执行:
    遍历主机路由树的节点,获取主机路由树的增量变化消息;及
    根据所述主机路由树的增量变化消息对组播组路由树进行更新。
  18. 根据权利要求17所述的计算机设备,其特征在于,所述处理器还用于执行:
    获取数据平面使用的路由树拓扑结构,根据所述路由树拓扑结构将所述主机路由树和组播组路由树写入数据平面使用的路由树数据结构中;
    读取更新后的组播组路由树;及
    将更新后的组播组路由树写入数据平面使用的路由树数据结构中,完成一次路由更新。
  19. 根据权利要求15所述的计算机设备,其特征在于,所述处理器还用于执行:
    获取所述组播成员对应的报告报文,所述报告报文中携带了成员地址、源地址和过滤方式;及
    根据所述源地址和成员地址按照所述过滤方式对多个组播成员的地址进行过滤,得到需要所述组播报文的成员地址清单。
  20. 根据权利要求19所述计算机设备,其特征在于,所述过滤方式包括第一过滤方式和第二过滤方式,所述处理器还用于执行:
    获取与所述第一过滤方式对应的第一过滤清单,遍历所述第一过滤清单,判断所述第一过滤清单中的成员地址是否与源地址相同,若是,则将下一级组播成员复制到第一临时清单;
    获取与所述第二过滤方式对应的第二过滤清单,遍历所述第二过滤清单,判断所述第二过滤清单中的成员地址是否与源地址相同,若是,则将下一级组播成员对应的宿主机从第二临时清单中删除;及
    将所述第一临时清单与第二临时清单合并,得到需要所述组播报文的成员地址清单。
  21. 根据权利要求15所述的计算机设备,其特征在于,所述处理器还用于执行:
    接收终端发送的组播组操作指令,操作指令中携带了租户标识、过滤方式、过滤地址集合和组播地址;及
    根据所述租户标识、过滤方式、过滤地址集合以及组播地址新建组播组或者删除组播组。
PCT/CN2017/092432 2016-07-13 2017-07-11 云端数据组播方法、系统和计算机设备 WO2018010626A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP17826962.7A EP3487131B1 (en) 2016-07-13 2017-07-11 Cloud-end data multicast method and system
US16/240,252 US10958723B2 (en) 2016-07-13 2019-01-04 Cloud-end data multicast method and system, and computer device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610552915.2 2016-07-13
CN201610552915.2A CN106209688B (zh) 2016-07-13 2016-07-13 云端数据组播方法、装置和系统

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/240,252 Continuation US10958723B2 (en) 2016-07-13 2019-01-04 Cloud-end data multicast method and system, and computer device

Publications (1)

Publication Number Publication Date
WO2018010626A1 true WO2018010626A1 (zh) 2018-01-18

Family

ID=57477330

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/092432 WO2018010626A1 (zh) 2016-07-13 2017-07-11 云端数据组播方法、系统和计算机设备

Country Status (4)

Country Link
US (1) US10958723B2 (zh)
EP (1) EP3487131B1 (zh)
CN (1) CN106209688B (zh)
WO (1) WO2018010626A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11095558B2 (en) 2018-12-28 2021-08-17 Alibaba Group Holding Limited ASIC for routing a packet

Families Citing this family (11)

Publication number Priority date Publication date Assignee Title
CN106209688B (zh) 2016-07-13 2019-01-08 腾讯科技(深圳)有限公司 云端数据组播方法、装置和系统
CN113542128B (zh) 2018-10-12 2023-03-31 华为技术有限公司 一种发送路由信息的方法和装置
CN109756412B (zh) * 2018-12-24 2020-12-25 华为技术有限公司 一种数据报文转发方法以及设备
US11470041B2 (en) 2019-06-20 2022-10-11 Disney Enterprises, Inc. Software defined network orchestration to manage media flows for broadcast with public cloud networks
DE102019209342A1 (de) * 2019-06-27 2020-12-31 Siemens Mobility GmbH Verfahren und Übertragungsvorrichtung zur Datenübertragung zwischen zwei oder mehreren Netzwerken
CN110391978A (zh) * 2019-07-17 2019-10-29 国联证券股份有限公司 一种基于paas云平台的组播路由系统及方法
JP7576712B2 (ja) * 2020-10-23 2024-10-31 ヂェンヂョウ・シーネット・テクノロジーズ・カンパニー・リミテッド 識別子解析ルーティングに基づくマルチキャストシステム及びマルチキャスト方法
US11665094B2 (en) * 2020-11-30 2023-05-30 Vmware, Inc. Collecting, processing, and distributing telemetry data
CN114286127B (zh) * 2022-03-08 2022-05-27 浙江微能科技有限公司 一种分布式人工智能分析方法及装置
CN114828153B (zh) * 2022-04-22 2024-07-05 中科润物科技(南京)有限公司 基于组播的软件定义无人机自组网路由信息高效传送方法
US12107857B2 (en) * 2023-01-30 2024-10-01 Hewlett Packard Enterprise Development Lp Multicast traffic segmentation in an overlay network

Citations (5)

Publication number Priority date Publication date Assignee Title
US20110299537A1 (en) * 2010-06-04 2011-12-08 Nakul Pratap Saraiya Method and system of scaling a cloud computing network
CN103795636A (zh) * 2012-11-02 2014-05-14 华为技术有限公司 组播处理方法、装置及系统
CN104935443A (zh) * 2014-03-17 2015-09-23 中兴通讯股份有限公司 组播数据处理方法、装置、系统、发送设备及接收客户端
CN104980287A (zh) * 2014-04-04 2015-10-14 华为技术有限公司 组播组分配方法及组播管理节点
CN106209688A (zh) * 2016-07-13 2016-12-07 腾讯科技(深圳)有限公司 云端数据组播方法、装置和系统

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
US7707300B1 (en) * 2001-04-13 2010-04-27 Cisco Technology, Inc. Methods and apparatus for transmitting information in a network
US20100046516A1 (en) * 2007-06-26 2010-02-25 Media Patents, S.L. Methods and Devices for Managing Multicast Traffic
JP5935418B2 (ja) * 2012-03-15 2016-06-15 富士通株式会社 マルチキャストアドレスの管理のための情報処理装置、情報処理方法及びプログラム、中継装置、中継装置のための情報処理方法及びプログラム、並びに情報処理システム
US9686099B2 (en) * 2012-04-27 2017-06-20 Hewlett Packard Enterprise Development Lp Updating virtual network maps
JP2014007681A (ja) * 2012-06-27 2014-01-16 Hitachi Ltd ネットワークシステム、および、その管理装置、そのスイッチ
US8831000B2 (en) * 2012-10-10 2014-09-09 Telefonaktiebolaget L M Ericsson (Publ) IP multicast service join process for MPLS-based virtual private cloud networking
US9350558B2 (en) * 2013-01-09 2016-05-24 Dell Products L.P. Systems and methods for providing multicast routing in an overlay network
CN108632147B (zh) * 2013-06-29 2022-05-13 华为技术有限公司 报文组播的处理方法和设备
CN104954265B (zh) * 2014-03-25 2018-06-15 华为技术有限公司 发送组播报文的方法及交换机

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
US20110299537A1 (en) * 2010-06-04 2011-12-08 Nakul Pratap Saraiya Method and system of scaling a cloud computing network
CN103795636A (zh) * 2012-11-02 2014-05-14 华为技术有限公司 组播处理方法、装置及系统
CN104935443A (zh) * 2014-03-17 2015-09-23 中兴通讯股份有限公司 组播数据处理方法、装置、系统、发送设备及接收客户端
CN104980287A (zh) * 2014-04-04 2015-10-14 华为技术有限公司 组播组分配方法及组播管理节点
CN106209688A (zh) * 2016-07-13 2016-12-07 腾讯科技(深圳)有限公司 云端数据组播方法、装置和系统

Non-Patent Citations (1)

Title
See also references of EP3487131A4 *

Cited By (1)

Publication number Priority date Publication date Assignee Title
US11095558B2 (en) 2018-12-28 2021-08-17 Alibaba Group Holding Limited ASIC for routing a packet

Also Published As

Publication number Publication date
US20190141124A1 (en) 2019-05-09
US10958723B2 (en) 2021-03-23
EP3487131A1 (en) 2019-05-22
CN106209688A (zh) 2016-12-07
EP3487131B1 (en) 2023-06-14
CN106209688B (zh) 2019-01-08
EP3487131A4 (en) 2019-06-05

Similar Documents

Publication Publication Date Title
WO2018010626A1 (zh) 云端数据组播方法、系统和计算机设备
US11398921B2 (en) SDN facilitated multicast in data center
US9698995B2 (en) Systems and methods for providing multicast routing in an overlay network
RU2595540C9 (ru) Базовые контроллеры для преобразования универсальных потоков
US9374270B2 (en) Multicast service in virtual networks
WO2019141111A1 (zh) 通信方法和通信装置
US10887119B2 (en) Multicasting within distributed control plane of a switch
EP2843906B1 (en) Method, apparatus, and system for data transmission
US9871721B2 (en) Multicasting a data message in a multi-site network
US7936702B2 (en) Interdomain bi-directional protocol independent multicast
US8131833B2 (en) Managing communication between nodes in a virtual network
CN106953848B (zh) 一种基于ForCES的软件定义网络实现方法
US8855118B2 (en) Source discovery for non-flooding multicast using openflow
US10379890B1 (en) Synchronized cache of an operational state of distributed software system
CN111010329B (zh) 一种报文传输方法及装置
WO2019062515A1 (zh) 一种组播转发方法及组播路由器
WO2024093064A1 (zh) 一种大规模多模态网络中标识管理及优化转发方法和装置
US10397340B2 (en) Multicast migration
US10608869B2 (en) Handling control-plane connectivity loss in virtualized computing environments
US8855015B2 (en) Techniques for generic pruning in a trill network
US8606890B2 (en) Managing communication between nodes in a virtual network
WO2017173989A1 (zh) 组播的分发处理方法、装置、系统及存储介质
CN116996585A (zh) 组播通信方法、装置、系统、计算机设备和存储介质
WO2024108493A1 (zh) 基于sdn与ndn的虚实结合动态流量调度方法及装置
CN113595912B (zh) 5GLAN中基于IPv6扩展报头的一对多通信方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17826962

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2017826962

Country of ref document: EP

Effective date: 20190213