CN114500171B - Network system and message transmission method - Google Patents


Info

Publication number
CN114500171B
Authority
CN
China
Prior art keywords
message
bms
proxy
node
address
Prior art date
Legal status
Active
Application number
CN202111645930.9A
Other languages
Chinese (zh)
Other versions
CN114500171A (en)
Inventor
杜鹏
孙会首
李明达
任超
陈镇
黄小山
Current Assignee
Shuguang Cloud Computing Group Co ltd
Original Assignee
Shuguang Cloud Computing Group Co ltd
Priority date
Filing date
Publication date
Application filed by Shuguang Cloud Computing Group Co ltd
Priority to CN202111645930.9A
Publication of CN114500171A
Application granted
Publication of CN114500171B

Classifications

    • H04L12/4641 Virtual LANs, VLANs, e.g. virtual private networks [VPN]
    • H04L12/4633 Interconnection of networks using encapsulation techniques, e.g. tunneling
    • H04L12/4645 Details on frame tagging
    • H04L12/66 Arrangements for connecting between networks having differing types of switching systems, e.g. gateways
    • H04L49/10 Packet switching elements characterised by the switching fabric construction


Abstract

The application relates to a network system and a message transmission method. The system comprises a plurality of switch groups, where each switch group comprises a bare metal server (BMS), a switch, at least one computing node deployed on a cloud platform, and a proxy node of the BMS among the at least one computing node, the proxy node holding a proxy flow table of the BMS. The switch is configured to, after receiving a first message sent by the BMS, add the PVID corresponding to the BMS to the first message to obtain a second message, and broadcast the second message. The proxy node is configured to, after receiving the second message, forward the first message according to the PVID of the second message and the proxy flow table of the BMS. By adopting the system, the stability of the BMS service can be improved.

Description

Network system and message transmission method
Technical Field
The present disclosure relates to the field of communications technologies, and in particular, to a network system and a message transmission method.
Background
With the rapid development of science and technology, users in some industries, such as big data and cloud databases, want to exclusively occupy physical machines in the cloud to ensure high performance and stability of their services. A BMS (Bare Metal Server) is a real physical server that is added to the resource pool of a cloud platform so that users can apply for, configure, and use it, which satisfies the demand for exclusively occupying a physical machine in the cloud. However, how to implement network interworking between the BMS and cloud hosts is a problem to be solved.
In a related scheme, a hardware switch is used to encapsulate the traffic entering and leaving the BMS with VXLAN (Virtual Extensible Local Area Network), so that the BMS and other cloud hosts can communicate with each other.
However, because the encapsulation and decapsulation operations are performed by a hardware network device, this scheme relies heavily on the hardware switch, and communication between the BMS and other cloud hosts cannot be realized when the hardware switch is unavailable due to a fault.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a network system and a message transmission method that can reduce the dependency on a hardware switch.
In a first aspect, the present application provides a network system, where the system includes a plurality of switch groups, each switch group including a bare metal server (BMS), a switch, at least one computing node deployed on a cloud platform, and a proxy node of the BMS among the at least one computing node, where the proxy node holds a proxy flow table of the BMS;
the switch is configured to, after receiving a first message sent by the BMS, add the port-based VLAN identifier (PVID) corresponding to the BMS to the first message to obtain a second message, and broadcast the second message;
and the proxy node is configured to, after receiving the second message, forward the first message according to the PVID of the second message and the proxy flow table of the BMS.
Based on the network system provided by the embodiment of the application, message forwarding can be performed by proxy nodes deployed on the cloud platform, so that message forwarding of the BMS no longer depends on a hardware switch and the service stability of the BMS can be improved. In addition, in the embodiment of the application, a distributed gateway is realized through the proxy node of each BMS: messages do not need to pass through a single centralized network node, and the forwarding load is shared among the proxy nodes, which reduces the pressure on any single network node, optimizes the forwarding path, and improves forwarding efficiency.
In one embodiment, the interfaces of the switch are in one-to-one correspondence with the BMSs, and the interfaces of the switch are provided with PVIDs of the virtual local area network VLANs to which the corresponding BMSs belong.
According to the network system provided by the embodiment of the application, each BMS can monopolize one VLAN, and therefore the purpose of tenant isolation can be achieved.
In one embodiment, when the destination IP address of the first message and the IP address of the BMS are located in the same network segment, the proxy node is further configured to remove the PVID of the second message, obtain the first message, and forward the first message according to a proxy flow table of the BMS.
Based on the network system provided by the embodiment of the application, when the BMS interacts with a target device in the same network segment, the proxy node can remove the PVID of the second message to obtain the first message and then forward the first message according to the proxy flow table of the BMS, so that dependence on hardware devices is reduced and the stability of the BMS service can be improved.
In one embodiment, when the computing node where the target device corresponding to the destination IP address of the first message is located is the proxy node of the BMS, the proxy node is further configured to, after the first message is obtained, send the first message to the target device according to the destination MAC address in the first message.
Based on the network system provided by the embodiment of the application, when the BMS interacts with the target equipment of the same network segment and the same computing node, the proxy node can directly send the first message to the target equipment according to the MAC address of the target equipment after removing the PVID of the second message, so that the dependence on hardware equipment is reduced, and the stability of BMS service can be improved.
In one embodiment, when the computing node where the target device corresponding to the destination IP address of the first message is located and the proxy node of the BMS are different computing nodes, the proxy node is further configured to determine the tenant identifier VNI of the BMS according to the PVID of the second message, encapsulate the first message according to the VNI and the proxy flow table of the BMS after the first message is obtained, and send the encapsulated first message to the computing node corresponding to the target device according to the destination IP address of the encapsulated first message.
Based on the network system provided by the embodiment of the application, when the BMS interacts with the target equipment of the same network segment and different computing nodes, the proxy node can remove the PVID of the second message, and then the VXLAN encapsulation and forwarding are performed according to the proxy flow table of the BMS, so that the dependence on hardware equipment is reduced, and the stability of BMS service can be improved.
In one embodiment, when the destination IP address of the first message and the IP address of the BMS are located in different network segments, the proxy node is further configured to, after removing the PVID of the second message to obtain the first message, perform three-layer gateway conversion on the first message according to the proxy flow table of the BMS to obtain a third message, and forward the third message according to the proxy flow table of the BMS.
Based on the network system provided by the embodiment of the application, when the BMS performs message interaction with target devices of different network segments, after the proxy node removes PVID of the second message, three-layer gateway conversion is performed on the first message to obtain a third message, and the third message is forwarded according to the proxy flow table of the BMS, so that dependence on hardware devices is reduced, and stability of BMS service can be improved.
In one embodiment, when the computing node where the target device corresponding to the destination IP address of the first message is located is the proxy node of the BMS, the proxy node is further configured to, after the third message is obtained, send the third message to the target device according to the destination MAC address in the third message.
Based on the network system provided by the embodiment of the application, when the BMS interacts with a target device in a different network segment but on the same computing node, the proxy node removes the PVID of the second message, performs three-layer gateway conversion on the first message, and forwards the converted message according to the proxy flow table of the BMS, so that dependence on hardware devices is reduced and the stability of the BMS service can be improved.
In one embodiment, when the computing node where the target device corresponding to the destination IP address of the first message is located and the proxy node of the BMS are different computing nodes, the proxy node is further configured to determine the VNI of the BMS according to the PVID of the second message, encapsulate the third message according to the VNI and the proxy flow table of the BMS after the third message is obtained, and then send the encapsulated third message to the computing node of the target device according to the destination IP address of the encapsulated third message.
Based on the network system provided by the embodiment of the application, when the BMS performs message interaction with target devices of different network segments and different computing nodes, the proxy node can remove PVID of the second message, and after three-layer gateway conversion, VXLAN encapsulation and forwarding are performed according to the proxy flow table of the BMS, so that dependence on hardware devices is reduced, and the stability of BMS service can be improved.
In one embodiment, the proxy node is further configured to, after receiving a fourth message sent to the BMS, add the PVID corresponding to the BMS to the fourth message to obtain a fifth message, and send the fifth message to the switch;
and the switch is further used for sending the fourth message to the BMS according to the PVID corresponding to the BMS after receiving the fifth message.
Based on the network system provided by the embodiment of the application, the proxy node can add the PVID corresponding to the BMS in the message sent to the BMS, so that the switch can forward the message to the BMS based on the PVID, the dependence on hardware equipment is reduced, and the stability of BMS service can be improved.
In one embodiment, the system further comprises a management node, and after detecting a fault of the proxy node of the BMS, the management node determines a new proxy node to be allocated to the BMS and migrates the BMS to the new proxy node.
Based on the network system provided by the embodiment of the application, when the proxy node of the BMS fails, the proxy node can be redistributed, communication between the BMS and the cloud host is not affected, and stability of BMS service can be guaranteed.
In a second aspect, the present application further provides a message transmission method applied to a proxy node in a network system, where the system includes a plurality of switch groups, each switch group including a bare metal server (BMS), a switch, at least one computing node deployed on a cloud platform, and a proxy node of the BMS among the at least one computing node, where the proxy node holds a proxy flow table of the BMS;
the method comprises the following steps:
the switch receives a first message sent by the BMS, adds the PVID corresponding to the BMS to the first message to obtain a second message, and broadcasts the second message;
and the proxy node receives the second message and forwards the first message according to the PVID of the second message and the proxy flow table of the BMS.
The network system comprises a plurality of switch groups, where each switch group comprises a bare metal server (BMS), a switch, at least one computing node deployed on a cloud platform, and a proxy node of the BMS among the at least one computing node, the proxy node holding a proxy flow table of the BMS. The switch is configured to, after receiving a first message sent by the BMS, add the PVID corresponding to the BMS to the first message to obtain a second message, and broadcast the second message. The proxy node is configured to, after receiving the second message, forward the first message according to the PVID of the second message and the proxy flow table of the BMS. Based on the network system and the message transmission method provided by the embodiment of the application, message forwarding can be performed by proxy nodes deployed on the cloud platform, so that message forwarding of the BMS no longer depends on a hardware switch and the service stability of the BMS can be improved. In addition, in the embodiment of the application, a distributed gateway is realized through the proxy node of each BMS: messages do not need to pass through a single centralized network node, and the forwarding load is shared among the proxy nodes, which reduces the pressure on any single network node, optimizes the forwarding path, and improves forwarding efficiency.
Drawings
FIG. 1 is a block diagram of a network system in one embodiment;
FIG. 2 is a schematic diagram of a network system in one embodiment;
FIG. 3 is a schematic diagram of a network system in one embodiment;
FIG. 4 is a block diagram of a network system in one embodiment;
FIG. 5 is a flow chart of a message transmission method in one embodiment;
FIG. 6 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The network system provided in the embodiment of the present application may be as shown in FIG. 1. The network system may include a plurality of switch groups 10 (only 1 switch group is shown in FIG. 1), where each switch group includes a bare metal server BMS 102, a switch 104, at least one computing node 106 deployed on a cloud platform (only 2 computing nodes are shown in FIG. 1), and a proxy node of the BMS among the at least one computing node 106 (computing node 1 in FIG. 1), where the proxy node holds a proxy flow table of the BMS;
the switch 104 is configured to, after receiving a first message sent by the BMS 102, add the PVID corresponding to the BMS 102 to the first message to obtain a second message, and broadcast the second message;
the proxy node 106 is configured to forward the first message according to the PVID of the second message and the proxy flow table of the BMS 102 after receiving the second message.
In this embodiment, the PVID (Port-based VLAN ID) of the BMS 102 may be set in the switch 104 in advance. For example, the MAC address of the BMS 102 allowed on each interface of the switch may be preset and stored in a database, and the PVID of the VLAN (Virtual Local Area Network) corresponding to the BMS 102 may be set on that switch interface. When the BMS is registered, information such as its MAC address, PVID, and tenant identifier VNI is stored in the database. When the proxy flow table is issued to the proxy node of the BMS, this information is used to look up the PVID and VNI corresponding to the BMS by its MAC address, as sketched below.
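As an illustrative sketch only (the record layout, values, and function names below are hypothetical and not taken from the patent), the registration record and the lookup used when issuing the proxy flow table could look like this:

from dataclasses import dataclass

@dataclass
class BmsRecord:
    mac: str         # MAC address of the BMS service network card
    pvid: int        # port-based VLAN ID set on the switch interface
    vni: int         # tenant identifier used for tunnel encapsulation
    proxy_node: str  # name of the computing node acting as proxy

# hypothetical registry filled in when a BMS is registered
registry = {
    "fa:16:3e:00:00:01": BmsRecord("fa:16:3e:00:00:01", 30, 5001, "compute-01"),
}

def lookup_by_mac(mac: str) -> BmsRecord:
    """Resolve PVID and VNI from the BMS MAC when issuing its proxy flow table."""
    return registry[mac]

print(lookup_by_mac("fa:16:3e:00:00:01").pvid)   # -> 30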
When the switch receives a message that carries a VLAN identifier and that identifier is the same as the PVID of the switch interface, the VLAN identifier is removed from the message and the message is forwarded through that interface. When the switch receives the first message sent by the BMS 102, the first message does not carry a VLAN identifier, so the PVID corresponding to the interface is added to it to obtain the second message, and the second message is broadcast.
The switch 104 can connect a plurality of BMSs, one BMS per interface, so the number of interfaces determines how many BMSs a switch can connect. When a user registers a BMS, the switch group to which the BMS belongs may be designated, and the switch group is bound to certain proxy nodes. In this way, the PVIDs used within one switch group can be reused within other switch groups, so the number of BMSs can be extended without limit.
Illustratively, different switches have different configuration commands; one example is shown below:
system-view                 # enter the system configuration view
interface 10GE1/0/10        # enter the interface configuration view
port link-type access       # configure the interface as an access port
port default vlan 10        # configure the PVID of this interface
In this way, interface 10GE1/0/10 is configured with PVID vlan 10: a packet entering this interface without a VLAN identifier is tagged with vlan 10, and a packet carrying vlan 10 that leaves through this interface has the vlan 10 tag removed.
After receiving the second message, the proxy node 106 determines, according to the source MAC address of the second message, that it locally holds the proxy flow table of the BMS 102, so it can remove the PVID from the second message to recover the first message and forward the first message based on the proxy flow table of the BMS 102. The proxy flow table may be a flow table, issued to the proxy node of the BMS 102, that indicates the message forwarding paths toward the devices and cloud hosts that communicate with the BMS 102. When any other proxy node receives the second message, it discards the message because it does not locally hold the proxy flow table of the BMS 102.
That is, the switch 104 adds the PVID of the VLAN corresponding to the BMS 102 to the first message sent from the BMS 102 to obtain the second message and sends the second message to the proxy node 106 of the BMS 102. The proxy node 106 removes the PVID from the second message to obtain the first message, determines the next-hop device according to the destination IP address and destination MAC address of the first message and the proxy flow table of the BMS 102, and forwards the first message to that next-hop device, as sketched below.
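A minimal Python sketch of this forwarding decision (the data structures are hypothetical and are not the patent's actual OpenFlow flow-table format): the proxy node only processes frames whose VLAN tag matches a locally held proxy flow table, and any other node drops the broadcast.

from typing import Optional

# hypothetical: PVIDs for which this computing node holds a proxy flow table,
# mapping destination IP to (output port, destination MAC)
local_proxy_flow_tables = {30: {"192.168.0.5": ("vm1-tap", "fa:16:3e:aa:bb:01")}}

def handle_broadcast(frame: dict) -> Optional[dict]:
    """frame = {'vlan': int, 'dst_ip': str, 'dst_mac': str}; returns the stripped message or None."""
    table = local_proxy_flow_tables.get(frame["vlan"])
    if table is None:
        return None                      # not the proxy node of this BMS: drop the broadcast
    inner = dict(frame)
    inner.pop("vlan")                    # strip the PVID to recover the first message
    port, _mac = table.get(inner["dst_ip"], ("tunnel", None))
    inner["out_port"] = port             # forward according to the proxy flow table
    return inner

print(handle_broadcast({"vlan": 30, "dst_ip": "192.168.0.5", "dst_mac": "fa:16:3e:aa:bb:01"}))
print(handle_broadcast({"vlan": 31, "dst_ip": "192.168.0.5", "dst_mac": "fa:16:3e:aa:bb:01"}))  # dropped -> None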
The network system comprises a plurality of switch groups, where each switch group comprises a bare metal server (BMS), a switch, at least one computing node deployed on a cloud platform, and a proxy node of the BMS among the at least one computing node, the proxy node holding a proxy flow table of the BMS. The switch is configured to, after receiving a first message sent by the BMS, add the PVID corresponding to the BMS to the first message to obtain a second message, and broadcast the second message. The proxy node is configured to, after receiving the second message, forward the first message according to the PVID of the second message and the proxy flow table of the BMS. Based on the network system provided by the embodiment of the application, message forwarding can be performed by proxy nodes deployed on the cloud platform, so that message forwarding of the BMS no longer depends on a hardware switch and the service stability of the BMS can be improved. In addition, in the embodiment of the application, a distributed gateway is realized through the proxy node of each BMS: messages do not need to pass through a single centralized network node, and the forwarding load is shared among the proxy nodes, which reduces the pressure on any single network node, optimizes the forwarding path, and improves forwarding efficiency.
In one embodiment, the interfaces of the switch 104 are in one-to-one correspondence with the BMSs, and the interfaces of the switch are provided with PVIDs of the virtual local area network VLANs to which the corresponding BMSs belong.
In this embodiment of the present application, because the messages entering and leaving the BMS do not carry a VLAN identifier, the PVID corresponding to the BMS may be set on the switch interface connected to the BMS service network card, so that one BMS MAC address corresponds to one PVID and the messages of different BMSs can be distinguished. That is, in the embodiment of the present application, each BMS exclusively occupies one VLAN, which achieves tenant isolation.
Illustratively, referring to FIG. 2, the embodiment of the present application can solve the problem of MAC address collision (the same MAC address appearing on different interfaces of the switch) by setting different PVIDs on the switch. For example, assume BMS1 (PVID 11) and BMS2 (PVID 12) do not share the same proxy node. When BMS1 requests its gateway MAC address, its proxy node (e.g., computing node 1) answers, i.e., the gateway MAC address is sent to BMS1 through the switch interface connected to computing node 1 (e.g., 10GE1/0/5). Similarly, when BMS2, which is in the same network segment as BMS1, requests the gateway MAC address, its proxy node (e.g., computing node 2) answers, i.e., the gateway MAC address is sent to BMS2 through the switch interface connected to computing node 2 (e.g., 10GE1/0/6; the corresponding proxy node is not shown in FIG. 2). At this time, the switch learns the same gateway MAC address (aa00-0000-1000) from interfaces 10GE1/0/5 and 10GE1/0/6, but no address collision occurs because the VLANs of BMS1 and BMS2 are different.
The embodiment of the application can also solve the problem of overlapping addresses between tenants. For example, in FIG. 2 there are two BMSs with the same IP address 192.168.0.1; since the two BMSs correspond to different PVIDs, they are in different VLANs, and tenant isolation is achieved through the different PVIDs, as illustrated by the sketch below.
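A small sketch of why the identical gateway MAC learned on two interfaces does not collide: a forwarding database keyed by (VLAN, MAC) rather than MAC alone keeps the entries separate. The table structure is illustrative only; the gateway MAC and interface names come from the FIG. 2 example, while the PVID values 11 and 12 follow the example above.

# hypothetical forwarding database keyed by (VLAN, MAC) rather than MAC alone
fdb = {}

def learn(vlan: int, mac: str, port: str) -> None:
    fdb[(vlan, mac)] = port

learn(11, "aa00-0000-1000", "10GE1/0/5")   # gateway MAC answered for BMS1 (PVID 11)
learn(12, "aa00-0000-1000", "10GE1/0/6")   # same gateway MAC answered for BMS2 (PVID 12)

print(fdb[(11, "aa00-0000-1000")])  # -> 10GE1/0/5
print(fdb[(12, "aa00-0000-1000")])  # -> 10GE1/0/6, no collision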
In one embodiment, in the case that the destination IP address of the first message and the IP address of the BMS102 are located in the same network segment, the proxy node 106 is further configured to remove the PVID of the second message, obtain the first message, and forward the first message according to the proxy flow table of the BMS.
In this embodiment of the present application, when the destination IP address of the first message and the IP address of the BMS102 (i.e., the source IP address of the first message) are located in the same network segment, the proxy node 106 may remove the PVID of the second message, obtain the first message, and forward the first message according to the proxy flow table of the BMS.
Illustratively, when the BMS 102 requests the MAC address of a cloud host in the same network segment through ARP (Address Resolution Protocol), the proxy node 106 answers the ARP request because of VLAN isolation, and sends an ARP reply to the BMS 102 (the ARP reply contains the real MAC address of the cloud host, or the gateway MAC address of the cloud host, as shown in FIG. 2). After the BMS 102 performs layer-2 encapsulation on the first message according to the MAC address of the target device (or the gateway MAC address of the BMS 102) and sends it, the switch 104 adds the PVID corresponding to the BMS 102 to the first message to obtain the second message and sends it to the proxy node 106. The proxy node determines the next-hop device according to the proxy flow table of the BMS 102, removes the PVID from the second message to obtain the first message, and forwards the first message to the next-hop device.
For example, refer to FIG. 3. Assume the BMS 102 is BMS_1 and BMS_1 sends a first message to a virtual machine in the same network segment. In the first message, the source IP address is the BMS_1 IP address, the source MAC address is the BMS_1 MAC address, the destination IP address is the IP address of the target device, and the destination MAC address is the MAC address of the target device. The first message does not carry a VLAN identifier.
After receiving the first message, BMS switch 01 tags it with the PVID vlan 30 to obtain the second message, which reaches the Leaf01 switch through a trunk port on which vlan 30-34 are permitted. Because the second message carries the PVID vlan 30, the Leaf01 switch broadcasts it to all interfaces on which vlan 30 is permitted. That is, both computing node 01 and computing node 02 receive the message, but only the proxy node of BMS_1 (computing node 01) holds a proxy flow table that processes vlan 30, so computing node 01 processes the second message and computing node 02 discards it.
The OVS bridge (br-int) on the proxy node of BMS_1 determines the next-hop device of the first message according to the proxy flow table, removes the PVID from the second message, and forwards the first message to the next-hop device.
Based on the network system provided by the embodiment of the application, when the BMS interacts with a target device in the same network segment, the proxy node can remove the PVID of the second message to obtain the first message and then forward it according to the proxy flow table of the BMS, so that dependence on hardware devices is reduced and the stability of the BMS service can be improved.
In one embodiment, in the case that the computing node of the target device corresponding to the destination IP address of the first packet is a proxy node of the BMS 102, the proxy node 106 is further configured to send the first packet to the target device according to the destination MAC address in the first packet after obtaining the first packet.
In this embodiment, when the destination IP address of the first message and the IP address of the BMS 102 are located in the same network segment, and the computing node where the target device is located is the proxy node of the BMS 102 (for example, when the target device is a virtual machine VM, the computing node hosting that VM is taken as the computing node where the target device is located, and in this case the VM runs on the proxy node of the BMS; when the target device is another BMS, the computing node acting as the proxy node of that BMS is taken as the computing node where the target device is located), the proxy node 106 may, after obtaining the first message, directly send the first message to the target device according to the destination MAC address in the first message.
For example, refer to FIG. 3. Assume the BMS 102 is BMS_1 and BMS_1 sends a first message to virtual machine VM 1, which is in the same network segment and on the same computing node. In the first message, the source IP address is the BMS_1 IP address, the source MAC address is the BMS_1 MAC address, the destination IP address is the VM 1 IP address, and the destination MAC address is the VM 1 MAC address.
After computing node 01 obtains the first message (the first message is forwarded to computing node 01 through BMS switch 01; the specific process is described in the foregoing embodiment and is not repeated here), it directly forwards the first message to the network card device of VM 1 according to the destination MAC address, i.e., the VM 1 MAC address.
Based on the network system provided by the embodiment of the application, when the BMS interacts with the target equipment of the same network segment and the same computing node, the proxy node can directly send the first message to the target equipment according to the MAC address of the target equipment after removing the PVID of the second message, so that the dependence on hardware equipment is reduced, and the stability of BMS service can be improved.
In one embodiment, when the computing node where the target device corresponding to the destination IP address of the first message is located and the proxy node of the BMS 102 are different nodes, the proxy node 106 is further configured to determine the VNI of the BMS according to the PVID of the second message, encapsulate the first message according to the VNI and the proxy flow table of the BMS after obtaining the first message, and then send the encapsulated first message to the computing node corresponding to the target device according to the destination IP address of the encapsulated first message.
In this embodiment, when the destination IP address of the first message and the IP address of the BMS 102 are located in the same network segment, but the computing node where the target device is located and the proxy node of the BMS 102 are different nodes (for example, when the target device is a virtual machine VM, the computing node hosting that VM is not the same node as the proxy node of the BMS; when the target device is another BMS, the computing node acting as the proxy node of that BMS is taken as the computing node where the target device is located), the proxy node 106 may determine the VNI corresponding to the BMS 102 according to the PVID of the second message, perform VXLAN (Virtual Extensible Local Area Network) encapsulation on the first message according to the VNI and the proxy flow table of the BMS, and send the encapsulated first message to the computing node where the target device is located.
For example, refer to FIG. 3. Assume the BMS 102 is BMS_1 and BMS_1 sends a first message to virtual machine VM 4, which is in the same network segment but on a different computing node. In the first message, the source IP address is the BMS_1 IP address, the source MAC address is the BMS_1 MAC address, the destination IP address is the VM 4 IP address, and the destination MAC address is the VM 4 MAC address.
Because virtual machine VM 4 belongs to computing node 02, computing node 01 determines from the proxy flow table that the message is forwarded through the OVS tunnel. Computing node 01 then determines the VNI corresponding to BMS_1 from the PVID of the second message and performs VXLAN encapsulation on the first message based on that VNI. In the encapsulated first message, the outer source IP address is the IP address of the VTEP device of computing node 01, the outer source MAC address is the MAC address of the VTEP device of computing node 01, the outer destination IP address is the IP address of the VTEP device of computing node 02, and the outer destination MAC address is the MAC address of the VTEP device of computing node 02.
Computing node 01 transmits the encapsulated first message over the IP network according to its outer MAC and IP addresses until it reaches the VTEP device of computing node 02, which decapsulates it to obtain the first message and sends the first message to VM 4 according to its destination MAC address, i.e., the MAC address of VM 4. A sketch of the encapsulation step follows.
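The cross-node case can be pictured with the following sketch (the field names, the PVID-to-VNI mapping, and all addresses are illustrative assumptions, not the actual encapsulation code): the VNI is resolved from the PVID, and the outer headers point at the peer VTEP.

pvid_to_vni = {30: 5001}   # hypothetical mapping stored at registration time

def vxlan_encapsulate(inner: dict, pvid: int, local_vtep: dict, remote_vtep: dict) -> dict:
    """Wrap the first message with outer headers addressed to the remote VTEP."""
    return {
        "outer_src_mac": local_vtep["mac"],
        "outer_dst_mac": remote_vtep["mac"],
        "outer_src_ip": local_vtep["ip"],
        "outer_dst_ip": remote_vtep["ip"],
        "vni": pvid_to_vni[pvid],
        "payload": inner,                  # original first message, PVID already removed
    }

pkt = vxlan_encapsulate(
    {"src_ip": "192.168.0.1", "dst_ip": "192.168.0.6"},
    pvid=30,
    local_vtep={"ip": "10.0.0.1", "mac": "52:54:00:00:00:01"},   # VTEP of computing node 01
    remote_vtep={"ip": "10.0.0.2", "mac": "52:54:00:00:00:02"},  # VTEP of computing node 02
)
print(pkt["vni"], pkt["outer_dst_ip"])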
Based on the network system provided by the embodiment of the application, when the BMS interacts with the target equipment of the same network segment and different computing nodes, the proxy node can remove the PVID of the second message, and then the VXLAN encapsulation and forwarding are performed according to the proxy flow table of the BMS, so that the dependence on hardware equipment is reduced, and the stability of BMS service can be improved.
In one embodiment, when the destination IP address of the first message and the IP address of the BMS 102 are located in different network segments, the proxy node 106 is further configured to, after removing the PVID of the second message to obtain the first message, perform three-layer gateway conversion on the first message according to the proxy flow table of the BMS to obtain a third message, and forward the third message according to the proxy flow table of the BMS.
In this embodiment of the present application, when the destination IP address of the first message and the IP address of the BMS 102 (i.e., the source IP address of the first message) are located in different network segments, the proxy node 106 may remove the PVID of the second message to obtain the first message, perform three-layer gateway conversion on the first message according to the proxy flow table of the BMS (the source MAC address is converted into the MAC address of the gateway of the target device, and the destination MAC address is converted into the MAC address of the target device), and then forward the resulting third message.
For example, refer to FIG. 3. Assume the BMS 102 is BMS_1 and BMS_1 sends a first message to a virtual machine in a different network segment. In the first message, the source IP address is the BMS_1 IP address, the source MAC address is the BMS_1 MAC address, the destination IP address is the IP address of the target device, and the destination MAC address is the gateway MAC address of BMS_1.
After obtaining the first message, computing node 01 performs three-layer gateway conversion on it according to the proxy flow table of BMS_1 to obtain the third message, in which the source IP address is the BMS_1 IP address, the source MAC address is the gateway MAC address of the target device, the destination IP address is the IP address of the target device, and the destination MAC address is the MAC address of the target device. Computing node 01 can then forward the third message according to the proxy flow table of BMS_1. The address rewrite is sketched below.
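The address rewrite performed by the three-layer gateway conversion can be sketched as follows (an illustrative helper with hypothetical addresses, not the patent's implementation): the IP addresses are unchanged, only the MAC addresses are rewritten by the distributed gateway.

def l3_gateway_convert(first_msg: dict, target_gw_mac: str, target_mac: str) -> dict:
    """Produce the third message: same IPs, MACs rewritten by the distributed gateway."""
    third_msg = dict(first_msg)
    third_msg["src_mac"] = target_gw_mac   # source MAC becomes the gateway MAC of the target device
    third_msg["dst_mac"] = target_mac      # destination MAC becomes the target device MAC
    return third_msg

first = {"src_ip": "192.168.0.1", "dst_ip": "172.16.0.8",
         "src_mac": "fa:16:3e:00:00:01", "dst_mac": "aa00-0000-1000"}  # dst MAC = gateway of BMS_1
print(l3_gateway_convert(first, target_gw_mac="aa00-0000-2000", target_mac="fa:16:3e:00:00:22"))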
Based on the network system provided by the embodiment of the application, when the BMS performs message interaction with target devices of different network segments, after the proxy node removes PVID of the second message, three-layer gateway conversion is performed on the first message to obtain a third message, and the third message is forwarded according to the proxy flow table of the BMS, so that dependence on hardware devices is reduced, and stability of BMS service can be improved.
In one embodiment, in the case that the computing node where the target device corresponding to the destination IP address of the first packet is located is a proxy node of the BMS102, the proxy node 106 is further configured to send, after obtaining the third packet, the third packet to the target device according to the target MAC address in the third packet.
In this embodiment, when the destination IP address of the first message and the IP address of the BMS 102 are located in different network segments, but the computing node where the target device is located is the proxy node of the BMS 102 (determined as in the foregoing embodiments: for a virtual machine, the computing node hosting the VM; for a BMS, the computing node acting as its proxy node), the proxy node 106 may perform three-layer gateway conversion on the first message according to the proxy flow table of the BMS, converting the source MAC address into the MAC address of the gateway of the target device and the destination MAC address into the MAC address of the target device, and then send the third message to the target device according to the destination MAC address in the third message.
For example, refer to FIG. 3. Assume the BMS 102 is BMS_1 and BMS_1 sends a first message to virtual machine VM 2, which is in a different network segment but on the same computing node. In the first message, the source IP address is the BMS_1 IP address, the source MAC address is the BMS_1 MAC address, the destination IP address is the VM 2 IP address, and the destination MAC address is the gateway MAC address of BMS_1.
After computing node 01 obtains the first message (forwarded to it through BMS switch 01, as described in the foregoing embodiments), it performs three-layer gateway conversion on the first message according to the proxy flow table to obtain the third message, in which the source IP address is the BMS_1 IP address, the source MAC address is the gateway MAC address of VM 2, the destination IP address is the VM 2 IP address, and the destination MAC address is the MAC address of VM 2.
After the third message is obtained by conversion, computing node 01 sends it to the network card device of VM 2 according to the destination MAC address, i.e., the MAC address of VM 2.
Based on the network system provided by the embodiment of the application, when the BMS interacts with a target device in a different network segment but on the same computing node, the proxy node removes the PVID of the second message, performs three-layer gateway conversion on the first message, and forwards the converted third message according to the proxy flow table of the BMS, so that dependence on hardware devices is reduced and the stability of the BMS service can be improved.
In one embodiment, when the computing node where the target device corresponding to the destination IP address of the first message is located and the proxy node of the BMS 102 are different computing nodes, the proxy node 106 is further configured to determine the tenant identifier VNI of the BMS according to the PVID of the second message, encapsulate the third message according to the VNI and the proxy flow table of the BMS after the third message is obtained, and then send the encapsulated third message to the computing node of the target device according to the destination IP address of the encapsulated third message.
In this embodiment, when the destination IP address of the first message and the IP address of the BMS 102 are located in different network segments, and the computing node where the target device is located and the proxy node of the BMS 102 are different computing nodes, the proxy node 106 may determine the VNI corresponding to the BMS 102 according to the PVID of the second message, perform VXLAN encapsulation on the third message according to the VNI and the proxy flow table of the BMS after the third message is obtained, and send the encapsulated third message to the computing node where the target device is located according to the outer destination IP address of the encapsulated third message. For example, when the target device is a virtual machine, the encapsulated third message is sent to the computing node hosting that virtual machine; when the target device is a BMS, the encapsulated third message is sent to the proxy node of that BMS.
For example, refer to FIG. 3. Assume the BMS 102 is BMS_1 and BMS_1 sends a first message to virtual machine VM 5, which is in a different network segment and on a different computing node. Computing node 01 obtains the third message (the first message is forwarded to computing node 01 through BMS switch 01 and converted by the three-layer gateway into the third message, as described in the foregoing embodiments). Because VM 5 belongs to computing node 02, computing node 01 determines from the proxy flow table that the message is forwarded through the OVS tunnel, determines the VNI corresponding to BMS_1 from the PVID of the second message, and performs VXLAN encapsulation on the third message based on that VNI. In the encapsulated third message, the outer source IP address is the IP address of the VTEP device of computing node 01, the outer source MAC address is the MAC address of the VTEP device of computing node 01, the outer destination IP address is the IP address of the VTEP device of computing node 02, and the outer destination MAC address is the MAC address of the VTEP device of computing node 02.
Computing node 01 transmits the encapsulated third message over the IP network according to its outer MAC and IP addresses until it reaches the VTEP device of computing node 02, which decapsulates it to obtain the third message and sends the third message to VM 5 according to its destination MAC address, i.e., the MAC address of VM 5. The receiving side is sketched below.
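The receiving side can be sketched in the same spirit (illustrative structures only; the tap device name and MAC address are hypothetical): the VTEP of computing node 02 strips the outer headers and delivers the inner message according to its destination MAC address.

def vxlan_decapsulate(encapsulated: dict, local_ports: dict) -> tuple:
    """Strip the outer header and pick the local port from the inner destination MAC."""
    inner = encapsulated["payload"]
    port = local_ports.get(inner["dst_mac"], "unknown")
    return inner, port

local_ports = {"fa:16:3e:00:00:55": "vm5-tap"}   # hypothetical tap device of VM 5
inner, port = vxlan_decapsulate(
    {"vni": 5001, "payload": {"dst_mac": "fa:16:3e:00:00:55", "dst_ip": "172.16.0.5"}},
    local_ports,
)
print(port)  # -> vm5-tap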
Based on the network system provided by the embodiment of the application, when the BMS performs message interaction with target devices of different network segments and different computing nodes, the proxy node can remove PVID of the second message, and after three-layer gateway conversion, VXLAN encapsulation and forwarding are performed according to the proxy flow table of the BMS, so that dependence on hardware devices is reduced, and the stability of BMS service can be improved.
In this embodiment of the present application, if the message sent by the BMS 102 to another BMS is converted by the three-layer gateway, the third message obtained after conversion is tagged with the PVID of the other BMS and sent directly back to the switch. For example, if the two communicating BMSs are attached to the same proxy node, the proxy node adds, to the converted third message, the PVID corresponding to the BMS identified by the destination MAC address and sends the third message back to the switch; if the two BMSs belong to different computing nodes, the proxy node of the source BMS sends the converted third message through the tunnel to the proxy node of the destination BMS, and that proxy node tags the third message with the PVID of the destination BMS and sends it back to the switch.
In one embodiment, the proxy node 106 is further configured to, after receiving the fourth message sent to the BMS102, add the PVID corresponding to the BMS102 to the fourth message, obtain a fifth message, and send the fifth message to the switch 104;
The switch 104 is further configured to send a fourth message to the BMS according to the PVID corresponding to the BMS 102 after receiving the fifth message.
In this embodiment, after receiving the fourth message sent to the BMS 102, the proxy node 106 may determine the corresponding PVID according to the destination MAC address of the fourth message, add that PVID to the fourth message to obtain the fifth message, and send the fifth message to the switch 104. After receiving the fifth message, the switch 104 determines the interface corresponding to the PVID, removes the PVID at that interface to obtain the fourth message, and sends the fourth message to the BMS 102, completing the forwarding. This return path is sketched below.
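A sketch of the return path (the mapping tables, MAC addresses, and interface name are hypothetical assumptions): the proxy node resolves the PVID from the destination MAC, tags the fourth message to obtain the fifth message, and the switch untags it on the matching access port.

mac_to_pvid = {"fa:16:3e:00:00:01": 30}          # hypothetical registry entry for the BMS
pvid_to_access_port = {30: "10GE1/0/10"}         # hypothetical switch-side mapping

def proxy_to_switch(fourth_msg: dict) -> dict:
    """Proxy node: add the PVID of the destination BMS to obtain the fifth message."""
    fifth = dict(fourth_msg)
    fifth["vlan"] = mac_to_pvid[fourth_msg["dst_mac"]]
    return fifth

def switch_to_bms(fifth_msg: dict) -> tuple:
    """Switch: pick the access port whose PVID matches, strip the tag, and deliver."""
    port = pvid_to_access_port[fifth_msg["vlan"]]
    fourth = {k: v for k, v in fifth_msg.items() if k != "vlan"}
    return port, fourth

fifth = proxy_to_switch({"dst_mac": "fa:16:3e:00:00:01", "dst_ip": "192.168.0.1"})
print(switch_to_bms(fifth))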
Based on the network system provided by the embodiment of the application, the proxy node can add the PVID corresponding to the BMS in the message sent to the BMS, so that the switch can forward the message to the BMS based on the PVID, the dependence on hardware equipment is reduced, and the stability of BMS service can be improved.
In one embodiment, the network system further includes a management node that, after detecting a failure of the proxy node of the BMS 102, determines a new proxy node to assign to the BMS 102 and migrates the BMS 102 to the new proxy node.
In this embodiment, referring to FIG. 4, the network system may further include a management node 108 configured to manage the proxy nodes of the BMSs. The management node 108 monitors the proxy node of each BMS and, after detecting a proxy node failure, may invoke the migration interface of the BMS to trigger proxy node migration: a new proxy node is selected for the BMS from the computing nodes, the proxy node information of the BMS is updated (for example, the proxy node name of the BMS in the database is updated to the name of the new proxy node), the bridge device of the BMS is plugged into the OVS (Open vSwitch) of the new proxy node, the proxy flow table of the BMS is issued to the new proxy node, and the bridge device and proxy flow table of the BMS on the failed proxy node are deleted. The sequence is sketched below.
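The failover sequence described above can be sketched as follows (all function names and node names are hypothetical; a real implementation would drive the cloud platform's own database and OVS interfaces):

def migrate_bms_proxy(bms: str, old_node: str, new_node: str, db: dict) -> None:
    """Re-home the proxy function of a BMS to a healthy computing node."""
    db[bms]["proxy_node"] = new_node          # 1. update the proxy node name in the database
    plug_bridge_device(bms, new_node)         # 2. plug the BMS bridge device into the new node's OVS
    issue_proxy_flow_table(bms, new_node)     # 3. issue the BMS's proxy flow table to the new node
    remove_bridge_and_flows(bms, old_node)    # 4. clean up bridge device and flows on the failed node

# placeholder implementations so the sketch runs; real ones would call OVS/OpenFlow interfaces
def plug_bridge_device(bms, node):        print(f"plug {bms} bridge into OVS on {node}")
def issue_proxy_flow_table(bms, node):    print(f"issue proxy flow table of {bms} to {node}")
def remove_bridge_and_flows(bms, node):   print(f"remove {bms} bridge and flows from {node}")

db = {"BMS_1": {"proxy_node": "compute-01"}}
migrate_bms_proxy("BMS_1", "compute-01", "compute-02", db)
print(db["BMS_1"]["proxy_node"])  # -> compute-02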
It should be noted that, in the embodiment of the present application, the management node 108 may be deployed in any BMS to centrally manage the proxy nodes of multiple BMSs, or may be deployed in a distributed manner, for example, one management node 108 in each computing node; the embodiment of the present application does not limit this.
Based on the network system provided by the embodiment of the application, when the proxy node of the BMS fails, the proxy node can be redistributed, communication between the BMS and the cloud host is not affected, and stability of BMS service can be guaranteed.
The network system provided by the embodiment of the application enables the BMS in the cloud platform to communicate with other devices through OVS software tunnels (such as VXLAN and Geneve tunnel networks) without depending on hardware switch devices. Each BMS exclusively occupies one VLAN, and one BMS (one service network card, one MAC address) belongs to one VNI; that is, one VLAN corresponds to one VNI (MAC : VLAN : VNI = n : 1), and tunneled messages are encapsulated and decapsulated according to this correspondence. Each BMS has one proxy node, and one proxy node can process the messages of multiple BMSs. The proxy node is an ordinary computing node of the cloud platform; it forwards all messages coming from the BMS, and conversely all messages sent to the BMS are first sent to the proxy node of the BMS and forwarded by it. A distributed tunnel-network gateway is realized through the proxy nodes: north-south traffic of the BMS and east-west traffic crossing layer 3 enters and leaves directly at the proxy node rather than at a centralized gateway. The proxy node handles both ordinary cloud host traffic and BMS traffic.
The message transmission method provided by the embodiment of the application can be applied to an application environment shown in fig. 1. Wherein the BMS 102 communicates with a cloud host (or may also be referred to as a virtual machine VM) over a network. In one embodiment, as shown in fig. 5, a method for transmitting a message is provided, which is illustrated by taking the application of the method to the network system in fig. 1 as an example, and includes the following steps:
step 502, the switch receives a first message sent by the BMS, adds the PVID corresponding to the BMS to the first message to obtain a second message, and broadcasts the second message;
step 504, the proxy node receives the second message and forwards the first message according to the PVID of the second message and the proxy flow table of the BMS.
In this embodiment of the present application, the forwarding and transmitting process of the packet may refer to the related description of the foregoing embodiment, which is not repeated herein.
Based on the message transmission method provided by the embodiment of the application, message forwarding can be performed by proxy nodes deployed on the cloud platform, so that message forwarding of the BMS no longer depends on a hardware switch and the service stability of the BMS can be improved. In addition, in the embodiment of the application, a distributed gateway is realized through the proxy node of each BMS: messages do not need to pass through a single centralized network node, and the forwarding load is shared among the proxy nodes, which reduces the pressure on any single network node, optimizes the forwarding path, and improves forwarding efficiency.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited to this order and may be performed in other orders. Moreover, at least some of the steps in the flowcharts described above may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and their order is not necessarily sequential; they may be performed in turn or alternately with at least some of the other steps or stages.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 6. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing data related to the message transmission method. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a message transmission method.
It will be appreciated by those skilled in the art that the structure shown in fig. 6 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high density embedded nonvolatile Memory, resistive random access Memory (ReRAM), magnetic random access Memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric Memory (Ferroelectric Random Access Memory, FRAM), phase change Memory (Phase Change Memory, PCM), graphene Memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can be in the form of a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like. The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum computing-based data processing logic units, etc., without being limited thereto.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, the combination should be considered to fall within the scope of this specification.
The above embodiments express only a few implementations of the present application, and although they are described in considerable detail, they are not to be construed as limiting the scope of the patent application. It should be noted that those skilled in the art may make various modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (11)

1. A network system, wherein the system comprises a plurality of exchange units, each exchange unit comprising a bare metal server (BMS), a switch, and at least one computing node deployed on a cloud platform, wherein the at least one computing node comprises a proxy node of the BMS, and the proxy node comprises a proxy flow table of the BMS;
the switch is configured to, after receiving a first message sent by the BMS, add a virtual local area network identifier (PVID) corresponding to the BMS to the first message to obtain a second message, and broadcast the second message;
and the proxy node is configured to, after receiving the second message, forward the first message according to the PVID of the second message and the proxy flow table of the BMS.
2. The system according to claim 1, wherein the interfaces of the switch are in one-to-one correspondence with the BMSs, and each interface of the switch is provided with the PVID of the virtual local area network (VLAN) to which the corresponding BMS belongs.
3. The system according to claim 1 or 2, wherein, in a case in which the destination IP address of the first message and the IP address of the BMS are located in the same network segment, the proxy node is further configured to remove the PVID from the second message to obtain the first message, and forward the first message according to the proxy flow table of the BMS.
4. The system according to claim 3, wherein, in a case in which the computing node where the target device corresponding to the destination IP address of the first message is located is the proxy node of the BMS, the proxy node is further configured to, after obtaining the first message, send the first message to the target device according to the destination MAC address in the first message.
5. The system according to claim 3, wherein, in a case in which the computing node where the target device corresponding to the destination IP address of the first message is located is different from the proxy node of the BMS, the proxy node is further configured to determine a tenant identifier (VNI) of the BMS according to the PVID of the second message, and, after obtaining the first message, encapsulate the first message according to the VNI and the proxy flow table of the BMS and then send the encapsulated first message to the computing node corresponding to the target device according to the destination IP address of the encapsulated first message.
6. The system according to claim 1 or 2, wherein, in a case in which the destination IP address of the first message and the IP address of the BMS are located in different network segments, the proxy node is further configured to, after removing the PVID from the second message to obtain the first message, perform Layer-3 gateway conversion on the first message according to the proxy flow table of the BMS to obtain a third message, and forward the third message according to the proxy flow table of the BMS.
7. The system according to claim 6, wherein, in a case in which the computing node where the target device corresponding to the destination IP address of the first message is located is the proxy node of the BMS, the proxy node is further configured to, after obtaining the third message, send the third message to the target device according to the destination MAC address in the third message.
8. The system according to claim 6, wherein, in a case in which the computing node where the target device corresponding to the destination IP address of the first message is located is different from the proxy node of the BMS, the proxy node is further configured to determine the VNI of the BMS according to the PVID of the second message, and, after obtaining the third message, encapsulate the third message according to the VNI and the proxy flow table of the BMS and then send the encapsulated third message to the computing node where the target device is located according to the destination IP address of the encapsulated third message.
9. The system according to claim 1 or 2, wherein the proxy node is further configured to, after receiving a fourth message sent to the BMS, add the PVID corresponding to the BMS to the fourth message to obtain a fifth message, and send the fifth message to the switch;
and the switch is further configured to, after receiving the fifth message, send the fourth message to the BMS according to the PVID corresponding to the BMS.
10. The system according to claim 1 or 2, further comprising a management node configured to, upon detecting a failure of the proxy node of the BMS, determine a new proxy node to be assigned to the BMS and migrate the BMS to the new proxy node.
11. A message transmission method, characterized in that the method is applied to a proxy node in a network system, wherein the system comprises a plurality of exchange units, each exchange unit comprising a bare metal server (BMS), a switch, and at least one computing node deployed on a cloud platform, wherein the at least one computing node comprises the proxy node of the BMS, and the proxy node comprises a proxy flow table of the BMS;
the method comprises the following steps:
the switch receives a first message sent by the BMS, adds the PVID corresponding to the BMS to the first message to obtain a second message, and broadcasts the second message;
and the proxy node receives the second message and forwards the first message according to the PVID of the second message and the proxy flow table of the BMS.
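
The forwarding behaviour recited in claims 1-8 can be read as a small decision procedure on the proxy node. The following is a minimal, runnable Python sketch of that procedure under stated assumptions: the Message and ProxyFlowTable types, all field and function names, and the MAC/IP values in the demo are hypothetical illustrations chosen for readability; they are not identifiers from the patent, from Open vSwitch, or from any cloud platform, and a real proxy flow table would be expressed as vSwitch flow entries rather than Python objects.

"""Toy model of the BMS-to-overlay data path described in claims 1-8 (illustrative only)."""
from dataclasses import dataclass, replace
from ipaddress import ip_address, ip_interface


@dataclass(frozen=True)
class Message:
    """One message at the points where it appears in the claims."""
    src_mac: str
    dst_mac: str
    src_ip: str
    dst_ip: str
    pvid: int | None = None          # VLAN tag the access switch adds (claim 1)
    vni: int | None = None           # tenant identifier added on encapsulation
    outer_dst_ip: str | None = None  # underlay destination after encapsulation


@dataclass
class ProxyFlowTable:
    """Hypothetical per-BMS state held on the proxy node."""
    bms_pvid: int                   # PVID of the VLAN the BMS belongs to
    bms_vni: int                    # tenant VNI mapped from that PVID
    bms_cidr: str                   # the BMS interface, e.g. "10.0.1.5/24"
    gateway_mac: str                # MAC used when acting as the Layer-3 gateway
    local_macs: set[str]            # devices hosted on this proxy node
    remote_node_ip: dict[str, str]  # target IP -> underlay IP of its compute node
    next_hop_mac: dict[str, str]    # target IP -> MAC after gateway lookup


def switch_ingress(first: Message, port_pvid: int) -> Message:
    """Claim 1: the switch tags the first message with the PVID configured on
    the BMS's access port, producing the second message that it broadcasts."""
    return replace(first, pvid=port_pvid)


def proxy_forward(second: Message, table: ProxyFlowTable) -> Message:
    """Claims 3-8: the proxy node strips the PVID and forwards the recovered
    first message according to the proxy flow table of the BMS."""
    if second.pvid != table.bms_pvid:
        raise ValueError("message does not belong to this BMS's VLAN")

    first = replace(second, pvid=None)  # claims 3/6: remove the PVID
    same_segment = ip_address(first.dst_ip) in ip_interface(table.bms_cidr).network

    if not same_segment:
        # Claim 6: Layer-3 gateway conversion yields the third message
        # (MAC rewrite toward the next hop; the IP payload is unchanged here).
        first = replace(first, src_mac=table.gateway_mac,
                        dst_mac=table.next_hop_mac[first.dst_ip])

    if first.dst_mac in table.local_macs:
        # Claims 4/7: the target sits on this proxy node; deliver by destination MAC.
        return first

    # Claims 5/8: the target is on another compute node; encapsulate with the
    # VNI mapped from the PVID and send toward that node's underlay address.
    return replace(first, vni=table.bms_vni,
                   outer_dst_ip=table.remote_node_ip[first.dst_ip])


if __name__ == "__main__":
    table = ProxyFlowTable(
        bms_pvid=101, bms_vni=5001, bms_cidr="10.0.1.5/24",
        gateway_mac="fa:16:3e:00:00:01",
        local_macs={"fa:16:3e:aa:aa:aa"},
        remote_node_ip={"10.0.1.20": "192.168.50.12"},
        next_hop_mac={},
    )
    first = Message(src_mac="fa:16:3e:bb:bb:bb", dst_mac="fa:16:3e:cc:cc:cc",
                    src_ip="10.0.1.5", dst_ip="10.0.1.20")
    second = switch_ingress(first, port_pvid=101)
    print(proxy_forward(second, table))

Running the demo exercises the same-segment, remote-node branch of claim 5: the tag added by the switch is removed, the message keeps its original MAC and IP headers, and it leaves the proxy node carrying the tenant VNI and the underlay address of the compute node that hosts the target device.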
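
The reverse direction of claim 9 and the failover behaviour of claim 10 can be sketched in the same spirit. Again this is only an illustration under assumptions: the DownlinkMessage type, the tag removal at the access port, and the "first healthy node" selection policy are not details given in the claims.

"""Illustrative sketch of the downlink path (claim 9) and proxy failover (claim 10)."""
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class DownlinkMessage:
    """A message travelling toward the BMS (the 'fourth message')."""
    dst_mac: str
    dst_ip: str
    pvid: int | None = None


def proxy_to_switch(fourth: DownlinkMessage, bms_pvid: int) -> DownlinkMessage:
    """Claim 9 (proxy side): tag the fourth message with the PVID of the BMS
    to obtain the fifth message, then hand it to the switch."""
    return replace(fourth, pvid=bms_pvid)


def switch_to_bms(fifth: DownlinkMessage,
                  pvid_to_port: dict[int, str]) -> tuple[str, DownlinkMessage]:
    """Claim 9 (switch side): use the PVID to select the access port of the BMS
    and deliver the original fourth message on that port (tag removal on the
    untagged access port is an assumption consistent with claim 2)."""
    port = pvid_to_port[fifth.pvid]
    return port, replace(fifth, pvid=None)


def reassign_proxy(bms: str, failed_node: str, healthy_nodes: list[str]) -> str:
    """Claim 10: the management node detects that the proxy node of the BMS has
    failed, chooses a new proxy node and migrates the BMS to it; the selection
    policy here is a placeholder."""
    candidates = [node for node in healthy_nodes if node != failed_node]
    if not candidates:
        raise RuntimeError(f"no healthy compute node left to host {bms}")
    return candidates[0]  # a real controller might pick the least-loaded node

In practice the management node would also re-point the per-BMS proxy flow table and PVID/VNI mapping at the newly chosen compute node, which claim 10 summarises as migrating the BMS to the new proxy node.
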
CN202111645930.9A 2021-12-29 2021-12-29 Network system and message transmission method Active CN114500171B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111645930.9A CN114500171B (en) 2021-12-29 2021-12-29 Network system and message transmission method


Publications (2)

Publication Number Publication Date
CN114500171A CN114500171A (en) 2022-05-13
CN114500171B (en) 2023-05-26

Family

ID=81507999

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111645930.9A Active CN114500171B (en) 2021-12-29 2021-12-29 Network system and message transmission method

Country Status (1)

Country Link
CN (1) CN114500171B (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11005682B2 (en) * 2015-10-06 2021-05-11 Cisco Technology, Inc. Policy-driven switch overlay bypass in a hybrid cloud network environment
US11150963B2 (en) * 2019-02-28 2021-10-19 Cisco Technology, Inc. Remote smart NIC-based service acceleration

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109495405A (en) * 2018-12-12 2019-03-19 平安科技(深圳)有限公司 A kind of method and interchanger of bare metal server and cloud mainframe network intercommunication
CN110213148A (en) * 2019-05-22 2019-09-06 腾讯科技(深圳)有限公司 A kind of method, system and device of data transmission
WO2021103744A1 (en) * 2019-11-25 2021-06-03 中兴通讯股份有限公司 Heterogeneous network communication method and system, and controller
CN111585917A (en) * 2020-06-10 2020-08-25 广州市品高软件股份有限公司 Bare metal server network system and implementation method thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A survey of network virtualization slicing mechanisms for SDN testbeds; Liu Jiang, Huang Tao, Zhang Chen, Zhang Ge; Journal on Communications (04); pp. 56-58, 84 *

Also Published As

Publication number Publication date
CN114500171A (en) 2022-05-13

Similar Documents

Publication Publication Date Title
CN109218178B (en) Message processing method and network equipment
US9397943B2 (en) Configuring virtual media access control addresses for virtual machines
CN103200069B (en) A kind of method and apparatus of Message processing
CN103795636B (en) Multicast processing method, device and system
CN105827495B (en) The message forwarding method and equipment of VXLAN gateway
CN104243269A (en) Processing method and device of messages in VxLAN (virtual extensible local area network)
CN109617995B (en) Management system and method for VPC (virtual private network) internal container of tenant cluster and electronic equipment
JP2019033534A (en) Data packet transfer
CN110460684B (en) Broadcast domain isolation method and device for VXLAN (virtual extensible local area network) in same network segment
CN104170331A (en) L3 gateway for VXLAN
CN110213148B (en) Data transmission method, system and device
JP2016503247A (en) Packet transfer method and apparatus, and data center network
CN107659484B (en) Method, device and system for accessing VXLAN network from VLAN network
JP2019521619A (en) Packet forwarding
WO2013029440A1 (en) Method and apparatus for implementing layer-2 interconnection of data centers
CN106209648A (en) Multicast data packet forwarding method and apparatus across virtual expansible LAN
CN110474829B (en) Method and device for transmitting message
CN107332772B (en) Forwarding table item establishing method and device
CN106209636A (en) From the multicast data packet forwarding method and apparatus of VLAN to VXLAN
CN108964940A (en) Message method and device, storage medium
CN113923092A (en) Processing method and controller for appointed forwarder and provider edge device
EP3979709A1 (en) Dynamic multi-destination traffic management in a distributed tunnel endpoint
CN104253698A (en) Message multicast processing method and message multicast processing equipment
JP7314219B2 (en) DATA TRANSMISSION METHOD, APPARATUS AND NETWORK DEVICE
CN111404797B (en) Control method, SDN controller, SDN access point, SDN gateway and CE

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 100089 5th floor, building 36, yard 8, Dongbeiwang West Road, Haidian District, Beijing

Patentee after: Shuguang Cloud Computing Group Co.,Ltd.

Country or region after: China

Address before: 100089 5th floor, building 36, yard 8, Dongbeiwang West Road, Haidian District, Beijing

Patentee before: Shuguang Cloud Computing Group Co.,Ltd.

Country or region before: China
