CN116170406A - System and method for implementing virtual machine to public network communication - Google Patents

System and method for implementing virtual machine to public network communication

Info

Publication number
CN116170406A
Authority
CN
China
Prior art keywords: public network, message, virtual machine, network, address
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310120457.5A
Other languages
Chinese (zh)
Inventor
徐志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Cloud Computing Ltd
Original Assignee
Alibaba Cloud Computing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Cloud Computing Ltd filed Critical Alibaba Cloud Computing Ltd
Priority to CN202310120457.5A priority Critical patent/CN116170406A/en
Publication of CN116170406A publication Critical patent/CN116170406A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L61/00: Network arrangements, protocols or services for addressing or naming
    • H04L61/09: Mapping addresses
    • H04L61/25: Mapping addresses of the same type
    • H04L61/2503: Translation of Internet protocol [IP] addresses
    • H04L61/2592: Translation of Internet protocol [IP] addresses using tunnelling or encapsulation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00: Data switching networks
    • H04L12/28: Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/46: Interconnection of networks
    • H04L12/4641: Virtual LANs, VLANs, e.g. virtual private networks [VPN]
    • H04L12/4645: Details on frame tagging
    • H04L12/4666: Operational details on the addition or the stripping of a tag in a frame, e.g. at a provider edge node
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00: Packet switching elements
    • H04L49/30: Peripheral units, e.g. input or output ports
    • H04L49/3009: Header conversion, routing tables or routing tags
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00: Packet switching elements
    • H04L49/35: Switches specially adapted for specific applications
    • H04L49/354: Switches specially adapted for specific applications for supporting virtual local area networks [VLAN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present disclosure relates to a system and method for enabling a virtual machine to communicate with a public network. The system comprises: a public network gateway, used to interconnect the private network to which the virtual machine belongs with the public network; and a virtual switch, used to forward messages between the virtual machine and the public network gateway. The virtual switch is configured to perform network address translation processing on messages transmitted between the virtual machine and the public network gateway, the network address translation processing including translation between the private network IP address of the virtual machine and the public network IP address of the virtual machine. By performing network address translation processing at the virtual switch, the present disclosure can better support large-scale virtual machine to public network communication.

Description

System and method for implementing virtual machine to public network communication
Technical Field
The present disclosure relates to the field of virtual machine communications, and in particular to a system and method for implementing communication between a virtual machine and a public network.
Background
A cloud server is a virtual server service that a cloud provider offers to users on the cloud. A cloud server such as ECS (Elastic Compute Service) is an elastically scalable IaaS (Infrastructure as a Service) offering: users need not purchase IT hardware and can use computing resources immediately and scale them elastically.
Generally, a cloud service provider runs multiple VMs (Virtual Machines) on one physical server. Each VM has an independent IP address, but this is generally a private network IP address that cannot be used for communication with the public network (Internet). NAT (Network Address Translation) processing is therefore required on an IGW (Internet Gateway) for public network communication, that is, translation between the private network IP address of the VM and a corresponding public network IP address, so that communication between the virtual machine and the public network is achieved.
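The translation just described can be illustrated with a minimal sketch of SNAT/DNAT table lookups as a gateway might perform them; all addresses and table contents below are hypothetical examples, not values from the patent.

```python
# Minimal illustration of SNAT/DNAT table lookups. The addresses and
# table contents are hypothetical examples.

# SNAT table: private (VM) IP -> public IP, applied to outbound source addresses.
snat_table = {"192.168.0.10": "203.0.113.5"}
# DNAT table: public IP -> private (VM) IP, applied to inbound destination addresses.
dnat_table = {v: k for k, v in snat_table.items()}

def snat(packet: dict) -> dict:
    """Rewrite the source IP of an outbound packet to the VM's public IP."""
    return {**packet, "src": snat_table[packet["src"]]}

def dnat(packet: dict) -> dict:
    """Rewrite the destination IP of an inbound packet to the VM's private IP."""
    return {**packet, "dst": dnat_table[packet["dst"]]}

outbound = snat({"src": "192.168.0.10", "dst": "198.51.100.7"})
assert outbound["src"] == "203.0.113.5"

inbound = dnat({"src": "198.51.100.7", "dst": "203.0.113.5"})
assert inbound["dst"] == "192.168.0.10"
```

Each VM with a public network binding contributes one SNAT entry and one DNAT entry, which is why the entry count grows with the number of VMs served.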
However, with the development of cloud server technology, an IGW typically must support public network communication for a very large number of VMs, which means it must store a very large number of SNAT/DNAT entries for VM IPs, where SNAT (Source Network Address Translation) refers to source network address translation and DNAT (Destination Network Address Translation) refers to destination network address translation.
Such large-scale entry tables can degrade IGW performance.
In particular, with the development of cloud network technology, the demands on IGW forwarding capacity keep increasing; software processing capacity is limited by the CPU and increasingly cannot meet packet-processing requirements, especially in scenarios with massive traffic or large single-flow bandwidth. For this reason, it has been proposed to use a hardware gateway as the IGW. A hardware gateway can offload message parsing to hardware, fully exploiting hardware performance to accelerate the network and handle large-scale traffic. However, when the message parsing portion of the IGW is implemented in hardware, the number of SNAT/DNAT entries is limited by hardware resources.
Accordingly, it is desirable to improve the system by which VMs communicate with the public network so as to better support large-scale VM deployments.
Disclosure of Invention
One technical problem to be solved by the present disclosure is to provide a system for implementing Virtual Machine (VM) communication with a public network, which can better support large-scale VM communication with the public network.
According to a first aspect of the present disclosure, there is provided a system for enabling a virtual machine to communicate with a public network, comprising: the public network gateway is used for realizing interconnection between a private network and a public network to which the virtual machine belongs; the virtual switch is used for forwarding messages between the virtual machine and the public network gateway; wherein the virtual switch is configured to perform a network address translation process on messages transmitted between the virtual machine and the public network gateway, the network address translation process including a translation process between a private network IP address of the virtual machine and a public network IP address of the virtual machine.
Optionally, the network address translation process includes a source network address translation process and a destination network address translation process. The source network address translation process converts the source address information in a message from the private network IP address of the virtual machine to its public network IP address; the destination network address translation process converts the destination IP address information in a message from the public network IP address of the virtual machine to its private network IP address. The virtual switch is configured to: upon confirming that a message received from the virtual machine flows to the public network, perform the source network address translation process and then forward the message to the public network gateway; and upon confirming that a message received from the public network gateway flows from the public network to the virtual machine, perform the destination network address translation process and then forward the message to the virtual machine.
Optionally, a flag field is set in messages forwarded between the virtual switch and the public network gateway, the flag field being set to a value indicating that the virtual switch performs network address translation. Upon parsing this flag field, the public network gateway skips network address translation, while the virtual switch performs network address translation.
Optionally, the virtual switch communicates with the public network gateway through a tunneling protocol, and the flag field is set in a header added to the message according to the tunneling protocol.
Optionally, the tunneling protocol is the VXLAN (Virtual Extensible LAN) protocol, and the flag field occupies at least a portion of a reserved field in the VXLAN header.
Optionally, the virtual switch is configured to confirm a type of a public network IP address of the virtual machine before performing the network address translation process, and to perform the network address translation process only if the type of the public network IP address is a specified type.
Optionally, the specified type includes a type in which one public network IP address supports only one virtual machine.
Optionally, the public network gateway includes hardware for parsing the message; and/or one public network gateway is responsible for the communication between a plurality of virtual switches and a public network; and/or one of the virtual switches is responsible for communication between a plurality of the virtual machines and the public network gateway.
According to a second aspect of the present disclosure, there is provided a method for implementing communication between a virtual machine and a public network, including:
in the case that the virtual machine accesses the public network: the virtual switch receives a first message sent to a public network by a virtual machine, wherein source network address information contained in the first message is a private network IP address of the virtual machine; the virtual switch performs source network address conversion processing on the first message and then forwards the first message to the public network gateway, wherein the source network address conversion processing converts source network address information in the first message into a public network IP address of the virtual machine; the public network gateway communicates with a public network according to the information in the first message;
and/or in case the public network accesses the virtual machine: the public network gateway receives a second message sent to the virtual machine by the public network and forwards the second message to the virtual switch, wherein the destination IP address information in the second message is the public network IP address of the virtual machine; and the virtual switch performs destination network address conversion processing on the second message and forwards the second message to the virtual machine, wherein the destination network address conversion processing converts destination IP address information in the second message into a private network IP address of the virtual machine.
Optionally, a flag field is set in a packet forwarded between the virtual switch and the public network gateway, where the flag field is set to a value indicating that the virtual switch performs network address translation processing, where the packet includes the first packet and/or the second packet, the public network gateway skips network address translation processing when resolving the flag field, and the virtual switch performs network address translation processing when resolving the flag field, where the network address translation processing includes the source network address translation processing and the destination network address translation processing.
Optionally, the virtual switch communicates with the public network gateway through a tunneling protocol, and the flag field is set in a header added to the message according to the tunneling protocol.
Optionally, the virtual switch confirms a type of a public network IP address of the virtual machine before performing the source network address conversion process or the destination network address conversion process, and performs the source network address conversion process or the destination network address conversion process only if the type of the public network IP address is a specified type.
According to a third aspect of the present disclosure, there is provided a method for implementing communication between a virtual machine and a public network, performed at a virtual switch, the method comprising:
in the case that the virtual machine accesses the public network: receiving a first message sent to a public network by a virtual machine, wherein source network address information contained in the first message is a private network IP address of the virtual machine; the source network address conversion processing is carried out on the first message and then forwarded to a public network gateway so that the public network gateway can communicate with a public network according to the information in the first message, wherein the source network address conversion processing converts the source network address information in the first message into a public network IP address of the virtual machine;
and/or in the case that the public network accesses the virtual machine: receiving, from the public network gateway, a second message sent by the public network to the virtual machine, wherein the destination IP address information in the second message is the public network IP address of the virtual machine; and performing destination network address translation processing on the second message and then forwarding it to the virtual machine, wherein the destination network address translation processing converts the destination IP address information in the second message into the private network IP address of the virtual machine.
According to a fourth aspect of the present disclosure, there is provided a method for implementing communication between a virtual machine and a public network, performed at a public network gateway, the method comprising:
in the case that the virtual machine accesses the public network: receiving a first message from a virtual switch; in the case where it is determined that the virtual switch has performed the source network address translation process of translating source network address information in the first message into a public network IP address of the virtual machine, skipping the source network address translation process and communicating with the public network according to the public network IP address in the first message,
and/or in the case that the public network accesses the virtual machine: receiving a second message sent by the public network to the virtual machine; and, where it is determined that the destination network address translation process is to be performed by the virtual switch, skipping the destination network address translation process and forwarding the second message to the virtual switch, wherein the destination network address translation process converts the destination IP address information in the second message into the private network IP address of the virtual machine.
According to a fifth aspect of the present disclosure, there is provided a computing device comprising: a processor; and a memory having executable code stored thereon which, when executed by the processor, causes the processor to perform the method as described in the second to fourth aspects above.
According to a sixth aspect of the present disclosure, there is provided a computer program product comprising executable code which, when executed by a processor of an electronic device, causes the processor to perform the method as described in the second to fourth aspects above.
According to a seventh aspect of the present disclosure, there is provided a non-transitory machine-readable storage medium having stored thereon executable code which, when executed by a processor of an electronic device, causes the processor to perform the method as described in the second to fourth aspects above.
Thus, the present disclosure can better support large-scale VM-to-public network communication by performing NAT processing with a virtual switch.
Drawings
The foregoing and other objects, features and advantages of the disclosure will be apparent from the following more particular descriptions of exemplary embodiments of the disclosure as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout exemplary embodiments of the disclosure.
Fig. 1 shows a schematic block diagram of a system for implementing virtual machine to public network communication according to one embodiment of the present disclosure.
Fig. 2 illustrates the portion of a VXLAN header that includes the flag field, in accordance with one embodiment of the present disclosure.
Fig. 3 illustrates processing logic of the public network communication portion of a virtual switch in accordance with a preferred embodiment of the present disclosure.
Fig. 4 illustrates processing logic of the public network communication portion of a public network gateway in accordance with a preferred embodiment of the present disclosure.
Fig. 5 illustrates a structural schematic diagram of a computing device according to one embodiment of the present disclosure.
Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As mentioned above, when an IGW must support public network communication for a very large number of VMs, and especially when the message parsing portion of the IGW is offloaded to hardware, the number of SNAT/DNAT entries is limited by hardware resources. To support communication between larger-scale VM deployments and the public network with limited hardware resources, three approaches are commonly proposed: first, running multiple groups of devices in parallel; second, expanding the hardware storage resources of a single device; and third, once the hardware tables are full, falling back to software processing or installing only elephant flows in hardware. However, the first two approaches increase cost, and the second may require changing the hardware design architecture if hardware storage cannot simply be expanded. The third approach faces the problem of deciding which flow configurations to install in hardware; traffic that is not installed remains limited by software processing while hardware resources sit idle.
In view of the foregoing problems, the present invention proposes a new solution in which SNAT/DNAT processing is moved to the virtual switch, which can better support large-scale VM-to-public-network communication. In particular, when the message parsing portion of the IGW is offloaded to hardware, the SNAT/DNAT entries in the IGW can be removed, so that the IGW can support public network communication for more VMs despite limited hardware resources, reducing service cost.
The technical solution of the present invention will be described in detail below taking communication between an on-cloud VM providing a cloud server service and a public network as an example, but those skilled in the art should understand that the present invention is not limited to the on-cloud VM.
As previously described, in order for a cloud service provider to provide independent ECS cloud servers to multiple tenants, multiple VMs are typically run on one physical server. A VM is a complete computer system, with full hardware-system functionality, obtained on a physical machine through virtualization technology and running in a completely isolated environment. In some cases, multiple containers may further be configured in one VM to serve different tenants; in such cases, "VM" herein may also refer to a container running in the VM, and the VM IP is then the container IP.
Typically, a tenant chooses to deploy its cloud server instances within a VPC (Virtual Private Cloud), and the VM is therefore assigned a private network IP address. If the VM needs to access the public network, the tenant may apply to the cloud service provider to associate a public network IP address, for example an EIP (Elastic IP) address, after which the VM can communicate with the public network through NAT processing.
An elastic public network IP address, i.e., an EIP address, is a public network IP address resource that a tenant can independently purchase and hold from a cloud service provider, and that can be bound to or unbound from a specified VPC-type ECS instance according to the tenant's needs.
NAT processing converts an IP address in an IP packet header into another IP address, and includes SNAT processing, which converts the source IP address, and DNAT processing, which converts the destination IP address.
Typically, VMs may communicate within a VPC private network via a VS (virtual switch) and may be connected via the VS to an IGW for implementing an interconnection of the VPC private network to which the VM belongs with a public network.
As previously described, in some embodiments, the IGW may be a hardware gateway that offloads parsing processing of a message to hardware (e.g., programmable hardware), but the invention is not limited thereto, and may take other forms of implementation, such as an IGW that is implemented in full software, or an IGW that offloads other processing portions to hardware, or the like.
VS is typically a switch function device formed on a physical device using virtualization technology for communication between VMs and other networks (e.g., public networks), and may be formed entirely of software, of software-plus-hardware, or of hardware (e.g., programmable hardware).
A system for implementing communication of a VM with a public network according to one embodiment of the present disclosure is described in detail below in conjunction with fig. 1.
As shown in fig. 1, the system for implementing communication between the VM 110 and the public network 140 includes a VS120 and an IGW 130, where the IGW 130 is used to implement interconnection between the private network to which the VM belongs and the public network 140, and the VS120 is used to forward the message between the VM 110 and the IGW 130. It will be appreciated by those skilled in the art that for purposes of more clearly illustrating the present invention, only one VM 110 and one VS120 are shown in fig. 1, but the present invention is not limited thereto, one VS120 may be responsible for communication (e.g., intra-private network communication, inter-private network communication, and/or communication with the public network, etc.) of a plurality (e.g., tens to hundreds) of VMs 110, and one IGW 130 may be responsible for communication (e.g., communication with the public network, etc.) of a plurality (e.g., millions) of VS 120.
The left and right arrows in fig. 1 represent the path of the message flowing from the VM 110 to the public network 140 and the path of the message flowing from the public network 140 to the VM 110, respectively, wherein the text beside the path indicates the current state of the message in a concise manner.
Specifically, as indicated by the left arrow in fig. 1, a message flowing from VM 110 to public network 140 (e.g., a message sent by VM 110 in the case where VM 110 accesses public network 140) is sent to VS120 first, where the SIP (source IP) address in the initial message is the private network address VM-IP of VM 110, and then at VS120, the SNAT processing is performed on the message, which converts the SIP address from the private network address VM-IP of VM 110 to the public network address (e.g., EIP address) of VM 110. The message is then forwarded by VS120 to IGW 130, where the SIP address in the message has become the public network address EIP of VM 110. The IGW 130 then forwards the received message to the public network 140 according to the information of the received message without performing an SNAT process on the message.
Similarly, as indicated by the right arrow in fig. 1, a message (e.g., a message sent by the public network 140 in the case where the public network 140 accesses the VM 110) flowing from the public network 140 to the VM 110 is sent to the IGW 130, where the DIP (destination IP) address in the initial message is the public network address EIP of the VM 110, and then the message is forwarded to the VS120 without DNAT processing at the IGW 130, that is, the DIP in the message is still the EIP. The message is then subjected to DNAT processing at VS120, which translates the DIP address from the public network address EIP of VM 110 to the private network address VM-IP. The message is then forwarded by VS120 to VM 110, at which point the DIP address in the message has become the private network address VM-IP of VM 110.
The path shown in Fig. 1 following the left arrow from VM 110 and then the right arrow back may represent a common complete flow of VM 110 accessing public network 140, whereas the path following the right arrow from public network 140 and then the left arrow back may represent a common complete flow of a device on public network 140 accessing VM 110. The complete flow is described in detail below using the example of VM 110 accessing public network 140; those skilled in the art will appreciate that the complete flow of a device on public network 140 accessing VM 110 is similar, but in the opposite direction. In addition, it should be understood that the flow of VM 110 accessing the public network, or of the public network accessing VM 110, is not limited to this; for example, in some cases the other side may not need to return response content, and the overall flow may then differ.
The process by which VM 110 accesses public network 140 may include:
1. as shown by the left arrow in fig. 1, VM 110 sends an access message for accessing public network 140 to VS120, where the SIP address in the access message is the private network address VM-IP of VM 110; VS120 then performs SNAT processing on the access message, which converts the SIP address from VM-IP, which is the private network address of VM 110, to a public network address (e.g., EIP address) of VM 110, and forwards the access message to IGW 130; then, the IGW 130 may forward the access packet to the public network 140 according to the information in the access packet without performing the SNAT processing on the access packet;
2. then, as shown by the right arrow in fig. 1, the public network 140 sends a response message returned to the VM 110 to the IGW 130, where the DIP address in the response message is the public network address EIP of the VM 110; then, after receiving the response message, IGW 130 forwards the response message to VS120 without DNAT processing, that is, the DIP in the response message is still the EIP; VS120 then performs DNAT processing on the response message, which translates the DIP address from the public network address EIP of VM 110 to the private network address VM-IP, and forwards the response message to VM 110.
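The two-step access flow above can be walked through end to end with a small sketch in which the IGW acts as a pure forwarder (its NAT step is skipped). The addresses, field names, and function names below are illustrative assumptions, not part of the patent.

```python
# End-to-end walk-through of the access flow: VS performs SNAT/DNAT,
# IGW only forwards. All addresses are hypothetical.

VM_IP, EIP, REMOTE = "10.0.0.8", "203.0.113.9", "198.51.100.7"

def vs_egress(msg: dict) -> dict:
    """VS performs SNAT before forwarding the message to the IGW."""
    return {**msg, "sip": EIP}

def vs_ingress(msg: dict) -> dict:
    """VS performs DNAT before forwarding the message to the VM."""
    return {**msg, "dip": VM_IP}

def igw_forward(msg: dict) -> dict:
    """IGW forwards without NAT in this design."""
    return msg

# Step 1: VM -> VS (SNAT) -> IGW -> public network
request = igw_forward(vs_egress({"sip": VM_IP, "dip": REMOTE}))
assert request == {"sip": EIP, "dip": REMOTE}

# Step 2: public network -> IGW -> VS (DNAT) -> VM
response = vs_ingress(igw_forward({"sip": REMOTE, "dip": EIP}))
assert response == {"sip": REMOTE, "dip": VM_IP}
```

Note that the IGW never touches the IP addresses in either direction, which is what allows its SNAT/DNAT tables to be removed.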
From the foregoing, it can be seen that the present invention sinks the NAT processing (including SNAT and DNAT) performed during communication between VM 110 and public network 140 from IGW 130 down to VS 120. That is, the centralized task originally handled by IGW 130 (NAT processing for all relevant VMs 110) is distributed to each VS 120 (NAT processing only for the VMs 110 for which that VS 120 is responsible). This reduces the burden on IGW 130, helps improve the performance of the whole communication system, and also helps the system support communication between a larger number of VMs and the public network.
In addition, as described above, in the case where the IGW 130 is a hardware gateway that offloads the message parsing processing portion to hardware, since the IGW 130 removes the hardware resource consumption caused by the SNAT/DNAT table entry, the existing hardware architecture may not be changed to support more VM public network communication tasks under the condition of limited hardware resources, so that the service cost is reduced.
Preferably, in some embodiments, the VS120 is implemented by software or programmable hardware. Thus, by modifying the software or programmable hardware programming of VS120, it is possible to add SNAT/DNAT processing logic and corresponding SNAT/DNAT entries to VS120 to enable it to implement the NAT functions described above.
Preferably, in some embodiments, the IGW 130 includes programmable hardware for message parsing. The hardware processing logic of IGW 130 may then be modified by changing the programmable hardware's programming (e.g., its P4 program) so that it decides to skip NAT functions, and the corresponding SNAT/DNAT entries may be removed from the hardware of IGW 130.
In addition, although not shown in Fig. 1, in some embodiments a flag field may preferably be set in messages forwarded between VS 120 and IGW 130, the flag field being set to a value indicating that VS 120 performs NAT processing for VM-to-public-network communication, so that IGW 130 skips NAT processing when it parses the flag field, while VS 120 performs NAT processing when it parses the flag field. Preferably, the flag field may be a TOF (Type of Flow) field. The TOF field can also identify traffic types, and VS 120 and IGW 130 may agree on which traffic type or types are to be NAT-processed according to the present invention; that is, the TOF values identifying those traffic types correspond to the value indicating that VS 120 performs NAT processing.
In addition, although not shown in fig. 1, preferably in some embodiments the VS 120 communicates with the IGW 130 via a tunneling protocol, and the above-described flag field, such as the TOF field, may be set in the header added to the message according to the tunneling protocol. Preferably, the VXLAN protocol can be employed as the tunneling protocol, with the TOF field placed in a reserved field of the VXLAN header. For example, as shown in fig. 2, the TOF field may occupy the first 4 bits of the last reserved field (8 bits) of the VXLAN header, while the remaining 4 bits (the RES field) stay reserved. Those skilled in the art will appreciate that the present invention is not limited to the VXLAN protocol; the GRE tunneling protocol, for example, may also be used.
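As an illustrative sketch only (not the patented implementation), the layout described above — a 4-bit TOF value in the upper half of the final reserved byte of an 8-byte VXLAN header — could be packed and parsed as follows. The flag byte follows the RFC 7348 header layout; the TOF placement and the function names are assumptions taken from this embodiment:

```python
import struct

VXLAN_FLAGS = 0x08  # "I" flag set: a valid VNI is present (RFC 7348)

def build_vxlan_header(vni: int, tof: int) -> bytes:
    """Build an 8-byte VXLAN header carrying a 4-bit TOF value in the
    upper half of the final reserved byte (the placement assumed here)."""
    assert 0 <= vni < (1 << 24) and 0 <= tof < (1 << 4)
    last_byte = tof << 4  # upper 4 bits: TOF; lower 4 bits: RES, still reserved
    # flags byte, 3 reserved zero bytes, then 24-bit VNI + final byte
    return struct.pack("!B3xI", VXLAN_FLAGS, (vni << 8) | last_byte)

def parse_tof(header: bytes) -> int:
    """Recover the TOF value from the final byte of the header."""
    return header[7] >> 4
```

Placing the value in a reserved field keeps the header length unchanged, so endpoints unaware of the convention still parse the VNI normally.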
In addition, although not shown in fig. 1, preferably in some embodiments the VS 120 may perform NAT processing for only some of the VMs: for example, it may confirm the type of the VM's public network IP address before performing NAT processing, and perform NAT processing only if that type is a specified type.
For example, in some embodiments there are at least three types of EIP public network address that a VM may bind. A first EIP type allows multiple ECS instances in a VPC to access the public network through the same public network address, avoiding the security risk of exposing the ECS instances directly to the public network. A second EIP type allows ECS servers in a VPC to be reached by public network users through service ports opened via DNAT, so that multiple ECS servers can provide services through the same public network address. A third EIP type allows one public network IP address to support the public network communication of exactly one virtual machine.
That is, the first two EIP types allow one EIP public network address to correspond to multiple VMs simultaneously, which can involve complex processing when communicating with the public network and would increase the implementation complexity of the solution of the present invention. Therefore, to implement the present invention more simply and efficiently, the VS 120 may be allowed to perform NAT processing only in the case of the third EIP type (i.e., one EIP public network address corresponds to only one VM). Note that "one EIP public network address corresponds to (supports) only one VM" means that the address is bound to only one VM at any given time; it does not mean the address can only ever be bound to a single VM.
The VS 120 may identify the public network IP address type of the VM associated with a received message by parsing the message information (e.g., information about the VM and/or information about the EIP, such as an EIP type field), and perform NAT processing on the message only if that type is the specified type.
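A minimal sketch of this gating decision, with hypothetical type names standing in for the three EIP types described above (the enum labels and function name are not from the patent):

```python
from enum import Enum

class EipType(Enum):
    """Hypothetical labels for the three EIP types described above."""
    SHARED_SNAT = 1   # many instances share one address for outbound access
    SHARED_DNAT = 2   # many servers exposed through DNAT port mappings
    ONE_TO_ONE = 3    # one public address bound to exactly one VM at a time

def vs_should_nat(eip_type: EipType) -> bool:
    # The VS performs NAT only for the one-to-one binding type;
    # the shared types fall through to other processing flows.
    return eip_type is EipType.ONE_TO_ONE
```

Restricting NAT at the VS to the one-to-one type keeps the per-VS tables simple: each entry is a plain bidirectional VM-IP/EIP pair with no port multiplexing.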
The processing logic of the public network communication portion of a VS (e.g., VS120 in fig. 1) in accordance with a preferred embodiment of the present disclosure is described in detail below in conjunction with fig. 3.
As shown in fig. 3, after receiving a message, the VS confirms in step S310 the EIP type and the traffic type to which the message belongs. As previously described, in some embodiments this can be done by parsing the message information, for example by inspecting a specific field in the header (e.g., the TOF field), to confirm whether the EIP type and the traffic type are types for which the corresponding NAT processing should be performed. As described above, in some embodiments the VS performs NAT processing when the EIP type is the type in which one public network IP address supports only one virtual machine; in other cases it follows other processing flows, as shown in step S340.
When the traffic is confirmed to be of the type flowing from the VM to the public network, the VS proceeds to step S320: it performs SNAT processing to translate the source IP address from the private network address VM-IP to the corresponding EIP public network address, encapsulates the message according to the VXLAN protocol, and forwards it to the IGW. Here the TOF field may be set in the VXLAN encapsulation header as described above, telling the downstream IGW to skip SNAT processing.
When the traffic is confirmed to be of the type flowing from the public network to the VM, the VS proceeds to step S330: it performs DNAT processing to translate the destination IP address from the EIP public network address to the corresponding private network address VM-IP, strips the VXLAN encapsulation header, and forwards the message to the VM. Note that although the DNAT processing and the stripping of the VXLAN encapsulation header are described sequentially here, this does not limit the execution order of the two operations; they may be performed in parallel, partially in parallel, or sequentially, as the case may be.
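The VS-side flow of fig. 3 can be sketched as follows. This is an illustrative model, not the patented implementation: the packet is a plain dict, the table entries and the TOF value are made up, and real encapsulation/decapsulation is reduced to a boolean flag:

```python
TOF_VS_DID_SNAT = 0x1   # assumed TOF value: tells the IGW to skip SNAT

SNAT_TABLE = {"10.0.0.5": "203.0.113.7"}            # VM-IP -> EIP (example entry)
DNAT_TABLE = {v: k for k, v in SNAT_TABLE.items()}  # EIP -> VM-IP (reverse map)

def vs_handle(pkt: dict) -> dict:
    if pkt["direction"] == "vm_to_public":
        # S320: SNAT, then VXLAN-encapsulate with the TOF set, forward to IGW
        pkt["src_ip"] = SNAT_TABLE[pkt["src_ip"]]
        pkt["vxlan"] = {"encapsulated": True, "tof": TOF_VS_DID_SNAT}
        pkt["next_hop"] = "igw"
    elif pkt["direction"] == "public_to_vm":
        # S330: DNAT, strip the VXLAN header, forward to the VM
        pkt["dst_ip"] = DNAT_TABLE[pkt["dst_ip"]]
        pkt["vxlan"] = {"encapsulated": False}
        pkt["next_hop"] = "vm"
    else:
        pkt["next_hop"] = "other"   # S340: other processing flows
    return pkt
```

Because each one-to-one EIP maps to a single VM, the reverse (DNAT) table can be derived mechanically from the SNAT table, as done above.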
Note that although only the NAT processing and the encapsulation/decapsulation processing of the VS are described here, in practice the VS may perform other operations required for communication between the VM and the public network; being unrelated to the present invention, they are not mentioned here.
Processing logic of the public network communication portion of an IGW (e.g., IGW 130 of fig. 1) in accordance with a preferred embodiment of the present disclosure is described in detail below in connection with fig. 4.
As shown in fig. 4, after the IGW receives a message, it confirms in step S410 the EIP type and the traffic type to which the message belongs. As previously described, in some embodiments this can be done by parsing the message information, for example by inspecting a specific field in the header (e.g., the TOF field), to confirm whether the EIP type and the traffic type are types for which the corresponding NAT processing should be skipped. As described above, in some embodiments the IGW skips NAT processing when the EIP type is the type in which one public network IP address supports only one virtual machine; in other cases it follows other processing flows, as shown in step S440.
When the traffic is confirmed to be of the type flowing from the VM to the public network, the IGW proceeds to step S420: it skips SNAT processing, strips the VXLAN encapsulation header, and forwards the message to the public network. Note that although skipping the SNAT processing and stripping the VXLAN encapsulation header are described sequentially here, this does not limit the execution order of the two operations; they may be performed in parallel, partially in parallel, or sequentially, as the case may be.
When the traffic is confirmed to be of the type flowing from the public network to the VM, the IGW proceeds to step S430: it skips DNAT processing, encapsulates the message according to the VXLAN protocol, and forwards the encapsulated message to the VS. Here the TOF field may be set in the VXLAN encapsulation header as described above, telling the downstream VS to perform DNAT processing.
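An illustrative (non-authoritative) model of the IGW-side flow of fig. 4, using a plain dict for the packet and assumed TOF values; note that in both directions the IGW only encapsulates or decapsulates and never rewrites the addresses:

```python
TOF_VS_DID_SNAT = 0x1   # assumed: the VS already performed SNAT
TOF_VS_DO_DNAT = 0x2    # assumed: the VS must perform DNAT on the way back

def igw_handle(pkt: dict) -> dict:
    if pkt["direction"] == "vm_to_public":
        # S420: skip SNAT (the TOF field says the VS already did it),
        # strip the VXLAN header, forward to the public network
        assert pkt["vxlan"]["tof"] == TOF_VS_DID_SNAT
        pkt["vxlan"] = {"encapsulated": False}
        pkt["next_hop"] = "public"
    elif pkt["direction"] == "public_to_vm":
        # S430: skip DNAT; VXLAN-encapsulate and set the TOF field so the
        # VS performs DNAT when it receives the message
        pkt["vxlan"] = {"encapsulated": True, "tof": TOF_VS_DO_DNAT}
        pkt["next_hop"] = "vs"
    else:
        pkt["next_hop"] = "other"   # S440: other processing flows
    return pkt
```

Since no SNAT/DNAT table lookup occurs on either path, the gateway needs no per-VM NAT state for this traffic, which is the stated hardware-resource saving.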
Note that although only the NAT-skipping processing and the encapsulation/decapsulation processing of the IGW are described here, in practice the IGW will generally perform other operations required for the VM to communicate with the public network; being unrelated to the present invention, they are not mentioned here.
Fig. 5 illustrates a schematic architecture of a computing device that may be used to implement the above-described method of virtual-machine-to-public-network communication according to an embodiment of the present disclosure. As previously described, the computing device implementing the methods of the present disclosure may be either a hardware device to which the public network gateway and/or the virtual switch in the communication system belongs, or a computing device in the system independent of those hardware devices.
Referring to fig. 5, a computing device 500 includes a memory 510 and a processor 520.
Processor 520 may be a multi-core processor or may include multiple processors. In some embodiments, processor 520 may comprise a general-purpose main processor and one or more special-purpose coprocessors, such as a graphics processing unit (GPU) or a digital signal processor (DSP). In some embodiments, processor 520 may be implemented using custom circuitry, for example an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA).
Memory 510 may include various types of storage units, such as system memory, read-only memory (ROM), and persistent storage. The ROM may store static data or instructions required by processor 520 or other modules of the computer. The persistent storage may be a readable and writable storage device, that is, a non-volatile device that does not lose its stored instructions and data even after the computer is powered down. In some embodiments, a mass storage device (e.g., a magnetic or optical disk, or flash memory) serves as the persistent storage; in other embodiments, the persistent storage may be a removable storage device (e.g., a floppy disk or optical drive). The system memory may be a readable and writable volatile memory device, such as dynamic random access memory, and may store instructions and data required by some or all of the processors at runtime. Furthermore, memory 510 may include any combination of computer-readable storage media, including various types of semiconductor memory chips (DRAM, SRAM, SDRAM, flash memory, programmable read-only memory); magnetic disks and/or optical disks may also be employed. In some embodiments, memory 510 may include a readable and/or writable removable storage device, such as a compact disc (CD), a read-only digital versatile disc (e.g., DVD-ROM or dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-density optical disc, a flash memory card (e.g., SD card, mini SD card, micro-SD card), a magnetic floppy disk, and the like. The computer-readable storage media do not contain carrier waves or transient electronic signals transmitted wirelessly or over wires.
The memory 510 has executable code stored thereon that, when processed by the processor 520, causes the processor 520 to perform the virtual-machine-to-public-network communication method described above.
The system and method for implementing communication between a virtual machine and a public network according to the present invention have been described in detail above with reference to the accompanying drawings.
It should be noted that the user information (including but not limited to user equipment information and user personal information) and data (including but not limited to data for analysis, stored data, and presented data) involved in the present application are information and data authorized by the user or fully authorized by all parties. The collection, use, and processing of the related data must comply with the relevant laws, regulations, and standards of the relevant countries and regions, and corresponding operation entries are provided for users to choose to authorize or refuse.
Furthermore, the method according to the invention may also be implemented as a computer program or computer program product comprising computer program code instructions for performing the steps defined in the above-mentioned method of the invention.
Alternatively, the invention may also be embodied as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) having stored thereon executable code (or a computer program, or computer instruction code) which, when executed by a processor of an electronic device (or computing device, server, etc.), causes the processor to perform the steps of the above-described method according to the invention.
Those of skill in the art would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of embodiments of the invention has been presented for purposes of illustration and description; it is not intended to be exhaustive or to limit the invention to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or improvements over technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (14)

1. A system for enabling a virtual machine to communicate with a public network, comprising:
the public network gateway is used for realizing interconnection between a private network and a public network to which the virtual machine belongs; and
the virtual switch is used for forwarding messages between the virtual machine and the public network gateway;
wherein the virtual switch is configured to perform a network address translation process on messages transmitted between the virtual machine and the public network gateway, the network address translation process including a translation process between a private network IP address of the virtual machine and a public network IP address of the virtual machine.
2. The system of claim 1, wherein,
the network address translation process includes a source network address translation process that translates source network address information in the message from a private network IP address of the virtual machine to the public network IP address, and a destination network address translation process that translates destination IP address information in the message from the public network IP address of the virtual machine to the private network IP address,
the virtual switch is configured to:
under the condition that the message received from the virtual machine is a message flowing to a public network, the source network address conversion processing is executed and then the message is forwarded to the public network gateway;
and under the condition that the message received from the public network gateway is a message flowing from the public network to the virtual machine, executing the destination network address conversion processing and forwarding to the virtual machine.
3. The system of claim 1, wherein,
a tag field is set in a message forwarded between the virtual switch and the public network gateway, the tag field is set to a value representing that the virtual switch performs network address translation processing,
and the public network gateway skips the network address conversion processing when resolving the mark field, and the virtual switch performs the network address conversion processing when resolving the mark field.
4. The system of claim 3, wherein,
the virtual switch communicates with the public network gateway through tunneling protocol, and
and setting the mark field in a header added to the message according to the tunneling protocol.
5. The system of claim 4, wherein the tunneling protocol is VXLAN protocol and the tag field is at least a portion of a reserved field in a VXLAN header.
6. The system of claim 1, wherein,
the virtual switch is configured to confirm a type of a public network IP address of the virtual machine before performing the network address translation process, and to perform the network address translation process only if the type of the public network IP address is a specified type.
7. The system of claim 6, wherein,
the specified type includes a type in which only one virtual machine is supported by one public network IP address.
8. The system of claim 1, wherein,
the public network gateway comprises hardware for analyzing and processing the message; and/or
One public network gateway is responsible for the communication between a plurality of virtual switches and a public network; and/or
One of the virtual switches is responsible for communication of a plurality of the virtual machines with the public network gateway.
9. A method for enabling a virtual machine to communicate with a public network, comprising:
in the case that the virtual machine accesses the public network:
the virtual switch receives a first message sent to a public network by a virtual machine, wherein source network address information contained in the first message is a private network IP address of the virtual machine;
the virtual switch performs source network address conversion processing on the first message and then forwards the first message to the public network gateway, wherein the source network address conversion processing converts source network address information in the first message into a public network IP address of the virtual machine;
the public network gateway communicates with a public network according to the information in the first message,
and/or in case the public network accesses the virtual machine:
the public network gateway receives a second message sent to the virtual machine by the public network and forwards the second message to the virtual switch, wherein the destination IP address information in the second message is the public network IP address of the virtual machine;
and the virtual switch performs destination network address conversion processing on the second message and forwards the second message to the virtual machine, wherein the destination network address conversion processing converts destination IP address information in the second message into a private network IP address of the virtual machine.
10. The method of claim 9, wherein,
a mark field is arranged in a message forwarded between the virtual switch and the public network gateway, and the mark field is set to a value for representing the virtual switch to perform network address conversion processing, wherein the message comprises the first message and/or the second message;
and the public network gateway skips network address conversion processing when resolving to the mark field, and the virtual switch performs network address conversion processing when resolving to the mark field, wherein the network address conversion processing comprises the source network address conversion processing and the destination network address conversion processing.
11. The method of claim 10, wherein,
the virtual switch communicates with the public network gateway through tunneling protocol, and
and setting the mark field in a header added to the message according to the tunneling protocol.
12. The method of claim 9, wherein,
the virtual switch confirms the type of the public network IP address of the virtual machine before executing the source network address translation process or the destination network address translation process, and executes the source network address translation process or the destination network address translation process only if the type of the public network IP address is a specified type.
13. A method for enabling a virtual machine to communicate with a public network, the method performed at a virtual switch, the method comprising:
in the case that the virtual machine accesses the public network:
receiving a first message sent to a public network by a virtual machine, wherein source network address information contained in the first message is a private network IP address of the virtual machine;
the source network address conversion processing is carried out on the first message and then forwarded to a public network gateway so that the public network gateway can communicate with a public network according to the information in the first message, wherein the source network address conversion processing converts the source network address information in the first message into a public network IP address of the virtual machine,
and/or in case the public network accesses the virtual machine:
receiving a second message sent to the virtual machine by the public network from the public network gateway, wherein the destination IP address information in the second message is the public network IP address of the virtual machine;
and executing target network address conversion processing on the second message and then forwarding the second message to the virtual machine, wherein the target network address conversion processing converts target IP address information in the second message into a private network IP address of the virtual machine.
14. A method for enabling a virtual machine to communicate with a public network, the method performed at a public network gateway, the method comprising:
in the case that the virtual machine accesses the public network:
receiving a first message from a virtual switch;
in the case where it is determined that the virtual switch has performed the source network address translation process of translating source network address information in the first message into a public network IP address of the virtual machine, skipping the source network address translation process and communicating with the public network according to the public network IP address in the first message,
and/or in case the public network accesses the virtual machine:
receiving a second message sent to the virtual machine by the public network;
and under the condition that the destination network address conversion processing is executed by the virtual switch, skipping the destination network address conversion processing and forwarding a second message to the virtual switch, wherein the destination network address conversion processing converts the destination IP address information in the second message into a private network IP address of the virtual machine.
CN202310120457.5A 2023-01-18 2023-01-18 System and method for implementing virtual machine to public network communication Pending CN116170406A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310120457.5A CN116170406A (en) 2023-01-18 2023-01-18 System and method for implementing virtual machine to public network communication


Publications (1)

Publication Number Publication Date
CN116170406A true CN116170406A (en) 2023-05-26

Family

ID=86411030

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310120457.5A Pending CN116170406A (en) 2023-01-18 2023-01-18 System and method for implementing virtual machine to public network communication

Country Status (1)

Country Link
CN (1) CN116170406A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104601432A (en) * 2014-12-31 2015-05-06 杭州华三通信技术有限公司 Method and device for transmitting message
CN108566445A (en) * 2018-03-15 2018-09-21 华为技术有限公司 A kind of message transmitting method and device
US20190104050A1 (en) * 2017-10-02 2019-04-04 Nicira, Inc. Routing data message flow through multiple public clouds
CN111327720A (en) * 2020-02-21 2020-06-23 北京百度网讯科技有限公司 Network address conversion method, device, gateway equipment and storage medium
CN114640557A (en) * 2022-03-18 2022-06-17 阿里云计算有限公司 Gateway and cloud network system
CN115225634A (en) * 2022-06-17 2022-10-21 北京百度网讯科技有限公司 Data forwarding method and device under virtual network and computer program product


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116582516A (en) * 2023-07-12 2023-08-11 腾讯科技(深圳)有限公司 Data transmission method, device, system, medium and program product
CN116582516B (en) * 2023-07-12 2023-09-19 腾讯科技(深圳)有限公司 Data transmission method, device, system, medium and program product

Similar Documents

Publication Publication Date Title
CN107566441B (en) Method and system for fast routing transmission between virtual machine and cloud service computing device
CN106998286B (en) VX L AN message forwarding method and device
CN107645444B (en) System, device and method for fast routing transmission between virtual machines and cloud service computing devices
US9602307B2 (en) Tagging virtual overlay packets in a virtual networking system
US10972549B2 (en) Software-defined networking proxy gateway
US9143582B2 (en) Interoperability for distributed overlay virtual environments
US11095534B1 (en) API-based endpoint discovery of resources in cloud edge locations embedded in telecommunications networks
US20120117563A1 (en) Overload control in a cloud computing environment
US10057162B1 (en) Extending Virtual Routing and Forwarding at edge of VRF-aware network
CN109937401A (en) Via the real-time migration for the load balancing virtual machine that business bypass carries out
CN101924693A (en) Be used for method and system in migrating processes between virtual machines
US20200110626A1 (en) Rdma with virtual address space
US10999244B2 (en) Mapping a service into a virtual network using source network address translation
CN113326228A (en) Message forwarding method, device and equipment based on remote direct data storage
US10129144B1 (en) Extending virtual routing and forwarding using source identifiers
CN112333135B (en) Gateway determination method, device, server, distributor, system and storage medium
US20210281443A1 (en) Systems and methods for preserving system contextual information in an encapsulated packet
CN111694519B (en) Method, system and server for mounting cloud hard disk on bare metal server
CN116170406A (en) System and method for implementing virtual machine to public network communication
CN108810183B (en) Method and device for processing conflicting MAC addresses and machine-readable storage medium
CN113273154A (en) Method, apparatus, and computer-readable storage medium for network control
CN111262771B (en) Virtual private cloud communication system, system configuration method and controller
CN108259350B (en) Message transmission method and device and machine-readable storage medium
CN113839876A (en) Transmission path optimization method and equipment for internal network
US10855612B2 (en) Suppressing broadcasts in cloud environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination