CN113709220A - High-availability realization method and system of virtual load balancer and electronic equipment

Info

Publication number
CN113709220A
CN113709220A
Authority
CN
China
Prior art keywords
node
virtual load
load balancer
nodes
sdn controller
Prior art date
Legal status
Granted
Application number
CN202110935111.1A
Other languages
Chinese (zh)
Other versions
CN113709220B (en)
Inventor
廖桥生
Current Assignee
Huayun Data Holding Group Co Ltd
Original Assignee
Huayun Data Holding Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Huayun Data Holding Group Co Ltd filed Critical Huayun Data Holding Group Co Ltd
Priority to CN202110935111.1A priority Critical patent/CN113709220B/en
Publication of CN113709220A publication Critical patent/CN113709220A/en
Application granted granted Critical
Publication of CN113709220B publication Critical patent/CN113709220B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/1001: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1029: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers using data related to the state of servers by a load balancer
    • H04L67/1004: Server selection for load balancing
    • H04L67/1008: Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H04L67/101: Server selection for load balancing based on network conditions

Abstract

The invention belongs to the field of computer technology and provides a high-availability implementation method and system for a virtual load balancer, together with an electronic device. In the method, a state monitoring module deployed independently in each LB node monitors the state information of all virtual load balancers deployed in that node and reports it to an SDN controller. The SDN controller generates a switching policy from the state information of the virtual load balancers in each LB node, issues the policy to a designated LB node, and then reselects and/or creates a new master virtual load balancer in that node. In this application, access requests initiated by users toward virtual machines deployed on computing nodes in a computer system such as an SDN-based cloud platform achieve high availability of data-packet forwarding at the forwarding layer, while the various resource requirements and the resource waste involved in monitoring the LB nodes are reduced.

Description

High-availability realization method and system of virtual load balancer and electronic equipment
Technical Field
The invention relates to the field of computer technology, and in particular to a high-availability implementation method and system for a virtual load balancer, and to an electronic device.
Background
With the development of cloud computing and computer virtualization, it has become a trend to extract the network functions of physical network devices, virtualize them, and run them on a common physical platform; this trend is known as NFV (Network Function Virtualization). NFV aims to host various network software functions on a general-purpose physical platform using virtualization technology, enabling flexible loading of software to satisfy flexible configuration in data-center and wide-area-network scenarios. The functions of network equipment thus no longer depend on dedicated physical devices, which accelerates network deployment and reduces the complexity of service deployment. A virtual load balancer is a typical NFV device.
A virtual load balancer (vLB) is a core network service that distributes incoming network traffic among multiple servers running the same application; it acts as a reverse proxy that spreads network or application traffic across those servers. A vLB is typically used to increase the access capacity (number of concurrent users) and reliability of applications, or to improve overall application performance by reducing the load on individual servers. In an SDN (Software Defined Network) environment, to satisfy high availability of the network, multiple virtual load balancer instances are created together whenever one virtual load balancer is created: one instance is assigned the role of master virtual load balancer and the others are assigned the role of standby, forming a master/standby pair. When the master interrupts service due to a failure, a daemon thread (for example, a keepalived thread) quickly switches one of the standby instances to become the new master, so that the load balancing service is not interrupted. However, if a daemon thread is configured for every master and every standby virtual load balancer, a large number of daemon threads must be created and maintained. These daemon threads monitor not only the state of each virtual load balancer but also, at regular intervals, the running state of the node hosting the balancers (the LB node), and must additionally verify network connectivity between the LB node, the computing nodes, and the control node at the upper layer. This greatly wastes the computing, storage, and network resources of the cloud platform.
Chinese patent publication No. CN111866064A discloses a load balancing method, apparatus, and system. That prior art addresses the problem that, in DR mode, the access response returned to the client by a back-end server does not need to pass through the load balancer, yet the back-end server and the load balancer cannot be deployed across network segments, which limits the networking of the load balancing system. That prior art therefore fails to achieve the essential purpose of high availability for a virtual load balancer.
In view of the above, there is a need to improve the high-availability implementation of the virtual load balancer in the prior art to solve the above problems.
Disclosure of Invention
The invention aims to disclose a high-availability implementation method for a virtual load balancer, a computer system, and an electronic device, which avoid creating and maintaining excessive daemon processes for detecting the state of virtual load balancers in computer systems such as SDN-based cloud platforms, thereby reducing the various resource requirements and the resource waste involved in monitoring LB nodes.
To achieve one of the above objects, the present invention first provides a high-availability implementation method for a virtual load balancer, comprising the following steps:
S1, independently monitoring the state information of all virtual load balancers deployed in each LB node through a state monitoring module independently deployed in that LB node, and reporting the state information to an SDN controller;
S2, generating, by the SDN controller, a switching policy according to the state information of the virtual load balancers in each LB node, and, after issuing the switching policy to a designated LB node, reselecting and/or creating a new master virtual load balancer in the designated LB node.
As a further improvement of the present invention, in step S1, the state monitoring module independently monitors the state information of all virtual load balancers deployed in the LB node, the LB node network connectivity state information, and the LB node running state information, and reports all three to the SDN controller at regular intervals through a management network card independently deployed in each LB node;
in step S2, the generating, by the SDN controller, of a switching policy according to the state information of the virtual load balancers in each LB node comprises: when any one of the state information of a virtual load balancer, the LB node network connectivity state information, or the LB node running state information is abnormal, triggering an event in which the SDN controller generates a switching policy.
As a further improvement of the present invention, after step S2 the method further includes: issuing, by the SDN controller, a flow table update notification to the LB node, forming an event of updating the flow table in the data forwarding device of the east-west data forwarding link.
As a further improvement of the present invention, the data forwarding device includes a virtual network switch or a virtual router and stores the flow table; the flow table is connected to an individual virtual load balancer through an individual Tap port, and all virtual load balancers in each LB node are managed by the load balancer monitoring module of the LB node to which they belong.
As a further improvement of the present invention, after step S2 the method further includes: issuing, by the SDN controller, a traffic forwarding policy to the ovs-agent in the LB node, the ovs-agent modifying the forwarding entries in the flow table according to the traffic forwarding policy, so as to delete the flow table matching rule of the master virtual load balancer that held that role before the switching policy was issued to the designated LB node, and to write a flow table matching rule for the new master virtual load balancer created or selected in the designated LB node, wherein a flow table matching rule consists of a Tap port name and an action.
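The rule rewrite described above (delete the old master's matching rule, write one for the new master) can be sketched as follows. This is an illustrative model only; FlowTable, switch_master, the action string, and the port names are hypothetical, not part of Open vSwitch or the patent.

```python
class FlowTable:
    """Models a flow table as a mapping of matching rules: Tap port name -> action."""
    def __init__(self):
        self.rules = {}

    def write_rule(self, tap_port, action):
        self.rules[tap_port] = action

    def delete_rule(self, tap_port):
        # Removing a rule that is absent is treated as a no-op.
        self.rules.pop(tap_port, None)


def switch_master(flow_table, old_master_tap, new_master_tap, action="FORWARD_VIP"):
    # Delete the matching rule that steered traffic to the previous master...
    flow_table.delete_rule(old_master_tap)
    # ...and write a rule steering traffic to the new master's Tap port.
    flow_table.write_rule(new_master_tap, action)


table = FlowTable()
table.write_rule("tap351", "FORWARD_VIP")   # hypothetical old master port on LB node 30
switch_master(table, "tap351", "tap451")    # hypothetical new master port on LB node 40
```

In a real deployment the ovs-agent would apply the equivalent change to the OpenFlow table of the virtual switch; the dictionary here only mirrors the delete-then-write sequence the text describes.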
As a further improvement of the present invention, before step S1 the method further includes: selecting, by the SDN controller, a normal-state virtual load balancer from the virtual load balancers deployed in one of at least two LB nodes as the master virtual load balancer in the current state, and selecting at least one normal-state virtual load balancer from the remaining LB nodes as standby virtual load balancers in the current state.
As a further improvement of the present invention, the high-availability implementation method further comprises: storing in the SDN controller the correspondence formed between the master virtual load balancer and the standby virtual load balancer in the current state, and updating the correspondence after a new master virtual load balancer is reselected and/or created in the designated LB node.
As a further improvement of the present invention, the reselecting and/or creating of a new master virtual load balancer in the designated LB node comprises:
screening out, by the SDN controller, the LB node with the fewest created virtual load balancers, and creating a new virtual load balancer in that LB node to serve as the master virtual load balancer or a standby virtual load balancer.
As a further improvement of the present invention, after reselecting and/or creating a new master virtual load balancer in the designated LB node, the method further comprises:
and (4) reselecting and/or creating a new standby virtual load balancer in all LB nodes.
As a further improvement of the present invention, said reselecting and/or creating a new standby virtual load balancer from all LB nodes comprises:
selecting one LB node which is used as a main virtual load balancer and the least standby virtual load balancer from all LB nodes as the standby virtual load balancer in the current state by the SDN controller, and using the standby virtual load balancer which is used as the standby virtual load balancer in the current state and is in a normal state as the main virtual load balancer;
selecting and/or creating a plurality of new virtual load balancers from the rest LB nodes with normal states as standby virtual load balancers, and storing the corresponding relation formed by the standby virtual load balancers and the newly selected and/or created new main virtual load balancers in the appointed LB nodes to an SDN controller.
As a further improvement of the present invention, the LB node is configured with a management network card that manages an ovs-agent and a state monitoring module; the management network card is connected to the SDN controller and receives, through the SDN controller, the preconfigured normal-state and abnormal-state information that a user configures in advance for each LB node, the preconfigured information including predefined values indicating whether each of the state information of a virtual load balancer, the LB node network connectivity state information, and the LB node running state information is normal.
Based on the same purpose, the invention also discloses a computer system, comprising:
the method comprises the following steps that a control node, a plurality of computing nodes and a plurality of LB nodes of a state monitoring module are deployed independently, and the computing nodes and the LB nodes are accessed into an intranet switch together;
the state monitoring module independently monitors state information of all deployed virtual load balancers in the LB node and reports the state information to an SDN controller;
and the SDN controller generates a switching strategy according to the state information of the virtual load balancer in each LB node, and reselects and/or creates a new main virtual load balancer in the appointed LB node after issuing the switching strategy to the appointed LB node.
As a further improvement of the present invention, the computer system comprises a number of network nodes, the network nodes deploying at least one LB node.
Finally, the invention also discloses an electronic device comprising:
processor, memory device comprising at least one memory unit, and
a communication bus establishing a communication connection between the processor and the storage device;
the processor is configured to execute one or more programs stored in the storage device to implement the steps of the high availability implementation method of the load balancer as described in any one of the above.
Compared with the prior art, the invention has the beneficial effects that:
in the application, when all deployed virtual load balancers in an LB node are independently monitored to be abnormal through a state monitoring module in the LB node, a new main virtual load balancer can be reselected and/or created through an SDN controller to construct a new main/standby virtual load balancer, so that high availability of data message forwarding on a forwarding layer is realized in a computer system such as a cloud platform based on an SDN framework by an access request initiated by a user to a virtual machine deployed by a computing node; meanwhile, since the daemon process does not need to be established and maintained for the virtual load balancer, various types of resource requirements and resource waste required by the detection of the LB node are reduced.
Drawings
FIG. 1 is an overall flow chart of a highly available implementation of the virtual load balancer of the present invention;
FIG. 2 is an overall topology diagram of a computer system running a highly available implementation of a virtual load balancer of the present invention in one embodiment, wherein the Tunnel in FIG. 2 is formed by the east-west data links of an intranet switch;
figure 3 is a detailed topology diagram of a computer system formed by two LB nodes and an SDN controller in one embodiment;
fig. 4 is a schematic diagram of the north-south data link formed by switching and forwarding a user-initiated access request to a virtual machine deployed in a computing node, via the intranet switch and an LB intranet VIP (intranet virtual IP address) with the access IP address 192.168.1.100, after the switchover between the main/standby load balancer 351 in the LB node 30 and the load balancer 461 in the LB node 40; the dotted line is the data forwarding path before the master/standby switchover, the dash-dot line is the data forwarding path after the switchover, and the VIP (virtual IP) is the IP address at which the virtual load balancer provides service to clients;
FIG. 5 is a topology diagram of an LB cluster connected to a compute node and containing three LB nodes;
FIG. 6 is an overall topology diagram of a computer system running a highly available implementation of a virtual load balancer of the present invention in another embodiment;
FIG. 7 is a topology diagram of an electronic device of the present invention.
Detailed Description
The present invention is described in detail with reference to the embodiments shown in the drawings. It should be understood that these embodiments are not intended to limit the invention; functional, methodological, or structural equivalents and substitutions made by those skilled in the art on the basis of these embodiments fall within the scope of the present invention.
Before describing in detail the various embodiments of the present application, it is necessary to set forth the meanings of the main technical terms and abbreviations involved.
The term "Igw" refers to an intranet gateway.
The term "Vgw" refers to a virtual gateway.
The term "VM" refers to a virtual machine, and in the present application, an intranet may be composed of one or more VMs in a computing node, and the intranet may be a virtual intranet or a virtual local area network.
The term "SDN" refers to Software Defined Networking, which is logically composed of a cooperative application layer, a control layer, and a forwarding layer. Software (APPs) in the cooperative application layer accesses, through the SDN controller, the forwarders in the forwarding layer that are responsible for forwarding user data; the forwarding entries required during forwarding are generated by the SDN controller in the control layer.
The term "computing node" refers to a node or server in a cloud platform that provides computing services.
The term "connection" may refer to a connection on a computer topology, an electrical connection, a unidirectional data transmission and/or a bidirectional data transmission formed based on messages or data links.
The term "LB node" refers to a node that provides virtual load balancing services in a cloud platform; it differs from a network node, which deploys a physical router in the cloud platform and accesses the external network. The LB node is thus a virtual device in the present application, a concept parallel to the computing nodes and storage nodes of the cloud platform.
The term "control node" refers to a node or server that deploys and runs SDN control services.
The term "virtual load balancer" (vLB) refers to a software application-delivery product. In the embodiments of the present application, a vLB is simply an application deployed through virtualization technology.
The following describes the detailed implementation of the invention by using a plurality of embodiments.
The first embodiment is as follows:
referring to fig. 1 to fig. 5, this embodiment discloses a specific implementation of a highly available implementation method (hereinafter referred to as "method") of a virtual load balancer according to the present invention. The highly available implementation method of the virtual load balancer includes the following steps S1 and S2.
The virtual load balancer (vLB) automatically distributes, across a plurality of cloud servers (or computing nodes), the access traffic generated by the various access requests that clients initiate toward virtual machines (VMs) deployed in back-end cloud servers (or computing nodes), thereby expanding the capacity of an application system (e.g., a cloud computing platform) to respond to clients and achieving a higher level of application fault tolerance. The vLB implements a core network service that distributes incoming network traffic among multiple cloud servers running the same application; it can act as a reverse proxy (e.g., Nginx) to spread network or application traffic across multiple cloud servers (or computing nodes), increasing the access capacity (number of concurrent users) and reliability of the application, and it can also improve the overall performance of the application system by reducing the load on individual cloud servers (or computing nodes). In the present application, service continuity is provided through a master/standby virtual load balancer: the virtual load balancer reads the information in a request sent by a client, rewrites its headers, and forwards the request to a suitable cloud server, improving the concurrency, availability, and responsiveness of access requests initiated by users at clients.
Of course, to achieve high availability it is generally necessary to deploy one master virtual load balancer and one (or more) standby virtual load balancers for forwarding the access requests that a user initiates at a client (not shown) toward a virtual machine in a back-end cloud server, thereby forming the correspondence of a master/standby virtual load balancer pair. The applicant notes that this master/standby correspondence changes as the network environment changes, as described in detail below. In this embodiment, the virtual load balancer may use HAProxy or LVS, and there is no need to deploy and maintain keepalived threads that consume computing resources, storage resources, network bandwidth, and other resources. In particular, the high-availability method disclosed in this embodiment excludes the use of the DR mode (direct routing mode) of the cloud computing network.
In step S1, the state monitoring module independently deployed in each LB node independently monitors the state information of all virtual load balancers deployed in that LB node and reports the state information to the SDN controller 11. The state monitoring module can be regarded as a monitoring process that checks network connectivity and the running states of the load balancers; it is preferably an open-source detection process that cyclically reads the state of each vLB on the current node, the state of the node itself, and so on, aggregates the readings, and reports them to the SDN controller 11. Note that the state monitoring module is different from the keepalived process of the prior art.
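The cyclic read-aggregate-report behavior attributed to the state monitoring module might be modeled as below. The function names, report fields, and node/balancer identifiers are assumptions for illustration, not the module's actual interface.

```python
def collect_node_report(node_id, balancer_states, node_ok, network_ok):
    """Summarize everything the module monitors into one report payload."""
    return {
        "node": node_id,
        "balancers": dict(balancer_states),  # per-vLB state information
        "node_running_ok": node_ok,          # e.g. CPU/memory/disk health
        "network_ok": network_ok,            # e.g. Tap port / intranet NIC / vSwitch state
    }


def report_to_controller(report, send):
    # 'send' stands in for the management-network-card channel to the SDN
    # controller; in the document this reporting happens at regular intervals.
    send(report)


received = []  # plays the role of the SDN controller's inbox
report = collect_node_report(
    "lb30",
    {"vlb361": "normal", "vlb362": "fault"},
    node_ok=True,
    network_ok=True,
)
report_to_controller(report, received.append)
```

On receiving such a report with any abnormal field, the controller would trigger the switching-policy event described in step S2.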
Referring to fig. 3, in this embodiment the LB node 30 deploys the data forwarding device 31 and accesses the Tunnel through Igw 301 and the intranet physical network card 302, while the LB node 40 deploys the data forwarding device 41 and accesses the Tunnel through Igw 401 and the intranet physical network card 402. An east-west data link is formed in the Tunnel based on the intranet switch 50. The data forwarding devices 31 and 41 may be regarded as virtual routers (vRouters) or virtual network switches (vSwitches). The LB node 30 deploys the virtual load balancers 361 to 36m, and the data forwarding device 31 configures the Tap ports 351 to 35m, where the parameter m is an integer greater than or equal to 2; the virtual load balancer 361 is connected to the Tap port 351, and the virtual load balancer 36m is connected to the Tap port 35m. Similarly, the LB node 40 deploys the virtual load balancers 461 to 46n, and the data forwarding device 41 configures the Tap ports 451 to 45n, where the parameter n is an integer greater than or equal to 2; the virtual load balancer 461 is connected to the Tap port 451, and the virtual load balancer 46n is connected to the Tap port 45n. It should be noted that the Tap ports deployed by the data forwarding device in each LB node and the virtual load balancers deployed in that node may be in a paired or unpaired relationship, and the mapping between balancers and Tap ports may be arbitrary. The numbers of virtual load balancers and Tap ports may be equal or unequal, and either may be created or deleted at will.
The Linux kernel provides a TUN/TAP virtual network device driver together with an associated character device, /dev/net/tun, which serves as the interface for exchanging data between user space and kernel space. A user-space application can interact with the driver in the Linux kernel through this device file, operating on it the same way it would operate on an ordinary file. When the kernel sends a data packet to the virtual network device, the packet is stored in a queue associated with the device; it is not copied into a user-space buffer until the user-space program reads from the descriptor of the character device, which has the same effect as delivering the packet directly to user space. The principle is similar when a packet is sent via the write system call. The TUN/TAP driver comprises a character device driver and a network card driver. The network-card part receives network packets from the TCP/IP protocol stack and sends them out, or passes received packets up to the protocol stack for processing; the character-device part transfers network packets between the Linux kernel and user space, emulating the sending and receiving of data over a physical link.
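For concreteness, the user-space side of this mechanism can be sketched in Python. The ioctl number and flag constants below are the standard Linux values; actually opening /dev/net/tun and attaching requires root on a Linux host, so only the ifreq request structure is built here and the privileged calls are left as comments.

```python
import struct

TUNSETIFF = 0x400454ca   # ioctl request for attaching to a TUN/TAP device
IFF_TAP   = 0x0002       # Ethernet-level (Tap) device, as used for vLB ports
IFF_NO_PI = 0x1000       # do not prepend the extra packet-information header


def make_tunsetiff_request(ifname):
    """Pack the ifreq structure passed to ioctl(fd, TUNSETIFF, ...).

    Layout: a 16-byte interface name field followed by a 16-bit flags field.
    """
    return struct.pack("16sH", ifname.encode(), IFF_TAP | IFF_NO_PI)


req = make_tunsetiff_request("tap351")  # "tap351" is an illustrative port name
# On a Linux host, with sufficient privileges, one would then do:
#   fd = os.open("/dev/net/tun", os.O_RDWR)
#   fcntl.ioctl(fd, TUNSETIFF, req)
# after which read(fd)/write(fd) exchange raw Ethernet frames with the kernel,
# exactly the user-space file-style interaction described above.
```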
Meanwhile, the LB node 30 deploys ovs-agent 32, the management network card 303, and the state monitoring module 34; the LB node 40 deploys ovs-agent 42, the management network card 403, and the state monitoring module 44. When a greater number of LB nodes are deployed in the computer system 200, each LB node is configured identically with reference to the above. All LB nodes access the SDN controller 11 through independently configured management network cards: the management network card 303 is connected to the state monitoring module 34 and ovs-agent 32, and the management network card 403 is connected to the state monitoring module 44 and ovs-agent 42. The ovs-agent, i.e., the Open vSwitch agent, monitors and modifies the forwarding rules in the flow table 311 (or the flow table 411) based on RPC communication, and achieves isolation between different virtual networks and forwarding control of network traffic via VLAN, GRE, and VxLAN. By managing the flow tables 311 and 411, the ovs-agent manages the virtual switch or virtual network router (both subordinate concepts of the data forwarding device) that stores them, for example by adding and deleting virtual ports, creating VLANs, and determining forwarding policies.
The data forwarding device 31 (or the data forwarding device 41) includes a virtual network switch or a virtual router; the data forwarding device stores the flow table 311 or 411, each flow table is connected to individual virtual load balancers through individual Tap ports, and all virtual load balancers in each LB node are managed by the load balancer monitoring module 34 or 44 of the LB node to which they belong. In this embodiment, the applicant takes as an example the case in which the data forwarding devices 31 and 41 are virtual switches. In fig. 4, a virtual switch flow table 311a (a subordinate concept of the flow table 311 in fig. 3) is disposed in the virtual switch 31a, and a virtual switch flow table 411a (a subordinate concept of the flow table 411 in fig. 3) is disposed in the virtual switch 41a.
As shown in fig. 4, all LB nodes access the intranet switch 50 through their configured intranet physical network cards 302 and 402 and are coupled to the computing node 60 through the intranet switch 50. The computing node 60 deploys one or more virtual machines (VMs), which form local north-south data links inside the computing node through the virtual switch 62 located between the virtual machines and the physical network card 601. Specifically, the virtual switch 62 is an OVS switch (Open vSwitch): a logical device that emulates, in software, the functions of physical switching hardware. Each virtual switch 62 stores a routing table and a forwarding table, and different VPC networks are isolated through open-source technologies such as VLAN, VXLAN, GRE, or MPLS, so that address spaces can be reused between different VPC networks while routing and forwarding in different VPC networks remain isolated.
In general, step S1 is preceded by: selecting, through the SDN controller 11, a normal-state virtual load balancer from the virtual load balancers deployed in one of at least two LB nodes as the master virtual load balancer in the current state, and selecting at least one normal-state virtual load balancer from the remaining LB nodes as standby virtual load balancers in the current state. Generally, virtual load balancers already deployed in each LB node at a given point in time are determined to be master or standby first; when a virtual load balancer is subsequently selected as master or standby according to its state information, this scheme reduces waste of already-deployed virtual load balancers and makes full, reasonable use of them.
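The initial election described above (one normal-state master on one node, standbys drawn from the remaining nodes) can be sketched as follows; all node and balancer names are hypothetical, and the choice of the first healthy balancer per node is an illustrative simplification.

```python
def elect_initial_roles(nodes):
    """nodes: dict mapping LB node name -> {balancer name: state}.

    Returns (master, standbys), where master is a (node, balancer) pair taken
    from the first node with a normal-state balancer, and each remaining node
    contributes one normal-state standby, mirroring the pre-S1 selection.
    """
    master = None
    standbys = []
    for node, balancers in nodes.items():
        healthy = [b for b, state in balancers.items() if state == "normal"]
        if not healthy:
            continue  # nodes with no normal-state balancer are skipped
        if master is None:
            master = (node, healthy[0])          # first healthy node hosts the master
        else:
            standbys.append((node, healthy[0]))  # other nodes contribute standbys
    return master, standbys


nodes = {
    "lb30": {"vlb361": "normal", "vlb362": "fault"},
    "lb40": {"vlb461": "normal"},
}
master, standbys = elect_initial_roles(nodes)
```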
The LB node 30 is provided with a management network card 303 that hosts the ovs-agent 32 and the state monitoring module 34. The management network card 303 is connected to the SDN controller 11 and receives, through the SDN controller 11, pre-configuration information of the normal state and the abnormal state configured in advance for the LB node 30 by a user. The pre-configuration information includes predefined values indicating whether any one of the state information of a virtual load balancer, the LB node network connectivity state information, or the LB node running state information is normal; the LB node 40 is configured in the same way. The predefined values determined for the LB node 30 and the LB node 40 may be the same or different, and in turn serve as the criteria for determining whether a balancer qualifies as the master virtual load balancer.
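As a hedged illustration of what such pre-configuration information might look like, the following Python sketch models predefined "normal" values for the three monitored dimensions; every field name and threshold is an assumption for illustration only, not taken from the patent.

```python
PRECONFIG = {
    "balancer_state_normal": {"alive"},       # acceptable balancer states
    "connectivity_normal": {"tap": "up", "nic": "up", "vswitch": "up"},
    "max_cpu": 0.90,                          # utilization ceilings
    "max_mem": 0.85,
    "max_disk": 0.95,
}

def node_is_normal(report, cfg=PRECONFIG):
    """Return True only if every monitored dimension is within its predefined value."""
    return (
        report["balancer_state"] in cfg["balancer_state_normal"]
        and all(report["connectivity"][k] == v
                for k, v in cfg["connectivity_normal"].items())
        and report["cpu"] <= cfg["max_cpu"]
        and report["mem"] <= cfg["max_mem"]
        and report["disk"] <= cfg["max_disk"]
    )
```

Two LB nodes could carry different `PRECONFIG` dictionaries, matching the statement that the predefined values in the LB node 30 and the LB node 40 may differ.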
In step S2, the SDN controller 11 generates a switching policy according to the state information of the virtual load balancers in each LB node, and after issuing the switching policy to the designated LB node, reselects and/or creates a new master virtual load balancer in the designated LB node. In particular, when a new master virtual load balancer is to be determined according to the switching policy, an already-deployed virtual load balancer may be selected in the designated LB node; or, when no selectable virtual load balancer exists in the designated LB node, the SDN controller 11 issues an instruction to create a Tap port to the virtual switch 31a (or the virtual switch 41a) through the management network card 303 (or 403) and the ovs-agent 32 (or 42), creates a new virtual load balancer in the designated LB node through the management network card 303 (or 403), and uses the newly created virtual load balancer as the new master virtual load balancer.
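The select-or-create decision in step S2 can be sketched as follows; this is a simplified Python illustration in which the function names and the creation hooks are hypothetical stand-ins for the controller-to-ovs-agent interaction.

```python
def pick_new_master(node, deployed, create_tap, create_balancer):
    """Prefer an already-deployed normal-state balancer on the designated
    node; otherwise ask the virtual switch for a new Tap port (via the
    ovs-agent) and create a fresh balancer behind it."""
    candidates = [b for b in deployed.get(node, []) if b["state"] == "normal"]
    if candidates:
        return candidates[0]["name"]
    tap = create_tap(node)                # controller -> ovs-agent: new Tap port
    return create_balancer(node, tap)     # new balancer becomes the master

# Stand-in creation hooks for the demo.
def fake_create_tap(node):
    return f"tap-{node}-new"

def fake_create_balancer(node, tap):
    return f"lb-{node}-{tap}"
```

Selecting an existing balancer avoids the cost of a fresh Tap port and balancer; creation is the fallback when the designated node has nothing selectable.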
The applicant notes that a designated LB node is a relative concept; a designated LB node may be any one of the LB nodes in the computer system 200. Referring to fig. 3 and fig. 4, an exemplary scenario is described in which the virtual load balancer 361, originally the master virtual load balancer in the LB node 30, is switched based on the switching policy so that the virtual load balancer 461 in the LB node 40 becomes the new master virtual load balancer. Thus, if the new master virtual load balancer re-determined based on the switching policy lies in the LB node 40, the LB node 40 is the designated node; if the LB node 40 is unsuitable for determining the new master load balancer by selection or creation and the new master is instead selected or created in, for example, the LB node 80 in fig. 5, then the LB node 80 is deemed the designated LB node. Whichever node of the LB cluster 300 is designated corresponds to the one LB node in which the master virtual load balancer is determined. The criterion for qualifying as the master virtual load balancer is preferably that each of the state information of the virtual load balancer, the LB node network connectivity state information and the LB node running state information is normal. Specifically, the running state information of an LB node may be determined individually or jointly from one or more of its CPU utilization, memory utilization and disk utilization; when determined jointly, different weighting coefficients can be set for the CPU, the memory and the disk respectively. The LB node network connectivity state information comprises one or more of the Tap port state, the intranet physical network card state and the virtual switch state, determined individually or jointly.
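The jointly weighted running-state determination can be illustrated with a short Python sketch; the weighting coefficients and the threshold are arbitrary example values, not taken from the patent.

```python
def node_load_score(cpu, mem, disk, weights=(0.5, 0.3, 0.2)):
    """Jointly score an LB node's running state from CPU, memory and disk
    utilization (each in [0, 1]) with per-resource weighting coefficients;
    the weights sum to 1."""
    w_cpu, w_mem, w_disk = weights
    return w_cpu * cpu + w_mem * mem + w_disk * disk

def running_state_normal(cpu, mem, disk, threshold=0.8):
    """The node's running state counts as normal while the weighted score
    stays below the (assumed) threshold."""
    return node_load_score(cpu, mem, disk) < threshold
```

Giving the CPU the largest weight is one plausible choice; the patent only requires that the three resources may carry different coefficients.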
The state monitoring module 34 independently monitors the state information of all virtual load balancers deployed in the LB node 30, the LB node network connectivity state information, and the LB node running state information, and periodically reports them to the SDN controller 11 through the management network card 303 independently deployed in the LB node; the state monitoring module 44 does the same for the LB node 40 through the management network card 403. The SDN controller generating the switching policy according to the state information of the virtual load balancers in each LB node comprises: when any one of the state information of a virtual load balancer, the LB node network connectivity state information, or the LB node running state information is abnormal, triggering the SDN controller 11 to generate a switching-policy event.
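The trigger condition ("any one dimension abnormal") reduces to a simple predicate; the report field names below are illustrative assumptions.

```python
def should_trigger_switch(report):
    """The controller generates a switching-policy event as soon as ANY one
    of the three monitored dimensions in a node's periodic report is
    abnormal; all three must be normal for no event to fire."""
    return not (
        report["balancer_ok"]
        and report["connectivity_ok"]
        and report["running_ok"]
    )
```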
The high-availability implementation method further comprises: an event in which the SDN controller 11 issues a flow table update notification to the LB node for the flow table in the data forwarding device of the LB node that forms the east-west data forwarding link. Through this technical scheme, after the correspondence between the master and standby virtual load balancers changes, the forwarding paths of the data streams before and after the switch are determined, as specifically shown in fig. 4.
In this embodiment, step S2 is followed by: after the states of the virtual load balancers in the LB nodes are switched, the SDN controller 11 issues a flow forwarding policy to the ovs-agent 32 (42) in the LB node, and the ovs-agent 32 (42) modifies the forwarding entries in the flow table 311 (411) according to the flow forwarding policy, so as to delete the flow table matching rule of the master virtual load balancer in the state before the switching policy was issued to the designated LB node, and to write a flow table matching rule for the new master virtual load balancer created or selected in the designated LB node. A flow table matching rule consists of a Tap port name and an action.
As shown in fig. 4, before the switch the virtual load balancer 361 in the LB node 30 acts as the master virtual load balancer and the virtual load balancer 461 in the LB node 40 acts as the standby virtual load balancer, forming a master/standby correspondence. Before the switch, the forwarding entries in the virtual switch flow table 311a forward the user's access requests and data flows destined for the virtual machines VM to the back-end virtual machines VM of the compute node 60 along the path corresponding to the dotted line in fig. 4 (i.e., the arrowed dotted line from VIP 192.168.1.100 through the LB node 30 to the virtual machines VM in the compute node 60). After the master/standby switch, the virtual load balancer 461 serves as the new master virtual load balancer and the virtual load balancer 361 serves as a standby virtual load balancer; alternatively, when the network connectivity state information and the running state information of the LB node 30 are abnormal, one or more LB nodes with normal network connectivity and running state information are reselected from the LB cluster 300, in which one or more virtual load balancers are selected or newly created to serve as standby virtual load balancers. At this time, the forwarding entries in the virtual switch flow table 411a forward the user's access requests and data flows destined for the virtual machines VM to the back-end virtual machines VM of the compute node 60 along the path corresponding to the dash-dotted line in fig. 4 (i.e., the arrowed dash-dotted line from VIP 192.168.1.100 through the LB node 40 to the virtual machines VM in the compute node 60).
In the above process, the action in the flow table matching rule in the virtual switch flow table 311a is modified to deny (before the switch it was accept), and the action in the flow table matching rule in the virtual switch flow table 411a is modified to accept (before the switch it was deny).
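The action flip described above can be sketched as a tiny Python illustration of a flow table keyed by Tap port name; the Tap port names are hypothetical.

```python
def apply_switchover(flow_table, old_tap, new_tap):
    """Flip the actions after a master/standby switch: the rule for the
    pre-switch master's Tap port becomes deny, and the rule for the new
    master's Tap port becomes accept."""
    flow_table[old_tap] = "deny"      # stop steering traffic to the old master
    flow_table[new_tap] = "accept"    # steer traffic to the new master
    return flow_table

# Before the switch: LB node 30's balancer accepts, LB node 40's denies.
table = {"tap-lb361": "accept", "tap-lb461": "deny"}
apply_switchover(table, old_tap="tap-lb361", new_tap="tap-lb461")
```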
Preferably, in this embodiment, the method further comprises storing the correspondence formed between the master virtual load balancer and the standby virtual load balancer in the current state in the SDN controller 11, and updating the correspondence after a new master virtual load balancer is reselected and/or created in the designated LB node. Whether to reselect an existing virtual load balancer in the designated LB node or to create a new one and determine it as the new master virtual load balancer is decided according to the determination strategy described below.
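Maintaining and updating the master/standby correspondence in the controller might look like the following sketch; the balancer names are illustrative.

```python
def update_pairing(pairings, failed_master, new_master):
    """Re-point the stored master -> standbys correspondence after a
    switchover: the promoted standby becomes the new master and inherits
    the remaining standbys."""
    standbys = pairings.pop(failed_master)  # drop the old master's entry
    standbys.discard(new_master)            # the promoted one is no longer a standby
    pairings[new_master] = standbys
    return pairings

# lb361 was master with standbys lb461 and lb561; lb461 is promoted.
pairs = {"lb361": {"lb461", "lb561"}}
update_pairing(pairs, failed_master="lb361", new_master="lb461")
```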
In an embodiment, the foregoing determination strategy is specifically described as follows.
The SDN controller 11 screens out the LB node with the fewest created virtual load balancers, and creates a new virtual load balancer in that node to serve as a master or standby virtual load balancer.
Referring to fig. 3, if 3 virtual load balancers have been deployed in the LB node 30 and 4 in the LB node 40, and the LB node network connectivity state information and running state information of the LB node 30 are both normal, it is preferable to create the new virtual load balancer in the LB node 30. This fully utilizes the resources of each LB node and keeps the number of virtual load balancers in each LB node roughly equal, which is more favorable for forming paired master/standby correspondences.
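This "fewest deployed balancers" selection is essentially a minimum taken over the healthy nodes; a minimal sketch, with assumed node names, follows.

```python
def pick_node_for_new_balancer(deploy_counts, healthy_nodes):
    """Among healthy LB nodes, pick the one with the fewest created
    balancers so that counts across nodes stay roughly equal."""
    return min(healthy_nodes, key=lambda n: deploy_counts[n])

# The fig. 3 example: 3 balancers on lb30, 4 on lb40, both healthy.
counts = {"lb30": 3, "lb40": 4}
```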
After the SDN controller 11 generates a switching-policy event, the SDN controller 11 selects, among all LB nodes, the LB node carrying the fewest balancers serving as master and standby virtual load balancers, and selects or creates the standby virtual load balancer in the current state there; it then selects and/or creates several new virtual load balancers as standby virtual load balancers from the remaining LB nodes in the normal state, and stores in the SDN controller 11 the correspondence formed between these standby virtual load balancers and the new master virtual load balancer reselected and/or created in the designated LB node. Further, after the SDN controller 11 generates the switching-policy event, the SDN controller 11 creates a new virtual load balancer, to serve as a standby virtual load balancer, in the LB node with the fewest balancers already serving as master and standby virtual load balancers among all LB nodes. Whether to select an already-deployed virtual load balancer from the remaining normal-state LB nodes as a standby virtual load balancer, or to create one or more new virtual load balancers as standby virtual load balancers, may be determined by the SDN controller 11 according to whether one or more items of state information of each LB node in the LB cluster 300 are normal.
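Under the assumption that "fewest master and standby balancers" is the intended criterion, the standby placement choice can be sketched as follows; the role-count fields are illustrative.

```python
def pick_standby_node(roles, healthy_nodes):
    """Pick, among healthy nodes, the one carrying the fewest balancers
    already serving as master or standby, to host the next standby."""
    return min(healthy_nodes, key=lambda n: roles[n]["masters"] + roles[n]["standbys"])

roles = {
    "lb30": {"masters": 1, "standbys": 2},
    "lb80": {"masters": 0, "standbys": 1},
}
```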
When the LB cluster 300 includes a large number of LB nodes (see LB nodes 30, 80 and 40 in fig. 5), the number of master and standby virtual load balancers to be re-determined in the current state must be considered, since excessive virtual load balancers increase the consumption of CPU, memory, disk and other resources in the designated LB nodes. The LB node 80 in fig. 5 still adopts a topology similar to that of the LB node 30 or 40, and accesses the tunnel formed by the intranet switch 50 through the intranet physical network card 801. Therefore, in this embodiment, the above determination strategy may need to be applied in sequence when deciding whether to create a new virtual load balancer as the master or to select an already-deployed one, so as to avoid wasting already-created virtual load balancers and to balance the resource consumption of each LB node. This makes it easier for the SDN controller 11 to determine, according to the state information of the master/standby virtual load balancers, in which designated LB node and in which manner the new master virtual load balancer is determined, and to let the data packets and data streams of user-initiated access requests traverse the forwarding path determined by the newly determined master virtual load balancer to the back-end virtual machines VM in the compute node 60.
In summary, in this embodiment, when the state monitoring module in an LB node independently detects that all deployed virtual load balancers in that node are abnormal, a new master/standby pair can be constructed by reselecting and/or creating a new master virtual load balancer through the SDN controller, so that access requests initiated by users to the virtual machines VM deployed on compute nodes achieve highly available data-message forwarding at the forwarding layer in an SDN-based computer system such as a cloud platform. Meanwhile, since no daemon process needs to be created and maintained for the master/standby virtual load balancers (or for balancers created but not yet designated master or standby), the resource requirements and resource waste of monitoring the LB nodes are significantly reduced. Finally, through the switching strategy and the determination strategy, the master/standby virtual load balancers are selected and defined reasonably, finally achieving high availability of the virtual load balancer and markedly improving the efficiency and reliability of the cloud computing platform's response to access requests initiated by a user or administrator at a client (e.g., a computer or a GUI).
Example two:
referring to fig. 2, this embodiment discloses a computer system (hereinafter "system") based on the high-availability implementation method of a virtual load balancer disclosed in embodiment one.
In this embodiment, a computer system 200 comprises: a control node 10 deploying an SDN controller 11, several compute nodes 60, and several LB nodes each independently deploying a state monitoring module, the compute nodes and the LB nodes being jointly connected to the intranet switch 50. The state monitoring module independently monitors the state information of all virtual load balancers deployed in its LB node and reports the state information to the SDN controller 11. The SDN controller 11 generates a switching policy according to the state information of the virtual load balancers in each LB node, and after issuing the switching policy to the designated LB node, reselects and/or creates a new master virtual load balancer in the designated LB node.
As a reasonable variation of the computer system 200 disclosed in this embodiment, and referring to fig. 6, this embodiment further discloses a computer system 200A. The computer system 200A includes a number of network nodes 70, 71, or even more, and at least one LB node is deployed at a network node. Specifically, the LB node 30 is deployed in the network node 70 and the LB node 40 in the network node 71; the LB node 40 may even be stripped from the network node 71 and deployed separately, connected to the tunnel formed by the intranet switch 50. Although only one compute node 60 is shown in fig. 2 and fig. 6, those skilled in the art can reasonably foresee that the computer system 200 (or 200A) may deploy more compute nodes and one or more virtual machines or containers in each compute node 60.
The system disclosed in this embodiment supports mainstream virtualization platforms such as VMware ESXi and Linux KVM, fully exploits the advantages of virtualization, enables rapid deployment, batch deployment, image backup and rapid recovery, and can be migrated flexibly. It also supports a rich set of load balancing scheduling algorithms, and different algorithms can be adopted for specific application scenarios. The supported algorithms include: round robin, weighted round robin, least connections, weighted least connections, random, source-address HASH, destination-address HASH, source-address-port HASH, etc. These algorithms are suitable for layer-4 to layer-7 server load balancing. Furthermore, distribution based on application features, such as HTTP header fields and content, is supported for layer-7 server load balancing. In particular, the system can rapidly deploy virtual network functions (VNFs), supports a VXLAN layer-3 gateway function, realizes a service chain function under the control of the SDN controller 11, and supports multiple SDN protocols such as NETCONF (an XML-based network configuration protocol) and OpenFlow. The computer system 200 and/or the computer system 200A disclosed in this embodiment may be regarded as a cloud computing platform, a data center, or a cluster server.
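Two of the listed scheduling algorithms, weighted round robin and source-address HASH, can be sketched in a few lines of Python; this is a generic illustration, not the system's actual implementation.

```python
import hashlib
from itertools import cycle

def weighted_round_robin(backends):
    """backends: list of (server, weight) pairs. Yields servers in
    proportion to their weights, repeating forever."""
    expanded = [server for server, weight in backends for _ in range(weight)]
    return cycle(expanded)

def source_hash(backends, client_ip):
    """Source-address HASH: the same client address always lands on the
    same backend, giving session stickiness without shared state."""
    servers = [server for server, _ in backends]
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]
```

Weighted round robin spreads load by capacity, while source-address hashing trades even distribution for per-client stability; the other listed algorithms (least connections, random, etc.) follow similar one-function patterns.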
The system disclosed in this embodiment and the first embodiment have the same technical solutions, which are described in the first embodiment and will not be described herein again.
Example three:
referring to fig. 7, this embodiment discloses an electronic device 500, comprising: a processor 51, a memory 52, and a computer program stored in the memory 52 and configured to be executed by the processor 51; when executing the computer program, the processor 51 performs the steps of the high-availability implementation method of a virtual load balancer described in embodiment one.
Specifically, the memory 52 is composed of a plurality of storage units, namely storage unit 521 to storage unit 52j, where the parameter j is a positive integer greater than or equal to two. The processor 51 and the memory 52 both have access to a system bus 53. The form of the system bus 53 is not particularly limited; it may be an I2C bus, an SPI bus, an SCI bus, a PCI-e bus, an ISA bus, etc., and may be changed reasonably according to the specific type of the electronic device 500 and the requirements of the application scenario. Since the system bus 53 is not the point of the present application, it is not elaborated here. The storage units may be physical storage units, in which case the electronic device 500 is understood as a physical computer, a computer cluster or a cluster server; alternatively, a storage unit may be a virtual storage unit, for example a virtual storage space formed by an underlying virtualization technology on physical storage devices, in which case the electronic device 500 is configured as a virtual device such as a virtual server or a virtual cluster.
For the parts of the electronic device 500 shown in this embodiment that are the same as those in embodiment one and/or embodiment two, please refer to the corresponding technical solutions, which are not repeated here.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above-listed detailed description is only a specific description of a possible embodiment of the present invention, and they are not intended to limit the scope of the present invention, and equivalent embodiments or modifications made without departing from the technical spirit of the present invention should be included in the scope of the present invention.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present description refers to embodiments, not every embodiment may contain only a single embodiment, and such description is for clarity only, and those skilled in the art should integrate the description, and the embodiments may be combined as appropriate to form other embodiments understood by those skilled in the art.

Claims (14)

1. The high-availability realization method of the virtual load balancer is characterized by comprising the following steps:
s1, independently monitoring the state information of all deployed virtual load balancers in each LB node through a state monitoring module independently deployed in the LB node, and reporting the state information to an SDN controller;
and S2, the SDN controller generates a switching strategy according to the state information of the virtual load balancer in each LB node, and after the switching strategy is issued to the appointed LB node, a new main virtual load balancer is reselected and/or created in the appointed LB node.
2. The method according to claim 1, wherein in step S1, the state monitoring module independently monitors state information of all deployed virtual load balancers in the LB nodes, LB node network connectivity state information, and LB node operating state information, and reports the state information to the SDN controller at regular time through a management network card independently deployed in each LB node;
in step S2, the generating, by the SDN controller, a switching policy according to the state information of the virtual load balancer in each LB node includes: when any one of the state information of the virtual load balancer, the LB node network communication state information or the LB node running state information is abnormal, triggering the SDN controller to generate an event of a switching strategy.
3. The method according to claim 1, wherein step S2 is further followed by: an event in which the SDN controller issues a flow table update notification to the LB node for the flow table in the data forwarding device forming the east-west data forwarding link.
4. The method of claim 3, wherein the data forwarding device comprises a virtual network switch or a virtual router, the data forwarding device stores the flow table, the flow table is connected to an independent virtual load balancer through an independent Tap port, and all virtual load balancers in each LB node are hosted by the load balancer monitoring module in the corresponding LB node.
5. The method according to claim 3, wherein step S2 is further followed by: the SDN controller issuing a flow forwarding policy to the ovs-agent in the LB node, the ovs-agent modifying forwarding entries in the flow table according to the flow forwarding policy, so as to delete the flow table matching rule of the master virtual load balancer in the state before the switching policy was issued to the designated LB node, and to write a flow table matching rule for the new master virtual load balancer created or selected in the designated LB node, wherein a flow table matching rule consists of a Tap port name and an action.
6. The high availability implementation method according to any one of claims 2 to 5, wherein the step S1 is preceded by: selecting a normal-state virtual load balancer from deployed virtual load balancers of one LB node of at least two LB nodes through an SDN controller as a current-state main virtual load balancer, and selecting at least one normal-state virtual load balancer from the rest LB nodes as a current-state standby virtual load balancer.
7. The high availability implementation method of claim 6, further comprising: and storing the corresponding relation formed between the main virtual load balancer and the standby virtual load balancer in the current state to an SDN controller, and updating the corresponding relation after reselecting and/or creating a new main virtual load balancer in the appointed LB node.
8. The method of claim 6, wherein reselecting and/or creating a new master virtual load balancer in the designated LB node comprises: screening out LB nodes with the minimum number of created virtual load balancers by the SDN controller, and creating a new virtual load balancer in the LB nodes with the minimum number of created virtual load balancers to serve as a main virtual load balancer or a standby virtual load balancer.
9. The method as claimed in claim 8, further comprising, after reselecting and/or creating a new master virtual load balancer in the designated LB node: reselecting and/or creating a new standby virtual load balancer among all LB nodes.
10. The method as claimed in claim 9, wherein the reselecting and/or creating a new standby virtual load balancer among all LB nodes comprises:
selecting, by the SDN controller, from all LB nodes the LB node carrying the fewest balancers serving as master and standby virtual load balancers as the node for the standby virtual load balancer in the current state, a normal-state standby virtual load balancer in the current state being usable as the master virtual load balancer;
and selecting and/or creating several new virtual load balancers as standby virtual load balancers from the remaining normal-state LB nodes, and storing in the SDN controller the correspondence formed between the standby virtual load balancers and the new master virtual load balancer reselected and/or created in the designated LB node.
11. The high-availability implementation method according to claim 6, wherein the LB nodes are configured with management network cards hosting the ovs-agent and the state monitoring module, the management network cards are connected to the SDN controller, and the management network cards receive, through the SDN controller, pre-configuration information of the normal state and the abnormal state configured in advance by a user for each LB node, the pre-configuration information including predefined values indicating whether any one of the state information of the virtual load balancer, the LB node network connectivity state information, or the LB node running state information is normal.
12. A computer system, comprising:
a control node deploying an SDN controller, a plurality of compute nodes, and a plurality of LB nodes each independently deploying a state monitoring module, the compute nodes and the LB nodes being jointly connected to an intranet switch;
the state monitoring module independently monitors state information of all deployed virtual load balancers in the LB node and reports the state information to an SDN controller;
and the SDN controller generates a switching strategy according to the state information of the virtual load balancer in each LB node, and reselects and/or creates a new main virtual load balancer in the appointed LB node after issuing the switching strategy to the appointed LB node.
13. The computer system of claim 12, wherein the computer system comprises a number of network nodes, wherein the network nodes deploy at least one LB node.
14. An electronic device, comprising:
processor, memory device comprising at least one memory unit, and
a communication bus establishing a communication connection between the processor and the storage device;
the processor is configured to execute one or more programs stored in the storage device to implement the high-availability implementation method of the load balancer as claimed in any one of claims 1 to 11.
CN202110935111.1A 2021-08-16 2021-08-16 High-availability implementation method and system of virtual load equalizer and electronic equipment Active CN113709220B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110935111.1A CN113709220B (en) 2021-08-16 2021-08-16 High-availability implementation method and system of virtual load equalizer and electronic equipment


Publications (2)

Publication Number Publication Date
CN113709220A true CN113709220A (en) 2021-11-26
CN113709220B CN113709220B (en) 2024-03-22

Family

ID=78652751

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110935111.1A Active CN113709220B (en) 2021-08-16 2021-08-16 High-availability implementation method and system of virtual load equalizer and electronic equipment

Country Status (1)

Country Link
CN (1) CN113709220B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114785671A (en) * 2022-05-18 2022-07-22 江苏安超云软件有限公司 Method, system and electronic device for realizing high availability of virtual load balancer
CN115514767A (en) * 2022-09-27 2022-12-23 上汽通用五菱汽车股份有限公司 Data transmission switching method, terminal equipment and computer readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104780115A (en) * 2014-01-14 2015-07-15 上海盛大网络发展有限公司 Load balancing method and load balancing system in cloud computing environment
CN104935672A (en) * 2015-06-29 2015-09-23 杭州华三通信技术有限公司 High available realizing method and equipment of load balancing service
CN106921553A (en) * 2015-12-28 2017-07-04 中移(苏州)软件技术有限公司 The method and system of High Availabitity are realized in virtual network
CN108063783A (en) * 2016-11-08 2018-05-22 上海有云信息技术有限公司 The dispositions method and device of a kind of load equalizer
CN109937401A (en) * 2016-11-15 2019-06-25 微软技术许可有限责任公司 Via the real-time migration for the load balancing virtual machine that business bypass carries out



Also Published As

Publication number Publication date
CN113709220B (en) 2024-03-22

Similar Documents

Publication Publication Date Title
CN110113441B (en) Computer equipment, system and method for realizing load balance
EP2559206B1 (en) Method of identifying destination in a virtual environment
US11895016B2 (en) Methods and apparatus to configure and manage network resources for use in network-based computing
US10949233B2 (en) Optimized virtual network function service chaining with hardware acceleration
US8027354B1 (en) Network consolidation for virtualized servers
US8613085B2 (en) Method and system for traffic management via virtual machine migration
US11153194B2 (en) Control plane isolation for software defined network routing services
CN106664216B (en) VNF switching method and device
US20100214949A1 (en) Distributed data center access switch
CN113709220B (en) 2024-03-22 High-availability implementation method and system of virtual load balancer and electronic equipment
US11418582B1 (en) Priority-based transport connection control
CN110830574B (en) 2020-02-07 Method for implementing intranet load balancing based on Docker containers
US11824765B2 (en) Fast redirect of traffic when pods fail
US20160205033A1 (en) Pool element status information synchronization method, pool register, and pool element
US11409621B2 (en) High availability for a shared-memory-based firewall service virtual machine
CN111835685A (en) 2020-10-27 Method and server for monitoring the running state of an Nginx network isolation space
CN114080785A (en) Highly scalable, software defined intra-network multicasting of load statistics
Lee et al. High-performance software load balancer for cloud-native architecture
Diab et al. Orca: Server-assisted multicast for datacenter networks
US20200028731A1 (en) Method of cooperative active-standby failover between logical routers based on health of attached services
CN116095145A (en) Data control method and system of VPC cluster
US20220283866A1 (en) Job target aliasing in disaggregated computing systems
Medhi et al. Openflow-based multi-controller model for fault-tolerant and reliable control plane
US20240179085A1 (en) Methods, systems and computer readable media for emulating physical layer impairments in a cloud computing environment
CN113992683B (en) 2022-02-22 Method, system, device and medium for achieving effective isolation of dual networks within the same cluster

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant