CN113709220B - High-availability implementation method and system of virtual load equalizer and electronic equipment - Google Patents

High-availability implementation method and system of virtual load equalizer and electronic equipment Download PDF

Info

Publication number
CN113709220B
CN113709220B (granted) · CN113709220A (application publication) · Application CN202110935111.1A
Authority
CN
China
Prior art keywords
node
virtual load
load balancer
nodes
state information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110935111.1A
Other languages
Chinese (zh)
Other versions
CN113709220A (en
Inventor
廖桥生
Current Assignee
Huayun Data Holding Group Co ltd
Original Assignee
Huayun Data Holding Group Co ltd
Priority date
Filing date
Publication date
Application filed by Huayun Data Holding Group Co ltd filed Critical Huayun Data Holding Group Co ltd
Priority to CN202110935111.1A priority Critical patent/CN113709220B/en
Publication of CN113709220A publication Critical patent/CN113709220A/en
Application granted granted Critical
Publication of CN113709220B publication Critical patent/CN113709220B/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1029 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers using data related to the state of servers by a load balancer
    • H04L67/1004 Server selection for load balancing
    • H04L67/1008 Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H04L67/101 Server selection for load balancing based on network conditions

Abstract

The invention belongs to the field of computer technology and provides a high-availability implementation method and system for a virtual load balancer, as well as an electronic device. In the method, a state monitoring module independently deployed in each LB node independently monitors the state information of all virtual load balancers deployed in that LB node and reports it to an SDN controller. The SDN controller generates a switching policy according to the state information of the virtual load balancers in each LB node, issues the switching policy to a designated LB node, and then re-selects and/or creates a new primary virtual load balancer in the designated LB node. In cloud platforms and other SDN-based computer systems, access requests initiated by users to virtual machines deployed on computing nodes are thus served with high availability of data message forwarding at the forwarding layer, while the various resource requirements and resource waste involved in monitoring the LB nodes are reduced.

Description

High-availability implementation method and system of virtual load equalizer and electronic equipment
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method, a system, and an electronic device for implementing high availability of a virtual load balancer.
Background
With the development of cloud computing and computer virtualization technologies, extracting the network functions of physical network devices, virtualizing them, and running them on general-purpose physical platforms has become a trend known as NFV (Network Function Virtualization). NFV aims to carry various network software functions on a general-purpose physical platform using virtualization technology and to load software flexibly, so as to meet the flexible configuration needs of data-center and wide-area-network scenarios. Because network functions no longer depend on dedicated physical devices, network deployment is accelerated and the complexity of service deployment is reduced. The virtual load balancer is a typical NFV device.
A virtual load balancer (Virtual Load Balancing, vLB) is a core network service that distributes incoming network traffic among multiple servers running the same application, acting as a reverse proxy. vLBs are often used to increase the access capacity (number of concurrent users) and reliability of an application, or to improve overall application performance by reducing server load. In an SDN (Software Defined Networking) environment, to meet network availability requirements, multiple virtual load balancer instances must be created together when a virtual load balancer is created: one instance is designated the primary virtual load balancer and the others are designated backup virtual load balancers, forming a primary/backup group. When the primary virtual load balancer fails and interrupts service, a daemon (e.g. a Keepalived process) rapidly switches one of the backups to become the new primary, so that the load balancing service is not interrupted. However, if a daemon is configured for each primary and each backup virtual load balancer, a large number of daemons must be created and maintained. These daemons not only monitor the state of each virtual load balancer, but also periodically monitor the running state of the node hosting it (the LB node) and repeatedly check network connectivity between the LB node and the computing nodes, and between the LB node and the control node at the upper layer. This wastes a great deal of the cloud platform's computing, storage, and network resources.
Chinese patent publication CN111866064A discloses a load balancing method, device, and system. That prior art addresses the networking limitation that, although an access response returned by a back-end server to a client in DR mode need not pass through the load balancer, the back-end server and the load balancer cannot be deployed across network segments. It therefore does not achieve the essential purpose of a highly available virtual load balancer.
In view of this, there is a need for improvements in the highly available implementations of virtual load balancers in the prior art to address the above-described problems.
Disclosure of Invention
The invention aims to disclose a high-availability implementation method for a virtual load balancer, a computer system, and an electronic device, which avoid creating and maintaining excessive daemons for detecting virtual load balancer state in cloud platforms and other SDN-based computer systems, thereby reducing the various resource requirements and resource waste involved in monitoring LB nodes.
To achieve one of the above objects, the present invention provides a method for implementing a virtual load balancer with high availability, comprising the steps of:
S1: a state monitoring module independently deployed in each LB node independently monitors the state information of all virtual load balancers deployed in that LB node and reports it to an SDN controller;
S2: the SDN controller generates a switching policy according to the state information of the virtual load balancers in each LB node, issues the switching policy to a designated LB node, and then re-selects and/or creates a new primary virtual load balancer in the designated LB node.
In step S1, the state monitoring module independently monitors the state information of all virtual load balancers deployed in the LB node, the LB node's network connectivity state, and the LB node's running state, and reports them to the SDN controller at regular intervals through a management network card independently deployed on each LB node;
in step S2, the SDN controller generates the switching policy as follows: when any one of the virtual load balancer state information, the LB node network connectivity state, or the LB node running state is abnormal, an event is triggered that causes the SDN controller to generate a switching policy.
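The trigger condition described for step S2 can be sketched as a simple predicate over the three reported state categories. This is an illustrative sketch, not the patent's implementation; the report field names are assumptions.

```python
def needs_switch(report: dict) -> bool:
    """Return True when any monitored state category is abnormal,
    which is the event that makes the SDN controller generate a
    switching policy. Missing fields are treated as abnormal."""
    checks = ("vlb_state", "network_connectivity", "node_runtime")
    return any(report.get(key) != "normal" for key in checks)

# Example reports as a state monitoring module might submit them
healthy = {"vlb_state": "normal", "network_connectivity": "normal",
           "node_runtime": "normal"}
degraded = {"vlb_state": "normal", "network_connectivity": "unreachable",
            "node_runtime": "normal"}
```

A controller loop would evaluate `needs_switch` on each periodic report and, on `True`, start primary re-selection for the affected LB node.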
As a further improvement of the present invention, step S2 further includes: the SDN controller issues a flow table update notification to the LB node, targeting the flow tables in the data forwarding devices that form the east-west data forwarding links in the LB node.
As a further improvement of the present invention, the data forwarding device comprises a virtual network switch or a virtual router and stores the flow table; the flow table is connected to each individual virtual load balancer through an individual Tap port, and all virtual load balancers in each LB node are managed by the state monitoring module of the LB node to which they belong.
As a further improvement of the present invention, step S2 further includes: the SDN controller issues a traffic forwarding policy to the ovs-agent in the LB node; the ovs-agent modifies forwarding entries in the flow table according to that policy, deleting the flow-table matching rule of the primary virtual load balancer that held that role before the switching policy was issued to the designated LB node, and writing a matching rule for the new primary virtual load balancer created or selected in the designated LB node. A flow-table matching rule consists of a Tap port name and an action.
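One way the delete-then-write flow update could look, expressed as `ovs-ofctl`-style commands built from a Tap port name and an action: a minimal sketch, assuming an `ovs-ofctl`-compatible switch; the bridge and port names are hypothetical, and the patent does not specify this exact rule syntax.

```python
def del_rule_cmd(bridge: str, old_tap: str) -> str:
    """Command deleting the matching rule that steered traffic
    to the former primary's Tap port."""
    return f'ovs-ofctl del-flows {bridge} "in_port={old_tap}"'

def add_rule_cmd(bridge: str, match_port: str, new_tap: str) -> str:
    """Command installing a rule whose action outputs to the
    new primary's Tap port (rule = match + action)."""
    return f'ovs-ofctl add-flow {bridge} "in_port={match_port},actions=output:{new_tap}"'
```

An ovs-agent applying the switching policy would run the delete command for the old primary's port, then the add command pointing at the Tap port of the newly selected primary.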
As a further improvement of the present invention, step S1 further includes: the SDN controller selects a healthy virtual load balancer from those deployed on one of at least two LB nodes as the current primary virtual load balancer, and selects at least one healthy virtual load balancer from the remaining LB nodes as the current backup virtual load balancer(s).
As a further improvement of the present invention, the high-availability implementation method further includes: storing in the SDN controller the correspondence formed between the current primary virtual load balancer and the backup virtual load balancer(s), and updating this correspondence after a new primary virtual load balancer is re-selected and/or created in the designated LB node.
As a further improvement of the present invention, re-selecting and/or creating a new primary virtual load balancer in the designated LB node comprises:
the SDN controller screening out the LB node with the fewest created virtual load balancers, and creating a new virtual load balancer in that node to serve as a primary or backup virtual load balancer.
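The "fewest created virtual load balancers" screening step above reduces to a minimum over per-node counts. A minimal sketch with illustrative node and vLB names (not from the patent):

```python
def pick_least_loaded(nodes: dict) -> str:
    """Return the name of the LB node hosting the fewest
    virtual load balancers; new vLBs are created there."""
    return min(nodes, key=lambda name: len(nodes[name]))

# Hypothetical cluster: node name -> list of deployed vLBs
cluster = {
    "lb-30": ["vlb-361", "vlb-362"],
    "lb-40": ["vlb-461"],
    "lb-80": ["vlb-861", "vlb-862", "vlb-863"],
}
```

On ties, `min` keeps the first node encountered; a production controller might add a secondary criterion such as node load.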
As a further improvement of the present invention, after the new primary virtual load balancer is re-selected and/or created in the designated LB node, the method further comprises:
re-selecting and/or creating a new backup virtual load balancer among all LB nodes.
As a further improvement of the present invention, re-selecting and/or creating a new backup virtual load balancer among all LB nodes comprises:
the SDN controller selecting, from among the backup virtual load balancers currently in a normal state, the one hosted on the LB node that serves as primary for the fewest virtual load balancers, and promoting that backup virtual load balancer to be the new primary;
and selecting and/or creating several new virtual load balancers on the remaining LB nodes in a normal state to serve as backups, and storing in the SDN controller the correspondence formed between these backups and the new primary virtual load balancer selected and/or created in the designated LB node.
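The promotion-plus-reselection sequence just described can be sketched as one function. This is an interpretive sketch of the selection criteria, with assumed data shapes; the patent text does not prescribe these structures.

```python
def promote_and_reselect(backups, primary_counts, healthy_nodes, k=1):
    """backups: list of (vlb_id, node) pairs for backups in normal state.
    primary_counts: node -> number of vLBs for which that node is primary.
    Promote the backup on the node serving as primary for the fewest
    vLBs, then choose k new backup hosts from the remaining healthy
    nodes. Returns (new_primary, new_backup_nodes)."""
    new_primary = min(backups, key=lambda b: primary_counts.get(b[1], 0))
    remaining = [n for n in healthy_nodes if n != new_primary[1]]
    return new_primary, remaining[:k]
```

The controller would then persist the resulting primary/backup correspondence, matching the storage step described above.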
As a further improvement of the present invention, the LB node is configured with a management network card that manages the ovs-agent and the state monitoring module. The management network card is connected to the SDN controller and receives, via the SDN controller, preconfiguration information for normal and abnormal states that the user has configured in advance for each LB node; the preconfiguration information includes predefined values indicating whether each of the virtual load balancer state information, the LB node network connectivity state, or the LB node running state is normal.
Based on the same object, the invention also discloses a computer system, comprising:
the method comprises the steps of deploying a control node of the SDN controller, a plurality of computing nodes and a plurality of LB nodes of the independent deployment state monitoring module, wherein the computing nodes and the LB nodes are jointly connected into an intranet switch;
the state monitoring module independently monitors state information of all deployed virtual load balancers in the LB node and reports the state information to an SDN controller;
and the SDN controller generates a switching strategy according to the state information of the virtual load equalizer in each LB node, and re-selects and/or creates a new main virtual load equalizer in the designated LB node after issuing the switching strategy to the designated LB node.
As a further improvement of the invention, the computer system comprises a number of network nodes, each of which deploys at least one LB node.
Finally, the invention also discloses an electronic device, which comprises:
a processor, a storage device composed of at least one memory unit, and
a communication bus establishing a communication connection between the processor and the storage device;
the processor is configured to execute one or more programs stored in the storage device to implement the steps of the high-availability implementation method of the virtual load balancer described in any of the above.
Compared with the prior art, the invention has the following beneficial effects:
in this application, when the state monitoring module within an LB node independently detects that a deployed virtual load balancer has become abnormal, the SDN controller can re-select and/or create a new primary virtual load balancer and construct a new primary/backup group, so that in cloud platforms and other SDN-based computer systems, access requests from users to virtual machines on the computing nodes are forwarded with high availability at the forwarding layer. At the same time, no daemon needs to be created and maintained per virtual load balancer, reducing the various resource requirements and resource waste involved in monitoring the LB nodes.
Drawings
FIG. 1 is an overall flow chart of a high availability implementation of the virtual load balancer of the present invention;
FIG. 2 is an overall topology of a computer system in one embodiment of a high availability implementation of a virtual load balancer for operating the present invention, wherein the Tunnel of FIG. 2 is formed by east-west data links of an intranet switch;
FIG. 3 is a detailed topology of a computer system built up of two LB nodes and an SDN controller in one embodiment;
Fig. 4 is a schematic diagram of the north-south data link formed when an access request initiated by a user is switched and forwarded between virtual load balancer 361 in LB node 30 and virtual load balancer 461 in LB node 40, as a virtual machine deployed in a computing node is accessed through the intranet switch at the LB intranet VIP (intranet virtual IP address) 192.168.1.100; the dotted line is the data forwarding path before the primary/backup switch, the dash-dot line is the path after the switch, and the VIP (virtual IP) is the IP address at which the virtual load balancer provides service to clients;
FIG. 5 is a topology of an LB cluster connected to computing nodes and containing three LB nodes;
FIG. 6 is an overall topology of a computer system running a highly available implementation of a virtual load balancer of the present invention in another embodiment;
fig. 7 is a topology of an electronic device according to the present invention.
Detailed Description
The present invention is described in detail below with reference to the embodiments shown in the drawings, but it should be understood that the invention is not limited to these embodiments; functional, methodological, or structural equivalents and alternatives to these embodiments made by those skilled in the art fall within the scope of protection of the present invention.
Before the embodiments of the present application are explained in detail, the meanings of the main technical terms and abbreviations involved are set out as necessary.
The term "Igw" refers to an intranet gateway.
The term "Vgw" refers to a virtual gateway.
The term "VM" refers to a virtual machine, and in this application, an intranet may be formed by one or more VMs in a computing node, where the intranet may be a virtual intranet or a virtual local area network.
The term "SDN" refers to a Software Defined Network. An SDN logically comprises a cooperative application layer, a control layer, and a forwarding layer. Software (APPs) in the cooperative application layer accesses, through the SDN controller, the forwarders in the forwarding layer, which are responsible for forwarding user data; the forwarding entries required in the forwarding process are generated by the SDN controller in the control layer.
The term "computing node" refers to a node or server in a cloud platform that provides computing services.
The term "connection" may be either a connection on a computer topology or an electrical connection, or may be a unidirectional data transmission and/or a bidirectional data transmission based on messages or data links.
The term "LB node" refers to a node that provides virtual load balancing services in a cloud platform; it differs from a network node, which deploys a physical router in the cloud platform and accesses the external network. In this application the LB node is therefore a virtual device, a concept parallel to the computing nodes and storage nodes in the cloud platform.
The term "control node" refers to a node or server that deploys and runs SDN control services.
The term "virtual load balancer" (Virtual Load Balancing, vLB) refers to a software application-delivery product. In embodiments of the present application, a virtual load balancer (vLB) simply denotes an application deployed through virtualization technology.
The following describes in detail the implementation of the invention by means of several embodiments.
Embodiment one:
referring to fig. 1 to 5, this embodiment discloses a specific implementation manner of a high availability implementation manner (hereinafter referred to as "method") of a virtual load balancer according to the present invention. The high availability implementation method of the virtual load balancer comprises the following steps S1 and S2.
The virtual load balancer (vLB) automatically distributes the access traffic generated by client requests across multiple back-end cloud servers (or computing nodes) hosting virtual machines (VMs), expanding the capacity of the application system (e.g. a cloud computing platform) to respond to clients and providing a higher level of application fault tolerance. The vLB implements a core network service that distributes incoming network traffic among multiple cloud servers running the same application; it can act as a reverse proxy (such as nginx), distributing network or application traffic among multiple cloud servers (or computing nodes) to increase access capacity (number of concurrent users) and reliability, while also improving overall system performance by reducing the load on individual cloud servers (or computing nodes). In this application, the primary/backup virtual load balancers provide service consistency: the virtual load balancer reads the information in a client request, rewrites the request headers, and forwards the request to an appropriate cloud server, thereby providing high concurrency, high availability, and fast responses to user-initiated access requests.
Of course, to achieve high availability it is generally necessary to deploy one primary virtual load balancer and one (or more) backup virtual load balancers for forwarding client-initiated access requests (client not shown) to the virtual machines in the back-end cloud servers, forming a primary/backup correspondence. The applicant notes that this primary/backup correspondence changes as the network environment changes, as detailed below. In this embodiment the virtual load balancer may be HAProxy or LVS, and there is no need to deploy and maintain Keepalived processes that consume computing, storage, network-bandwidth, and other resources. In particular, the high-availability method disclosed in this embodiment excludes the DR (direct routing) mode of the cloud computing network.
In step S1, the state information of all virtual load balancers deployed in an LB node is independently monitored by the state monitoring module independently deployed in that LB node and is reported to the SDN controller 11. The state monitoring module can be regarded as a monitoring process that checks network connectivity and LB running state — preferably an open-source probe process — which cyclically reads the state of each vLB on the current node, the state of the node itself, and so on, and reports the aggregated result to the SDN controller 11. This state monitoring module differs from the Keepalived processes of the prior art.
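The cyclic read-and-aggregate behaviour of such a state monitoring module can be sketched as follows. This is a stdlib-only illustration, not the patent's monitor: the per-vLB probe is a placeholder, and the report field names are assumptions.

```python
import shutil
import time

def vlb_state(vlb_id: str) -> str:
    """Placeholder probe: a real monitor would check the HAProxy/LVS
    process and its health endpoint; here every vLB reports normal."""
    return "normal"

def build_report(node: str, vlbs) -> dict:
    """Aggregate one monitoring cycle into the payload that would be
    sent to the SDN controller over the management network card."""
    usage = shutil.disk_usage("/")
    return {
        "node": node,
        "timestamp": time.time(),
        "vlbs": {v: vlb_state(v) for v in vlbs},
        "disk_used_pct": round(100 * usage.used / usage.total, 1),
    }
```

A daemon would call `build_report` on a timer and POST (or RPC) the result to the controller; the transport is not specified by the source.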
Referring to fig. 3, in this embodiment the LB node 30 deploys the data forwarding device 31 and accesses the Tunnel through Igw 301 and the intranet physical network card 302, and the LB node 40 deploys the data forwarding device 41 and accesses the Tunnel through Igw 401 and the intranet physical network card 402. East-west data links are formed in the Tunnel through the intranet switch 50. The data forwarding devices 31, 41 may be regarded as virtual routers (vRouters) or virtual network switches (vSwitches). The LB node 30 deploys virtual load balancers 361 to 36m, and the data forwarding device 31 configures Tap ports 351 to 35m, the parameter m being an integer greater than or equal to 2. Virtual load balancer 361 is connected to Tap port 351, and virtual load balancer 36m is connected to Tap port 35m. Similarly, the LB node 40 deploys virtual load balancers 461 to 46n, and the data forwarding device 41 configures Tap ports 451 to 45n, the parameter n being an integer greater than or equal to 2. Virtual load balancer 461 is connected to Tap port 451, and virtual load balancer 46n is connected to Tap port 45n. Note that the Tap ports deployed by the data forwarding device in each LB node may or may not be paired with the virtual load balancers deployed in that node, and each virtual load balancer may be associated with any Tap port. The numbers of virtual load balancers and Tap ports may or may not be equal, and both may be created or deleted at will.
The Linux kernel includes a TUN/TAP virtual network device driver and an associated character device /dev/net/tun, which serves as the interface for exchanging data between user space and kernel space. A user-space application interacts with the driver in the Linux kernel through this device file, operating it no differently from an ordinary file. When the kernel sends a packet to the virtual network device, the packet is stored in a queue associated with the device; the packet is not copied into the user-space buffer until the user-space program reads through the descriptor of the opened character device, which has the effect of delivering the packet directly to user space. The principle is similar when packets are sent via the write system call. The TUN/TAP driver comprises a character-device driver and a network-card driver: the network-card part receives packets from the TCP/IP protocol stack and transmits them, or conversely delivers received packets to the protocol stack for processing, while the character-device part transfers packets between the Linux kernel and user mode, simulating the sending and receiving of data over a physical link.
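The standard way a user-space program attaches to such a device is the TUNSETIFF ioctl on /dev/net/tun, passing a packed struct ifreq. A minimal sketch (the device name is hypothetical; `open_tap` requires root and a Linux host, so only the request packing is exercised here):

```python
import struct

# Constants from <linux/if_tun.h>
TUNSETIFF = 0x400454CA
IFF_TAP = 0x0002     # layer-2 TAP device (vs IFF_TUN for layer-3)
IFF_NO_PI = 0x1000   # no extra packet-information header

def build_ifreq(name: str, flags: int) -> bytes:
    """Pack a struct ifreq (16-byte device name + 16-bit flags)
    as expected by the TUNSETIFF ioctl."""
    return struct.pack("16sH", name.encode(), flags)

def open_tap(name: str):
    """Create/attach a TAP device; requires root and /dev/net/tun."""
    import fcntl
    tun = open("/dev/net/tun", "r+b", buffering=0)
    fcntl.ioctl(tun, TUNSETIFF, build_ifreq(name, IFF_TAP | IFF_NO_PI))
    return tun  # read()/write() on this fd move whole frames
```

After `open_tap`, each `read` returns one Ethernet frame queued by the kernel, matching the queue-then-copy behaviour described above.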
Meanwhile, the LB node 30 deploys ovs-agent 32, management network card 303, and state monitoring module 34; the LB node 40 deploys ovs-agent 42, management network card 403, and state monitoring module 44. When more LB nodes are deployed in the computer system 200, each LB node is configured identically to the above. All LB nodes are connected to the SDN controller 11 through independently configured management network cards; the management network card 303 is connected to state monitoring module 34 and ovs-agent 32, and the management network card 403 is connected to state monitoring module 44 and ovs-agent 42. The ovs-agent is the Open vSwitch agent; through RPC communication it monitors and modifies the forwarding rules of the flow table 311 (or flow table 411), and it isolates different virtual networks and controls the forwarding of network traffic via VLAN, GRE, or VxLAN. By managing the flow tables 311, 411 — for example, adding and deleting virtual ports, creating VLANs, and determining forwarding policies — the ovs-agent manages the virtual switch or virtual network router (both subordinate concepts of the data forwarding device) in which the flow tables are deployed.
The data forwarding device 31 (or 41) comprises a virtual network switch or a virtual router and stores the flow table 311 (or 411); the flow table is connected to each individual virtual load balancer through an individual Tap port, and all virtual load balancers in each LB node are managed by the state monitoring module 34 or 44 of the LB node to which they belong. In this embodiment, the applicant exemplifies the data forwarding devices 31, 41 as virtual switches. In fig. 4, virtual switch 31a deploys virtual switch flow table 311a (a subordinate concept of flow table 311 in fig. 3), and virtual switch 41a deploys virtual switch flow table 411a (a subordinate concept of flow table 411 in fig. 3).
As shown in fig. 4, all LB nodes access the intranet switch 50 via their configured intranet physical network cards 302, 402 and are connected to the computing node 60 via the intranet switch 50. The computing node 60 deploys one or more virtual machines (VMs), which form local north-south data links inside the computing node between the virtual machines and the physical network card 601 through the virtual switch 62. Specifically, the virtual switch 62 is an OVS switch (Open vSwitch), a logical device that emulates the functions of a physical switch in software and hardware. Each virtual switch 62 stores a routing table and a forwarding table, and different VPC networks are isolated by open-source technologies such as VLAN, VXLAN, GRE, or MPLS, so that address spaces of different VPC networks can be reused while routing and forwarding remain isolated inside each VPC network.
Typically, before step S1, the method further includes: the SDN controller 11 selects a healthy virtual load balancer from those deployed on one of at least two LB nodes as the current primary virtual load balancer, and selects at least one healthy virtual load balancer from the remaining LB nodes as the current backup virtual load balancer(s). In general, a virtual load balancer already deployed in an LB node at a given time is determined to be either a primary or a backup; by subsequently selecting among already deployed virtual load balancers according to their state information, this scheme reduces waste of deployed virtual load balancers and makes full, reasonable use of them.
The LB node 30 is configured with a management network card 303 that manages the ovs-agent 32 and the state monitoring module 34. The management network card 303 is connected to the SDN controller 11, and receives, through the SDN controller 11, preconfiguration information of normal and abnormal states configured in advance by a user for the LB node 30. The preconfiguration information includes predefined values indicating whether any one of the state information of the virtual load balancer, the LB node network connectivity state information, or the LB node operation state information is normal. The LB node 40 operates according to the same scheme. The predefined values determined in the LB node 30 and the LB node 40 may be the same or different, and in turn serve as the criteria against which a virtual load balancer is determined to qualify as the primary virtual load balancer.
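As an illustration of how such preconfigured predefined values might be consumed on a node, the following Python sketch compares a node's reported metrics against per-node limits; the metric keys, limit values, and function names are illustrative assumptions, not values given in this disclosure.

```python
# Hypothetical sketch: classify an LB node report as normal/abnormal by
# comparing each monitored metric against its predefined limit. A missing
# metric is treated as abnormal. All names and thresholds are illustrative.

def classify_state(report: dict, predefined: dict) -> str:
    """Return 'abnormal' if any monitored item violates its predefined value."""
    for key, limit in predefined.items():
        value = report.get(key)
        if value is None or value > limit:
            return "abnormal"
    return "normal"

# Predefined values may differ per LB node (as noted for nodes 30 and 40).
node30_limits = {"cpu_util": 0.85, "mem_util": 0.90, "disk_util": 0.95}
node40_limits = {"cpu_util": 0.80, "mem_util": 0.90, "disk_util": 0.95}

print(classify_state({"cpu_util": 0.70, "mem_util": 0.60, "disk_util": 0.40}, node30_limits))  # normal
print(classify_state({"cpu_util": 0.82, "mem_util": 0.60, "disk_util": 0.40}, node40_limits))  # abnormal
```

Note that the same report can be normal against one node's limits and abnormal against another's, which is why the disclosure allows the predefined values of the LB node 30 and the LB node 40 to differ.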
Step S2: the SDN controller 11 generates a switching policy according to the state information of the virtual load balancers in each LB node, and, after issuing the switching policy to a designated LB node, re-selects and/or creates a new primary virtual load balancer in the designated LB node. In particular, re-determining the new primary virtual load balancer based on the switching policy may select an already-deployed virtual load balancer in the designated LB node; or, when no selectable virtual load balancer exists in the designated LB node, the SDN controller 11 issues an instruction to create a Tap port to the virtual switch 31a (or the virtual switch 41a) through the management network card 303 (or the management network card 403) and the ovs-agent 32 (or the ovs-agent 42), and creates a new virtual load balancer in the designated LB node through the management network card 303 (or the management network card 403), using the newly created virtual load balancer as the new primary virtual load balancer.
The applicant notes that a designated LB node is a relative concept; thus the designated LB node may be any LB node in the computer system 200. With reference to figs. 3 and 4, the applicant details an exemplary scenario in which the role of primary virtual load balancer is switched, based on a switching policy, from the virtual load balancer 361 in the LB node 30 to the virtual load balancer 461 in the LB node 40. When the new primary virtual load balancer re-determined based on the switching policy is in the LB node 40, the LB node 40 is the designated node; if the LB node 40 is not suitable for determining the new primary load balancer by selection or creation, and the new primary load balancer is instead determined by selection or creation in, for example, the LB node 80 in fig. 5, then the LB node 80 is identified as the designated LB node. Any LB node in the LB cluster 300 that meets the criteria for hosting the primary virtual load balancer can therefore serve as the designated LB node. The state information against which those criteria are judged is preferably any one of the state information of the virtual load balancer, the LB node network connectivity state information, or the LB node operation state information. Specifically, the operation state information of an LB node can be determined, individually or jointly, from one or more indexes among the CPU (Central Processing Unit) utilization, memory utilization, or disk utilization of the LB node; when determined jointly, different weight coefficients can be set for the CPU, the memory, and the disk respectively. The LB node network connectivity state information comprises one or more of the Tap port state, the intranet physical network card state, or the virtual switch state, determined individually or jointly.
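The joint determination of operation state with per-index weight coefficients described above can be sketched as follows; the weight values and the threshold are illustrative assumptions, not values given in this disclosure.

```python
# Hypothetical sketch: jointly score an LB node's operation state from CPU,
# memory and disk utilization with per-index weight coefficients, as the
# description allows. Weights and threshold are illustrative only.

def operation_state_score(cpu_util, mem_util, disk_util,
                          weights=(0.5, 0.3, 0.2)):
    """Weighted utilization in [0, 1]; a lower score means a healthier node."""
    w_cpu, w_mem, w_disk = weights
    return w_cpu * cpu_util + w_mem * mem_util + w_disk * disk_util

def operation_state_ok(cpu_util, mem_util, disk_util, threshold=0.8):
    """Operation state counts as 'normal' while the weighted score is below the threshold."""
    return operation_state_score(cpu_util, mem_util, disk_util) < threshold

print(operation_state_ok(0.5, 0.6, 0.4))    # True  (score 0.51)
print(operation_state_ok(0.95, 0.9, 0.9))   # False (score 0.925)
```

Each index could equally be judged individually against its own limit; the weighted form simply lets a heavily loaded CPU count for more than a full disk, matching the "different weight coefficients" wording above.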
The state monitoring module 34 independently monitors the state information of all deployed virtual load balancers in the LB node 30, the LB node network connectivity state information, and the LB node operation state information, and periodically reports them to the SDN controller 11 through the management network card 303 independently deployed on the LB node. The state monitoring module 44 does the same for the LB node 40, reporting through the management network card 403. The SDN controller generating a switching policy from the state information of the virtual load balancers in each LB node includes: when any one of the state information of the virtual load balancer, the LB node network connectivity state information, or the LB node operation state information is abnormal, an event that triggers the SDN controller 11 to generate a switching policy is raised.
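The reporting-and-trigger behavior above can be sketched as a report assembled on the monitoring side and an any-abnormal test on the controller side; the report structure and function names are illustrative assumptions, not part of this disclosure.

```python
# Hypothetical sketch: a state monitoring module assembles a periodic report,
# and the controller raises a switching-policy event when ANY of the three
# categories (balancer state, network connectivity, operation state) is abnormal.

def build_report(balancer_states, connectivity_ok, operation_ok):
    """balancer_states: mapping of balancer id -> 'normal' / 'abnormal'."""
    return {
        "balancers": balancer_states,
        "connectivity_ok": connectivity_ok,
        "operation_ok": operation_ok,
    }

def switching_event_triggered(report):
    """True as soon as any one monitored category is abnormal."""
    any_balancer_abnormal = any(s != "normal" for s in report["balancers"].values())
    return (any_balancer_abnormal
            or not report["connectivity_ok"]
            or not report["operation_ok"])

healthy = build_report({"lb361": "normal", "lb362": "normal"}, True, True)
degraded = build_report({"lb361": "abnormal", "lb362": "normal"}, True, True)
print(switching_event_triggered(healthy))   # False
print(switching_event_triggered(degraded))  # True
```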
The high availability implementation method further comprises: the SDN controller 11 issues a flow table update notification to the LB node upon an event concerning the flow table in the data forwarding device used to form the east-west data forwarding link in the LB node. After the correspondence between the primary and backup virtual load balancers changes, this scheme determines the forwarding paths of the data flow before and after the switch, as shown in fig. 4.
In this embodiment, step S2 further includes: after the state of a virtual load balancer in an LB node is switched, the SDN controller 11 issues a flow forwarding policy to the ovs-agent 32 (42) in the LB node, and the ovs-agent 32 (42) modifies the forwarding entries in the flow table 311 (411) according to the flow forwarding policy, so as to delete the flow table matching rule of the primary virtual load balancer in the state before the switching policy was issued to the designated LB node, and to write a flow table matching rule for the new primary virtual load balancer created or selected in the designated LB node. A flow table matching rule consists of a Tap port name and an action.
As shown in fig. 4, before the switch the virtual load balancer 361 in the LB node 30 serves as the primary virtual load balancer and the virtual load balancer 461 in the LB node 40 serves as the backup virtual load balancer, forming a primary/backup correspondence. Before the switch, the forwarding entry in the virtual switch flow table 311a forwards the user's access request and data flow destined for the virtual machine VM to the back-end virtual machine VM of the computing node 60 along the path corresponding to the dashed line in fig. 4 (i.e., the arrowed dashed line from VIP 192.168.1.100 via the LB node 30 to the virtual machine VM of the computing node 60). When the primary/backup virtual load balancers are switched, the virtual load balancer 461 serves as the new primary virtual load balancer and the virtual load balancer 361 serves as a backup virtual load balancer; or, when the network connectivity state information and operation state information of the LB node 30 are abnormal, one or more LB nodes with normal network connectivity state information and operation state information are re-selected in the LB cluster 300, so as to select or re-create one or more new virtual load balancers to serve as backup virtual load balancers. At this time, the forwarding entry in the virtual switch flow table 411a forwards the user's access request and data flow destined for the virtual machine VM to the back-end virtual machine VM of the computing node 60 along the path corresponding to the dash-dotted line in fig. 4 (i.e., the arrowed dash-dotted line from VIP 192.168.1.100 via the LB node 40 to the virtual machine VM of the computing node 60).
In the above procedure, the action in the flow table matching rule in the virtual switch flow table 311a is modified to deny (before the switch, the action in the flow table matching rule in the virtual switch flow table 311a was accept), and the action in the flow table matching rule in the virtual switch flow table 411a is modified to accept (before the switch, the action in the flow table matching rule in the virtual switch flow table 411a was deny).
Preferably, in this embodiment, the method further includes storing the correspondence formed between the primary virtual load balancer and the backup virtual load balancer in the current state to the SDN controller 11, and updating the correspondence after re-selecting and/or creating a new primary virtual load balancer in the designated LB node. Whether to re-select an already-deployed virtual load balancer or to create a new one in the designated LB node, and to determine it as the new primary virtual load balancer, is decided according to a determination policy.
In an embodiment, the foregoing determination strategy is specifically described below.
The SDN controller 11 screens out the LB node with the fewest created virtual load balancers, and creates a new virtual load balancer in that LB node as the primary virtual load balancer or a backup virtual load balancer.
Referring to fig. 3, if the number of virtual load balancers already deployed in the LB node 30 is 3 and the number already deployed in the LB node 40 is 4, and the LB node network connectivity state information and LB node operation state information of the LB node 30 are both normal, it is preferable to create the new virtual load balancer in the LB node 30. In this way the resources of each LB node are fully utilized and the number of virtual load balancers in each LB node tends to be consistent, which is more conducive to forming pair-wise primary/backup correspondences.
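The determination policy above (pick a healthy LB node with the fewest created virtual load balancers) can be sketched as follows; the field names are illustrative assumptions.

```python
# Hypothetical sketch of the determination policy: among LB nodes whose
# connectivity and operation states are normal, pick the one hosting the
# fewest deployed virtual load balancers for the new balancer.

def pick_node_for_new_balancer(nodes):
    healthy = [n for n in nodes if n["connectivity_ok"] and n["operation_ok"]]
    if not healthy:
        raise RuntimeError("no healthy LB node available")
    return min(healthy, key=lambda n: n["balancer_count"])["name"]

# Mirrors the example: node 30 has 3 balancers, node 40 has 4, both healthy.
nodes = [
    {"name": "lb-node-30", "balancer_count": 3, "connectivity_ok": True, "operation_ok": True},
    {"name": "lb-node-40", "balancer_count": 4, "connectivity_ok": True, "operation_ok": True},
]
print(pick_node_for_new_balancer(nodes))  # lb-node-30
```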
When the SDN controller 11 generates the event of a switching policy, the SDN controller 11 selects, from the LB node that currently hosts the fewest backup virtual load balancers among all LB nodes, a backup virtual load balancer of the current state that is in a normal state and promotes it to be the primary virtual load balancer; it then selects and/or creates several new virtual load balancers from the remaining LB nodes in a normal state as backup virtual load balancers, and stores the correspondence formed between these backup virtual load balancers and the newly selected and/or created primary virtual load balancer in the designated LB node to the SDN controller 11. Further, after the SDN controller 11 generates the event of the switching policy, the SDN controller 11 creates a new virtual load balancer, as a backup, in the LB node hosting the fewest primary and backup virtual load balancers among all LB nodes. Whether to select an already-deployed virtual load balancer from the remaining LB nodes in a normal state as a backup, or to create one or more new virtual load balancers as backups, may be determined by the SDN controller 11 from whether one or several pieces of status information of each LB node in the LB cluster 300 are normal.
When the LB cluster 300 includes a plurality of LB nodes (refer to the LB nodes 30, 80 and 40 in fig. 5), the problem of re-determining, in the current state, which LB nodes host the primary and backup virtual load balancers must be considered, since excessive virtual load balancers increase the consumption of resources such as CPU, memory and disk in the designated LB node. The LB node 80 in fig. 5 still adopts a topology similar to that of the LB node 30 or 40, and accesses the tunnel formed by the intranet switch 50 through the intranet physical network card 811. Therefore, in this embodiment, as to whether to create a new virtual load balancer as the primary virtual load balancer or to select an already-deployed one, the above determination policy may need to be applied in sequence, which avoids wasting the already-deployed (i.e., already-created) virtual load balancers while also weighing the resource consumption of each LB node. The SDN controller 11 is thereby further configured to determine, according to the state information of the primary/backup virtual load balancers, in which designated LB node to determine a new primary virtual load balancer, and to allow the data packets and data flows corresponding to a user's access request toward a virtual machine to be forwarded, along the forwarding path determined by the newly determined primary virtual load balancer, to the back-end virtual machine VM in the computing node 60.
In summary, in this embodiment, when the state monitoring module in an LB node, monitoring independently, finds the virtual load balancers deployed in that LB node to be abnormal, a new primary virtual load balancer can be re-selected and/or created by the SDN controller so as to construct a new primary/backup pair; a user's access request initiated toward a virtual machine VM deployed in a computing node is thus served, in a computer system such as a cloud platform based on an SDN architecture, with high availability of data packet forwarding at the forwarding layer. Meanwhile, since no daemons need to be created and maintained for the primary/backup virtual load balancers (or for other virtual load balancers that have been created but are not defined as primary/backup), the various resource requirements and resource waste entailed by probing LB nodes can be significantly reduced. Finally, through the switching policy and the determination policy, the primary/backup virtual load balancers are selected and defined reasonably, so that high availability of the virtual load balancer is ultimately realized, and the efficiency and reliability of the cloud computing platform's response to access requests initiated by a user or an administrator at a client (such as a computer or a GUI) are remarkably improved.
Embodiment two:
in connection with fig. 2, a computer system (hereinafter referred to as "system") is also disclosed according to an embodiment of the disclosed high availability implementation method of a virtual load balancer.
In this embodiment, a computer system 200 includes: a control node 10 deploying the SDN controller 11, a plurality of computing nodes 60, and a plurality of LB nodes each independently deploying a state monitoring module; the computing nodes and the LB nodes are jointly connected to the intranet switch 50. The state monitoring module independently monitors the state information of all deployed virtual load balancers in its LB node and reports the state information to the SDN controller 11. The SDN controller 11 generates a switching policy according to the state information of the virtual load balancers in each LB node, and re-selects and/or creates a new primary virtual load balancer in the designated LB node after issuing the switching policy to the designated LB node.
As a reasonable modification of the computer system 200 disclosed in this embodiment, the computer system 200A is also disclosed in connection with fig. 6, where the computer system 200A includes a plurality of network nodes 70, 71, or even a greater number of network nodes, and the network nodes deploy at least one LB node. Specifically, the network node 70 deploys the LB node 30, and the network node 71 deploys the LB node 40; even further, the LB node 40 may be stripped from the network node 71 and deployed separately and commonly connected to the Tunnel formed by the intranet switch 50. Although only one compute node 60 is shown in fig. 2 and 6, one of ordinary skill in the art can reasonably predict that the computer system 200 (computer system 200A) can deploy a greater number of compute nodes and deploy one or more virtual machines or containers in each compute node 60.
The system disclosed in this embodiment supports mainstream virtualization platforms such as VMware ESXi and Linux KVM, fully exploits the advantages of virtualization, realizes rapid deployment, batch deployment, image backup and rapid recovery, and can migrate flexibly. Meanwhile, rich load balancing scheduling algorithms are supported, and different algorithms can be adopted according to the specific application scenario. The supported algorithms include: round robin, weighted round robin, least connections, weighted least connections, random, source address hash, destination address hash, source address and port hash, etc. These load balancing algorithms are suitable for layer-4 through layer-7 server load balancing. In addition, distribution based on application features is supported for layer-7 server load balancing, e.g., based on HTTP header fields, based on content, etc. In particular, the system can conveniently realize rapid deployment of virtual network functions (VNFs), supports the VXLAN layer-3 gateway function, realizes the service chain function under the intervention of the SDN controller 11, and supports a plurality of SDN protocols such as NETCONF (an XML-based network configuration protocol) and OpenFlow. Meanwhile, the computer system 200 and/or the computer system 200A disclosed in this embodiment may be regarded as a cloud computing platform, a data center, or a cluster server.
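As one concrete example from the algorithm list above, source-address hash scheduling can be sketched as follows; the hashing choice (MD5 over the dotted-quad string) and the backend addresses are illustrative assumptions, not details given in this disclosure.

```python
import hashlib

def pick_backend_by_source_hash(src_ip, backends):
    """Source-address hash scheduling: the same client IP is always mapped
    to the same back-end, as long as the backend list is unchanged."""
    digest = hashlib.md5(src_ip.encode("utf-8")).hexdigest()
    return backends[int(digest, 16) % len(backends)]

backends = ["192.168.1.11:80", "192.168.1.12:80", "192.168.1.13:80"]
first = pick_backend_by_source_hash("10.0.0.7", backends)
second = pick_backend_by_source_hash("10.0.0.7", backends)
print(first == second)  # True: deterministic per source address
```

The other listed algorithms differ only in the selection rule (e.g., a rotating index for round robin, a weighted counter for weighted round robin, live connection counts for least connections), while the dispatch structure stays the same.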
The system disclosed in this embodiment has the same parts as those in the first embodiment, and will not be described in detail herein with reference to the first embodiment.
Embodiment III:
referring to fig. 7, the present embodiment discloses an electronic device 500, including: a processor 51, a memory 52 and a computer program stored in the memory 52 and configured to be executed by the processor 51, the processor 51 performing the steps in the high availability implementation of the virtual load balancer as described in embodiment one when executing the computer program.
Specifically, the memory 52 is composed of a plurality of storage units, i.e., storage units 521 to 52j, where the parameter j is a positive integer greater than or equal to two. Both the processor 51 and the memory 52 access a system bus 53. The form of the system bus 53 need not be particularly limited; it may be an I2C bus, SPI bus, SCI bus, PCI-e bus, ISA bus, etc., and can be chosen appropriately according to the particular type of the electronic device 500 and the requirements of the application scenario. Since the system bus 53 is not the inventive point of this application, it is not elaborated further here. The storage units 521 to 52j may be physical storage units, in which case the electronic device 500 is understood as a physical computer, a computer cluster, or a cluster server; the storage units 521 to 52j may also be virtual storage units, for example based on a virtual storage space formed from physical storage devices through an underlying virtualization technology, in which case the electronic device 500 is configured as a virtual device such as a virtual server or a virtual cluster.
The technical solutions of the same parts of the electronic device 500 in this embodiment as those of the first and/or second embodiments are shown in the first and/or second embodiments, and are not described herein again.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The above list of detailed descriptions is only specific to practical embodiments of the present invention, and they are not intended to limit the scope of the present invention, and all equivalent embodiments or modifications that do not depart from the spirit of the present invention should be included in the scope of the present invention.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only a single independent technical solution; this manner of description is adopted for clarity only. The specification should be taken as a whole, and the technical solutions in the embodiments may be combined as appropriate to form other embodiments that will be apparent to those skilled in the art.

Claims (14)

1. The high availability implementation method of the virtual load equalizer is characterized by comprising the following steps:
S1, independently monitoring state information of all deployed virtual load balancers in each LB node through a state monitoring module independently deployed in each LB node, and reporting the state information to an SDN controller, wherein a daemon is not configured in the virtual load balancers;
s2, the SDN controller generates a switching strategy according to the state information of the virtual load equalizer in each LB node, and re-selects and/or creates a new main virtual load equalizer in the designated LB node after issuing the switching strategy to the designated LB node.
2. The method of claim 1, wherein in step S1, the state monitoring module monitors state information of all deployed virtual load balancers in the LB nodes, LB node network connectivity state information and LB node operation state information independently, and reports the state information, the LB node network connectivity state information and the LB node operation state information to the SDN controller at regular time through a management network card independently deployed at each LB node;
in step S2, the SDN controller generating a switching policy according to the state information of the virtual load balancer in each LB node includes: when any one of the state information of the virtual load equalizer, the LB node network connection state information or the LB node operation state information is abnormal, triggering the SDN controller to generate an event of switching strategy.
3. The high availability implementation method according to claim 1, wherein the step S2 further comprises: the SDN controller issues a flow table update notification to the LB node for events of flow tables in data forwarding devices used to form east-west data forwarding links in the LB node.
4. A high availability implementation according to claim 3, wherein the data forwarding device comprises a virtual network switch or a virtual router, the data forwarding device storing the flow table, the flow table being connected to separate virtual load balancers through separate Tap ports, all virtual load balancers in each LB node being hosted by the status monitoring module in the LB node to which they belong.
5. The high availability implementation method according to claim 3, wherein the step S2 further comprises: the SDN controller issues a flow forwarding policy to the ovs-agent in the LB node, and the ovs-agent modifies the forwarding entries in the flow table according to the flow forwarding policy, so as to delete the flow table matching rule of the primary virtual load balancer in the state before the switching policy was issued to the designated LB node, and to write a flow table matching rule for the new primary virtual load balancer created or selected in the designated LB node, wherein a flow table matching rule consists of a Tap port name and an action.
6. The high availability implementation according to any one of claims 2 to 5, wherein before step S1, further comprises: and selecting a normal virtual load balancer from deployed virtual load balancers of one of at least two LB nodes by the SDN controller as a main virtual load balancer of the current state, and selecting at least one normal virtual load balancer from the rest LB nodes as a standby virtual load balancer of the current state.
7. The high availability implementation of claim 6, further comprising: and storing the corresponding relation formed between the main virtual load equalizer and the standby virtual load equalizer in the current state to an SDN controller, and updating the corresponding relation after re-selecting and/or creating a new main virtual load equalizer in the appointed LB node.
8. The high availability implementation of claim 6, wherein re-selecting and/or creating a new primary virtual load balancer in the designated LB node comprises: screening out LB nodes with the least created virtual load balancers by the SDN controller, and creating a new virtual load balancers in the LB nodes with the least created virtual load balancers to serve as a main virtual load balancers or standby virtual load balancers.
9. The high availability implementation of claim 8, further comprising, after re-selecting and/or creating a new primary virtual load balancer in the designated LB node: a new backup virtual load balancer is re-selected and/or created among all LB nodes.
10. The high availability implementation of claim 9, wherein the re-selecting and/or creating a new backup virtual load balancer among all LB nodes comprises:
selecting, by the SDN controller from all LB nodes, the LB node that currently hosts the fewest backup virtual load balancers, and promoting a backup virtual load balancer of the current state that is in a normal state in that LB node to serve as the primary virtual load balancer;
and selecting and/or creating a plurality of new virtual load balancers from the rest LB nodes with normal states as backup virtual load balancers, and storing the corresponding relation formed by the backup virtual load balancers and the newly selected and/or created new main virtual load balancers in the appointed LB nodes to an SDN controller.
11. The method of claim 6, wherein the LB node is configured with a management network card that manages the ovs-agent and the state monitoring module, the management network card is connected to the SDN controller, and the management network card receives, through the SDN controller, preconfiguration information of normal and abnormal states configured in advance by a user for each LB node, the preconfiguration information including predefined values indicating whether any one of the state information of the virtual load balancer, the LB node network connectivity state information, or the LB node operation state information is normal.
12. A computer system, comprising:
the method comprises the steps of deploying a control node of the SDN controller, a plurality of computing nodes and a plurality of LB nodes of the independent deployment state monitoring module, wherein the computing nodes and the LB nodes are jointly connected into an intranet switch;
the state monitoring module independently monitors state information of all deployed virtual load balancers in the LB node and reports the state information to the SDN controller, and the virtual load balancers are not configured with daemons;
and the SDN controller generates a switching strategy according to the state information of the virtual load equalizer in each LB node, and re-selects and/or creates a new main virtual load equalizer in the designated LB node after issuing the switching strategy to the designated LB node.
13. The computer system of claim 12, wherein the computer system comprises a number of network nodes, the network nodes deploying at least one LB node.
14. An electronic device, comprising:
a processor, a memory device composed of at least one memory unit, and
a communication bus establishing a communication connection between the processor and the memory device;
the processor is configured to execute one or more programs stored in the storage device to implement the high availability implementation of the load balancer of any of claims 1-11.
CN202110935111.1A 2021-08-16 2021-08-16 High-availability implementation method and system of virtual load equalizer and electronic equipment Active CN113709220B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110935111.1A CN113709220B (en) 2021-08-16 2021-08-16 High-availability implementation method and system of virtual load equalizer and electronic equipment


Publications (2)

Publication Number Publication Date
CN113709220A CN113709220A (en) 2021-11-26
CN113709220B true CN113709220B (en) 2024-03-22

Family

ID=78652751

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110935111.1A Active CN113709220B (en) 2021-08-16 2021-08-16 High-availability implementation method and system of virtual load equalizer and electronic equipment

Country Status (1)

Country Link
CN (1) CN113709220B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114785671A (en) * 2022-05-18 2022-07-22 江苏安超云软件有限公司 Method, system and electronic device for realizing high availability of virtual load balancer

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104780115A (en) * 2014-01-14 2015-07-15 上海盛大网络发展有限公司 Load balancing method and load balancing system in cloud computing environment
CN104935672A (en) * 2015-06-29 2015-09-23 杭州华三通信技术有限公司 High available realizing method and equipment of load balancing service
CN106921553A (en) * 2015-12-28 2017-07-04 中移(苏州)软件技术有限公司 The method and system of High Availabitity are realized in virtual network
CN108063783A (en) * 2016-11-08 2018-05-22 上海有云信息技术有限公司 The dispositions method and device of a kind of load equalizer
CN109937401A (en) * 2016-11-15 2019-06-25 微软技术许可有限责任公司 Via the real-time migration for the load balancing virtual machine that business bypass carries out


Also Published As

Publication number Publication date
CN113709220A (en) 2021-11-26

Similar Documents

Publication Publication Date Title
CN110113441B (en) Computer equipment, system and method for realizing load balance
US10534601B1 (en) In-service software upgrade of virtual router with reduced packet loss
US10949233B2 (en) Optimized virtual network function service chaining with hardware acceleration
EP2559206B1 (en) Method of identifying destination in a virtual environment
US8027354B1 (en) Network consolidation for virtualized servers
US8613085B2 (en) Method and system for traffic management via virtual machine migration
JP6466003B2 (en) Method and apparatus for VNF failover
KR102014433B1 (en) System and method for supporting discovery and routing degraded fat-trees in a middleware machine environment
US11153194B2 (en) Control plane isolation for software defined network routing services
WO2016121973A1 (en) Node system, server device, scaling control method, and program
US10404773B2 (en) Distributed cluster processing system and packet processing method thereof
EP2309680A1 (en) Switching API
CN111698158B (en) Method and device for electing master equipment and machine-readable storage medium
US11824765B2 (en) Fast redirect of traffic when pods fail
US20160205033A1 (en) Pool element status information synchronization method, pool register, and pool element
CN114080785A (en) Highly scalable, software defined intra-network multicasting of load statistics
CN111835685A (en) Method and server for monitoring running state of Nginx network isolation space
CN113709220B (en) High-availability implementation method and system of virtual load equalizer and electronic equipment
CN111835684B (en) Network isolation monitoring method and system for haproxy equipment
JP6604336B2 (en) Information processing apparatus, information processing method, and program
Ma et al. A comprehensive study on load balancers for vnf chains horizontal scaling
WO2023207189A1 (en) Load balancing method and system, computer storage medium, and electronic device
US11418382B2 (en) Method of cooperative active-standby failover between logical routers based on health of attached services
US11528222B2 (en) Decentralized control plane
JP7020556B2 (en) Disaster recovery control methods, communication devices, communication systems, and programs

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant