CN113709054A - Keepalived-based LVS (Linux Virtual Server) system deployment adjustment method, device and system - Google Patents

Keepalived-based LVS (Linux Virtual Server) system deployment adjustment method, device and system

Info

Publication number
CN113709054A
CN113709054A (application number CN202110804895.4A)
Authority
CN
China
Prior art keywords
node
physical server
load balancing
flow distribution
server node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110804895.4A
Other languages
Chinese (zh)
Inventor
杨观止 (Yang Guanzhi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinan Inspur Data Technology Co Ltd
Original Assignee
Jinan Inspur Data Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinan Inspur Data Technology Co Ltd filed Critical Jinan Inspur Data Technology Co Ltd
Priority to CN202110804895.4A
Publication of CN113709054A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/12 Avoiding congestion; Recovering from congestion
    • H04L 47/125 Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/505 Allocation of resources to service a request, the resource being a machine, considering the load
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a keepalived-based LVS system deployment adjustment method, which comprises the following steps: installing keepalived components on the load balancing nodes of the built LVS system; the load balancing node currently executing the traffic distribution task periodically obtains the current traffic distribution weight of each physical server node in the LVS system; if the difference between a physical server node's current traffic distribution weight and its last stored traffic distribution weight is larger than a preset update-write threshold, the node's current traffic distribution weight is updated and written. The invention further provides a device and a system for keepalived-based LVS system deployment adjustment, which effectively improve the reliability of LVS system deployment adjustment and the adjustment efficiency of load balancing.

Description

Keepalived-based LVS (Linux Virtual Server) system deployment adjustment method, device and system
Technical Field
The invention relates to the field of LVS deployment, and in particular to a keepalived-based LVS system deployment adjustment method, device and system.
Background
In a Linux Virtual Server (LVS) system in DR (Direct Routing) mode, the load balancer dynamically selects an RS (Real Server, i.e. physical server) according to the load of each RS, directly rewrites the destination MAC address (Media Access Control address, i.e. Ethernet address) of the data frame to the MAC address of the selected server without modifying or encapsulating the IP packet, and then transmits the modified data frame on the local area network of the server group.
Because the destination MAC address of the data frame is that of the selected server, the RS can receive the data frame and extract the IP packet from it; when the RS finds that the packet's destination address VIP (Virtual IP) is configured on a local network device, it processes the packet and returns the response directly to the user according to the routing table.
However, when building a DR-mode LVS system, a single load balancer faces a single point of failure: once the load balancer goes down, client requests can no longer be forwarded and the whole service becomes unavailable to clients. Moreover, among the many scheduling algorithms implemented by LVS, the weighted least-connection algorithm (WLC) stands out for its efficiency and practicality. It improves on the least-connection algorithm; its core idea is to assign each RS server a weight Wi representing its performance, a larger weight meaning better performance. When distributing a request, the load balancer searches, among all RS servers whose Wi is non-zero, for the node with the smallest ratio of connection count to weight; if no such node can be found, NULL is returned.
The WLC algorithm works as follows. Represent a group of RS servers as {S0, S1, ..., Sn-1}, let W(Si) denote the weight of node Si and C(Si) its current connection count; the total current connection count of the RS servers is then Csum = ΣC(Si), i = 0, 1, ..., n-1. If the node the load balancer selects from the cluster is Sm, the selected RS server node must satisfy (C(Sm)/Csum)/W(Sm) = min{(C(Si)/Csum)/W(Si)}. Since Csum is a constant, this simplifies to C(Sm)/W(Sm) = min{C(Si)/W(Si)}. Because the weight of every node in the RS server pool is greater than zero, and division is more expensive than multiplication, the comparison can be rewritten as C(Sm)·W(Si) ≤ C(Si)·W(Sm).
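The selection rule above can be sketched in a few lines of Python. This is an illustrative model only: the function name and the (connections, weight) data layout are not from the patent, and LVS itself implements the algorithm inside the kernel.

```python
def wlc_select(servers):
    """Weighted least-connection pick over (connections, weight) pairs.

    Skips nodes whose weight is zero; returns the index of the node with
    the smallest C(Si)/W(Si), or None if every weight is zero, mirroring
    the NULL return described above. The cross-multiplied comparison
    C(Si)*W(Sm) < C(Sm)*W(Si) avoids the costlier division.
    """
    selected = None
    for i, (c_i, w_i) in enumerate(servers):
        if w_i <= 0:
            continue  # Wi must be non-zero for a node to be eligible
        if selected is None:
            selected = i
            continue
        c_m, w_m = servers[selected]
        if c_i * w_m < c_m * w_i:  # C(Si)/W(Si) < C(Sm)/W(Sm)
            selected = i
    return selected
```

For instance, with connection/weight pairs (10, 1) and (10, 5), the second node's connection-to-weight ratio is lower, so it is selected.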
Although the WLC algorithm is efficient and practical, it has two disadvantages: 1) the weights of all RS servers are configured by an administrator rather than determined by each node's load; configuring weights by administrator experience can degrade server efficiency and, in turn, affect the whole server cluster; 2) after weights are assigned, the performance of a heavily weighted node in the cluster may drop as its assigned tasks accumulate; if at that point the weight is not changed dynamically but tasks keep being assigned to the node, the node may go down.
Disclosure of Invention
To solve the above problems in the prior art, the invention provides a keepalived-based LVS system deployment adjustment method, device and system. They effectively address the high risk posed by load balancing node failure during LVS system construction, the low scheduling efficiency of the load balancing scheduling algorithm, and the system's tendency to crash; they effectively improve the reliability of LVS system deployment adjustment and the adjustment efficiency of load balancing, and avoid crashes of RS server nodes.
In a first aspect, the invention provides a keepalived-based LVS system deployment adjustment method, comprising the following steps:
building and deploying an LVS system, and installing keepalived components on a first load balancing node and a second load balancing node in the LVS system, wherein the first and second load balancing nodes complete the traffic distribution task in master-standby mode;
the load balancing node currently executing the traffic distribution task periodically obtains the current traffic distribution weight of each physical server node in the LVS system;
judging whether the difference between a physical server node's current traffic distribution weight and its last stored traffic distribution weight is larger than a preset update-write threshold; if so, updating and writing the node's current traffic distribution weight into the load balancing node currently executing the traffic distribution task;
and the load balancing node currently executing the traffic distribution task executes it according to the traffic distribution weights of the physical server nodes written at the last update.
Optionally, building and deploying the LVS system and installing keepalived components on the first and second load balancing nodes specifically comprises:
creating a plurality of virtual machine nodes, wherein a first virtual machine node is a first load balancer node, a second virtual machine node and a third virtual machine node are physical server nodes, and a fourth virtual machine node is a second load balancer node;
respectively configuring target addresses of a first virtual machine node, a second virtual machine node and a third virtual machine node based on an LVS system;
and when the first load balancing node is down, the second load balancing node takes over the traffic distribution task.
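The master/standby takeover described above can be illustrated with a tiny simulation. The names and data layout are hypothetical; in practice keepalived achieves this with the VRRP protocol and configured node priorities.

```python
def elect_master(nodes):
    """Return the name of the highest-priority node that is still up.

    `nodes` maps a node name to a (priority, alive) pair. While the first
    load balancer is alive it holds the virtual IP and distributes
    traffic; once it goes down, the standby with the next-highest
    priority takes over the traffic distribution task.
    """
    alive = {name: prio for name, (prio, up) in nodes.items() if up}
    return max(alive, key=alive.get) if alive else None
```

For example, with `{"lb1": (100, True), "lb2": (90, True)}` the master is `lb1`; marking `lb1` as down hands the role to `lb2`.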
Optionally, the current traffic distribution weight of a physical server node is determined by the node's current load information.
Further, the load information of the physical server node includes: the utilization rate of a CPU in the physical server node, the utilization rate of a memory in the physical server node and the utilization rate of network bandwidth in the physical server node.
Further, the current traffic distribution weight of a physical server node is determined from its current load information as follows:
F(Si_new) = A × (Kcpu × (1 − CPU_USE_new) + Kmem × (1 − MEM_USE_new) + Kband × (1 − BAND_USE_new))
where F(Si_new) is the current traffic distribution weight of the physical server node; CPU_USE_new, MEM_USE_new and BAND_USE_new are the current utilization of the CPU, the memory and the network bandwidth in the node; Kcpu, Kmem and Kband are the influence factors of CPU, memory and network bandwidth utilization on the node's distribution weight, with Kcpu + Kmem + Kband = 1; and A is a reference coefficient for adjusting the weight change.
Further, if any of the current CPU, memory or network bandwidth utilization of a physical server node exceeds the preset utilization threshold, the node's current traffic distribution weight is 0.
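Putting the weight formula and the overload rule together gives a minimal sketch. The K factor values, A, and the threshold below are illustrative choices, not values fixed by the patent, which only constrains the factors to sum to 1.

```python
def traffic_weight(cpu_use, mem_use, band_use,
                   k_cpu=0.5, k_mem=0.3, k_band=0.2,
                   a=100, threshold=0.9):
    """F(Si_new) = A*(Kcpu*(1-CPU) + Kmem*(1-MEM) + Kband*(1-BAND)).

    Utilizations are fractions in [0, 1]. If any utilization exceeds the
    preset threshold, the weight becomes 0 so no new traffic is sent to
    the node.
    """
    assert abs(k_cpu + k_mem + k_band - 1.0) < 1e-9
    if max(cpu_use, mem_use, band_use) > threshold:
        return 0.0
    return a * (k_cpu * (1 - cpu_use)
                + k_mem * (1 - mem_use)
                + k_band * (1 - band_use))
```

A lightly loaded node (all utilizations 0.5) thus scores 50 with these example factors, while a node with 95% CPU utilization scores 0 and receives no new traffic.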
Optionally, the preset update-write threshold is determined by the load information of the physical server; specifically, it is the product of the weight-change adjustment reference coefficient and a preset update-write weight factor, where the factor is:
B = Kcpu × |CPU_USE_new − CPU_USE_old| + Kmem × |MEM_USE_new − MEM_USE_old| + Kband × |BAND_USE_new − BAND_USE_old|
where B is the preset update-write weight factor; CPU_USE_new is the current CPU utilization of the physical server node and CPU_USE_old its CPU utilization at the last store; likewise MEM_USE_new and MEM_USE_old for memory utilization, and BAND_USE_new and BAND_USE_old for network bandwidth utilization.
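The update decision can then be sketched as follows, again with illustrative factor values; only the structure of the formula comes from the text above.

```python
def update_factor(new, old, k_cpu=0.5, k_mem=0.3, k_band=0.2):
    """B = Kcpu*|dCPU| + Kmem*|dMEM| + Kband*|dBAND| computed over
    (cpu, mem, band) utilization triples, per the formula above."""
    return (k_cpu * abs(new[0] - old[0])
            + k_mem * abs(new[1] - old[1])
            + k_band * abs(new[2] - old[2]))

def should_write(weight_new, weight_old, new, old, a=100):
    """Write the new weight into the balancer only when the weight change
    exceeds the preset update-write threshold A * B."""
    return abs(weight_new - weight_old) > a * update_factor(new, old)
```

With the example factors, a 10% jump in CPU utilization alone gives B = 0.05 and a threshold of 5, so a weight change of 10 triggers a write while a change of 2 does not.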
Optionally, the load balancing node currently executing the traffic distribution task executes it according to the traffic distribution weights of the physical server nodes written at the last update, specifically:
the load balancing node currently executing the traffic distribution task obtains the traffic distribution weights of the physical server nodes written at the last update;
and determining the physical server node with the largest traffic distribution weight, and preferentially assigning the traffic distribution task to be executed to that node.
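These two steps amount to a max-weight pick over the last-written weights. A sketch, with hypothetical node names:

```python
def pick_node(weights):
    """Return the physical server node with the largest last-written
    traffic distribution weight. Nodes with weight 0 (overloaded, per the
    rule above) are never chosen; returns None if no node is eligible."""
    eligible = {node: w for node, w in weights.items() if w > 0}
    return max(eligible, key=eligible.get) if eligible else None
```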
In a second aspect, the invention provides a keepalived-based LVS system deployment adjustment device, comprising:
a building module, which builds and deploys an LVS system and installs keepalived components on the first and second load balancing nodes in the built LVS system, wherein the first and second load balancing nodes complete the traffic distribution task in master-standby mode;
an acquiring module, by which the load balancing node currently executing the traffic distribution task periodically obtains the current traffic distribution weight of each physical server node in the LVS system;
a judging module, which judges whether the difference between a physical server node's current traffic distribution weight and the corresponding last stored weight exceeds the preset update-write threshold, and if so, updates and writes the node's current traffic distribution weight into the load balancing node currently executing the traffic distribution task;
and a distribution module, by which the load balancing node currently executing the traffic distribution task executes it according to the traffic distribution weights written at the last update.
In a third aspect, the invention provides a keepalived-based LVS system deployment adjustment system, comprising: a first physical server node, a second physical server node, a first load balancing node and a second load balancing node; keepalived components are installed on the first and second load balancing nodes, which complete the traffic distribution task in master-standby mode;
the method comprises the steps that a load balancing node which currently executes a flow distribution task in a first load balancing node and a second load balancing node periodically obtains a current flow distribution weight of each physical server node in the LVS system; judging whether the difference value between the current flow distribution weight of the physical server node and the corresponding flow distribution weight in the last storage is larger than a preset updating write weight; if the current flow distribution weight value is larger than the preset flow distribution weight value, updating and writing the current flow distribution weight value of the physical server node into a load balancing node which currently executes a flow distribution task; and executing a flow distribution task according to the flow distribution weight of the physical server node which is updated and written last time.
The technical scheme adopted by the invention comprises the following technical effects:
1. The method effectively solves the problems in the prior art that the risk of load balancing node failure is high during LVS system construction and that the load balancing scheduling algorithm is inefficient and prone to crashing; it effectively improves the reliability of LVS system deployment adjustment and the adjustment efficiency of load balancing, and avoids crashes of RS server nodes.
2. In the technical scheme of the invention, keepalived components are installed in the first and second load balancing nodes, which work in master-standby mode; when one load balancing node goes down, the other can take over in time, improving the load balancing reliability of the LVS system as well as the user experience.
3. In the technical scheme of the invention, the load balancing node currently executing the traffic distribution task does so according to the traffic distribution weights written at the last update; a physical server node's current weight is determined by its current load information, so the weight can be adjusted as the node's load changes. When the load balancing node distributes traffic, it can therefore adjust dynamically to the current load of each physical server, improving the accuracy of load balancing adjustment.
4. The method judges whether the difference between a physical server node's current traffic distribution weight and its last stored weight exceeds the preset update-write threshold, and only then writes the new weight into the load balancing node executing the traffic distribution task. Whether to write is thus decided by the node's load change, which avoids writing new weights for all physical server nodes in every period, prevents overloading the load balancer, and improves its adjustment reliability.
5. In the technical scheme of the invention, if any of a physical server node's CPU, memory or network bandwidth utilization exceeds the preset utilization threshold, the node's traffic distribution weight is set to 0. This prevents the node from being overloaded or even going down, and improves the reliability of load balancing adjustment by the load balancer (the load balancing node currently executing the traffic distribution task).
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
In order to more clearly illustrate the embodiments or technical solutions in the prior art of the present invention, the drawings used in the description of the embodiments or prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without any creative effort.
Fig. 1 is a schematic structural diagram of an OSI network model in a first embodiment of the present invention;
fig. 2 is a schematic diagram of a topology structure of an LVS network in the first embodiment of the present invention;
FIG. 3 is a schematic flow diagram of a process according to an embodiment of the present invention;
fig. 4 is a schematic flow chart illustrating step S1 in a method according to an embodiment of the present invention;
fig. 5 is a schematic flow chart illustrating step S6 in a method according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an apparatus according to a second embodiment of the present invention;
fig. 7 is a schematic structural diagram of the system in a third embodiment of the present invention.
Detailed Description
In order to clearly explain the technical features of the present invention, the following detailed description of the present invention is provided with reference to the accompanying drawings. The following disclosure provides many different embodiments, or examples, for implementing different features of the invention. To simplify the disclosure of the present invention, the components and arrangements of specific examples are described below. Furthermore, the present invention may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. It should be noted that the components illustrated in the figures are not necessarily drawn to scale. Descriptions of well-known components and processing techniques and procedures are omitted so as to not unnecessarily limit the invention.
Example one
OSI (Open System Interconnection), generally called the OSI reference model, is the network interconnection model studied by the ISO (International Organization for Standardization) in 1985. To make network applications more widely interoperable, ISO introduced the OSI reference model and recommended that all companies use this specification to control their networks, so that all companies would follow the same specifications and be able to interconnect.
As shown in fig. 1, OSI defines a seven-layer framework of network interconnection (physical layer, data link layer, network layer, transport layer, session layer, presentation layer, application layer), i.e., ISO open interconnection system reference model.
Each layer implements its own functions and protocols and communicates with adjacent layers through interfaces. The OSI service definitions specify the services provided by each layer: a layer's service is a capability of that layer and the layers below it, provided to the layer above through an interface.
When host A wants to send a packet to host B, A encapsulates the packet layer by layer, each layer adding its own header: the transport layer mainly adds the source and destination port numbers, the network layer the source and destination IP addresses, and the data link layer the source and destination MAC addresses. Assuming host A and host B are in the same network segment, transmission proceeds as follows:
1) Since A knows only B's IP address and not its MAC address, and a layer-2 switch cannot forward data by IP address, host A sends an ARP (Address Resolution Protocol) broadcast. ARP is a TCP/IP protocol that obtains a physical address from an IP address: the host broadcasts an ARP request containing the target IP address to all hosts on the local area network and determines the target's physical address from the reply; after the reply is received, the IP-to-physical-address mapping is stored in a local ARP cache for a certain time, so the next request can query the cache directly and save resources. In the broadcast, the source IP address is host A's IP and the source MAC address is host A's MAC; the target IP address is host B's IP, the target MAC address is FF:FF:FF:FF:FF:FF, and the queried address is host B's MAC address.
2) When the switch receives the broadcast frame through port F1, it forwards it to all other ports and adds the frame's source MAC address (i.e., host A's MAC address) to its MAC address table.
3) After receiving the ARP broadcast frame, host B compares the target IP with its own; on a match, it replies to host A's ARP request frame, using host A's MAC address as the target MAC address.
4) After the switch receives the reply frame of host B through port F2, the switch compares the destination MAC address of the frame with its own MAC address table, finds that the port corresponding to the frame is F1, and forwards the frame to the F1 port (if the destination MAC address of the data frame does not exist in the table, it will be forwarded to all the other ports except the source port), and at the same time adds the source MAC address of the frame (i.e., the MAC address of host B) to its own MAC address table.
5) After receiving the reply frame of the host B, the host A obtains the MAC address of the host B, so that the information is stored in a local ARP cache, and meanwhile, the MAC address of the host B is used as a target address to package data to be transmitted into frames and send the frames.
6) The switch again receives the data frame for host a, finds that the destination MAC address is that of host B, and the port to which that address corresponds is F2, and forwards the data to F2 port.
7) The host B successfully receives the data sent by the host A.
In an actual production environment, vendors often use server clusters to handle large numbers of requests, and due to factors such as user habits, load imbalance often occurs in a server cluster. Consider the following scenario:
there are three cells with different work and rest time, which are working in the daytime, working at night and working in idle time. The three cells are respectively bound with a server to use a certain network service. If load balancing is not used, each cell only fixedly uses the server bound with the cell, so that one server is almost idle in one day, waste is caused, and one server always bears huge access pressure, and the risk of downtime is increased.
LVS (Linux Virtual Server) is a Linux virtual server that works at the network layer; it is now integrated into the Linux kernel as a module and implements an IP-based load-balanced scheduling scheme for data requests inside the kernel. Because it works at the network layer, LVS is far more efficient than other load balancing schemes such as DNS (Domain Name System) round-robin resolution, application-layer load scheduling, or client-side scheduling.
The network topology of LVS is shown in fig. 2. The load balancing scheduler is the core of LVS: it internally maintains the real IP addresses (RIPs) of all real servers (RS, i.e. physical servers) and exposes only one virtual IP (VIP) to clients, on which it receives all client requests. After receiving a client request, the load balancer distributes the request packets among the back-end real servers through a balancing algorithm, and the RS servers process and respond to the requests.
In the packet sent by the client, the source IP is the client's local IP and the destination IP is the VIP, so the client accepts and continues to process a response packet only if its source IP is the VIP; otherwise the packet is discarded. To let the client and the RS communicate normally, LVS has the following three operation modes:
(1) VS/NAT (Virtual Server in Network Address Translation mode):
when a client accesses a VIP, a request message reaches a load balancer, the load balancer selects a server from a group of RSs according to a scheduling algorithm, the target address of the message is rewritten into the address of the selected server, the target port of the message is rewritten into the corresponding port of the selected server, and finally the modified message is sent to the selected server. Meanwhile, a connection table is maintained in the load balancer, the connection is recorded in the table, when the next message of the connection arrives, the address and the port of the originally selected server can be obtained from the connection table, the same rewriting operation is carried out, and the message is transmitted to the originally selected server. When the response message from the real server passes through the load balancer, the overload balancer changes the source address and the source port of the message into the VIP and the corresponding port, and then sends the message to the user.
(2) VS/TUN (Virtual Server via IP Tunneling, implementing Virtual Server through IP tunnel mode) mode:
in the VS/NAT mode, both request and response data must pass through the load balancer, so when the number of RSs is large, the load balancer becomes the bottleneck of the whole system. Most network services share the characteristic that a request message carries very little data while the response message often contains a large amount of data; since the bandwidth of the load balancing server is limited, when many RSs return their response data through the balancer's single link, a system performance bottleneck may occur.
The VS/TUN mode mainly solves this bottleneck problem; its message forwarding method is different. The load balancer dynamically selects a server according to the load condition of each RS, encapsulates the request message inside another IP message, and forwards the encapsulated new IP message to the selected server. After receiving it, the RS decapsulates the message to obtain the original message whose destination address is the VIP; finding that the VIP address is configured on its local IP tunnel device, the server processes the request and returns the response message directly to the user according to its routing table.
In the VS/TUN mode, the target address of the request message is VIP, and the source address of the response message is also VIP, so that the response message can be directly returned to the user without any modification.
(3) VS/DR (Virtual Server via Direct Routing) mode:
compared with the previous two modes, the message forwarding method in the DR mode is different. The load balancer dynamically selects a server according to the load condition of each RS; it neither modifies nor encapsulates the IP message, but directly rewrites the destination MAC address of the data frame to the MAC address of the selected server, and then sends the modified data frame on the local area network of the server group.
Because the destination MAC address of the data frame is the MAC address of the selected server, the RS receives the data frame and can extract the IP message; when the server finds that the destination address of the message (the VIP) is on its local network device, it processes the message and then returns the response message directly to the user according to its routing table. However, this mode has the following two requirements: 1. the load balancer and the back-end RSs must be in the same local area network, otherwise the load balancer cannot reach the RSs by MAC address; 2. each RS must configure the VIP internally in order to be able to process messages from the load balancer.
As shown in fig. 3, the present invention provides a keepalived-based LVS system deployment adjusting method, including:
s1, constructing and deploying an LVS system, and installing keepalived components on a first load balancing node and a second load balancing node in the constructed LVS system, wherein the first load balancing node and the second load balancing node complete the flow distribution task in a master-slave mode;
s2, the load balancing node which executes the flow distribution task at present periodically obtains the current flow distribution weight of each physical server node in the LVS system;
s3, judging whether the difference value between the current flow distribution weight of the physical server node and the last stored flow distribution weight is larger than the preset updating writing weight, if so, executing the step S4; if the judgment result is no, executing step S5;
s4, updating and writing the current flow distribution weight of the physical server node into the load balancing node which currently executes the flow distribution task;
s5, temporarily not writing the current flow distribution weight of the physical server node into the load balancing node which currently executes the flow distribution task, and waiting for the flow distribution weight of the physical server node in the next period;
and S6, the load balancing node executing the flow distribution task executes the flow distribution task according to the flow distribution weight of the physical server node which is updated and written last time.
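As an illustration, the periodic update logic of steps S2-S6 above can be sketched in Python as follows (the function and variable names are hypothetical; the patent does not prescribe an implementation language or API):

```python
# Sketch of the S2-S6 update loop. Names such as should_update and
# pick_node are illustrative, not from the patent.

def should_update(new_weight, old_weight, threshold):
    """S3: update only if the weight moved by more than the preset threshold."""
    return abs(new_weight - old_weight) > threshold

def update_cycle(current_weights, stored_weights, threshold):
    """One acquisition period (S2): current_weights maps node id -> F(Si_new)."""
    for node_id, new_weight in current_weights.items():
        old = stored_weights.get(node_id, 0.0)
        if should_update(new_weight, old, threshold):
            stored_weights[node_id] = new_weight   # S4: write to the balancer
        # else S5: keep the old weight until the next period
    return stored_weights

def pick_node(stored_weights):
    """S6/S62: prefer the node with the largest stored weight."""
    return max(stored_weights, key=stored_weights.get)
```

For example, with a threshold of 1.0, a node whose weight drifts from 4.0 to 5.2 is rewritten, while a node whose weight is unchanged is left alone until the next period.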
As shown in fig. 4, step S1 specifically includes:
s11, creating a plurality of virtual machine nodes, wherein the first virtual machine node is a first load balancer node, the second virtual machine node and the third virtual machine node are physical server nodes, and the fourth virtual machine node is a second load balancer node;
s12, respectively configuring target addresses of a first virtual machine node, a second virtual machine node and a third virtual machine node based on the LVS;
and S13, setting the first load balancing node as a main load balancing node, setting the second load balancing node as a standby load balancing node, and taking over the traffic distribution task by the second load balancing node when the first load balancing node is down.
In step S11, VMware (a virtualization service) is used to create three virtual machine nodes, named LVS_DR_Node01 (hereinafter node1, i.e. the first virtual machine node, IP: 10.180.180.12), LVS_DR_Node02 (hereinafter node2, i.e. the second virtual machine node, IP: 10.180.180.13) and LVS_DR_Node03 (hereinafter node3, i.e. the third virtual machine node, IP: 10.180.180.14). Node1 serves as the load balancer, namely the first load balancing node; node2 and node3 serve as back-end real servers RS, i.e. the second and third virtual machine nodes are the physical server nodes. VMware is then used to create an LVS_DR_Node04 node (hereinafter node04, i.e. the fourth virtual machine node, IP: 10.180.180.15), and node1 and node04 serve as two highly available load balancers, i.e. the first load balancing node and the second load balancing node. The second virtual machine node serves as the first physical server node, and the third virtual machine node serves as the second physical server node. The LVS system in DR mode is thus constructed and deployed.
In step S12, a VIP is set on node1 (the first load balancing node). If the VIP address is 10.180.180.100, the command ifconfig ens33:1 10.180.180.100/24 needs to be executed on node1 to configure a sub-interface of the ens33 network card. Since a DR model is being built, the VIP must also be configured on node2 (the first physical server node) and node3 (the second physical server node); but because the public VIP is already configured on the first load balancing node1, the VIPs on node2 and node3 should be hidden from the outside (clients) and visible only internally (to the physical servers and load balancers). First, after an RS configures the VIP, it must be prevented from announcing its own VIP address when it receives an ARP request; second, when the RS sends an ARP request, the source IP address must not be the VIP address. These requirements can be met with the following two kernel configuration items:
1) arp_ignore: controls whether the system on the RS server returns an ARP response when it receives an external ARP request. The options for this parameter are as follows:
0 (default): respond to an ARP request for any local IP address received on any network card (including loopback addresses), regardless of whether the destination IP is configured on the receiving card.
1: respond only to ARP requests whose destination IP address is a local address on the receiving network card.
2: respond only to ARP requests whose destination IP address is a local address on the receiving network card and whose source IP is in the same network segment as the receiving card.
3: if the scope of the local address matching the IP requested in the ARP packet is host, do not respond; if the scope is global or link, respond.
4-7: reserved and unused.
8: do not respond to any ARP request.
Here, arp_ignore should be set to 1, i.e. the ens33 network card on the RS responds only when the destination IP address of a received ARP request is its own network card address.
2) arp_announce: controls how the system selects the source IP address of the ARP request packet when it sends an ARP request. The options for this parameter are as follows:
0 (default): the IP address on any network card may be used as the source IP of the ARP request.
1: avoid, as far as possible, using local addresses that do not belong to the subnet of the sending network card as the source IP address of the ARP request.
2: ignore the source IP address of the IP data packet and select the most appropriate local address on the sending network card as the source IP address of the ARP request.
Here, arp_announce should be set to 2; when the RS sends an ARP request through the ens33 network card, the source IP address uses only the address of ens33, so the VIP address is not announced.
In addition to the above configuration, so that the RS does not expose the VIP address when sending ARP requests, the VIP should be configured on the loopback network card lo. The subnet mask must be configured as 255.255.255.255; otherwise, data sent from the node would be routed through the lo loopback card back to the local machine and could not be sent out.
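The RS-side configuration described above can be collected into the following shell commands (a sketch using the addresses from the text; run as root on node2 and node3; the conf/all entries are a common companion setting and an assumption here, since the text only discusses the receiving card):

```shell
# RS-side ARP suppression and VIP setup for LVS-DR (values from the text).
VIP=10.180.180.100

# Respond to ARP only for addresses owned by the receiving card (arp_ignore=1)
# and use the sending card's own address as the ARP source (arp_announce=2).
echo 1 > /proc/sys/net/ipv4/conf/ens33/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/ens33/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce

# Bind the VIP to the loopback card with a /32 mask so it is never announced
# and locally generated traffic is not looped back incorrectly.
ifconfig lo:0 "$VIP" netmask 255.255.255.255 up
```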
After all of the above configurations are completed, the LVS configuration is next performed on the first load balancing node1. In Linux, LVS has been integrated at the kernel level and is administered through the ipvsadm module. First, the command yum install ipvsadm installs the ipvsadm tool on the first load balancing node1; then the command ipvsadm -A -t 10.180.180.100:80 -s rr creates a new load balancing rule and designates the scheduling algorithm as round-robin. Then, the following commands are executed to add the two RSs, each with a polling weight of 1:
ipvsadm -a -t 10.180.180.100:80 -r 10.180.180.13 -g -w 1
ipvsadm -a -t 10.180.180.100:80 -r 10.180.180.14 -g -w 1
after the addition, the command ipvsadm -Ln is executed to check the result. Finally, the httpd service (the main program of the hypertext transfer protocol (HTTP) server) is started on the corresponding port of each of the two RSs; after the page contents are made to distinguish the two RSs, entering http://10.180.180.100:80 in a browser and refreshing continuously shows httpd pages from the two RSs in turn. Executing the command ipvsadm -Lnc shows the connections that the first load balancing node1 has handled and the real address to which each connection was forwarded.
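Collected in one place, the director-side setup described above looks roughly like this (a sketch; the -Ln and -Lnc listing flags are the usual spellings of the listing commands, reconstructed from the text):

```shell
# Director-side (node1) LVS-DR setup sketch, consolidating the steps above.
yum install -y ipvsadm

# Bind the VIP to a sub-interface of ens33.
ifconfig ens33:1 10.180.180.100/24 up

# Create the virtual service on the VIP with round-robin scheduling.
ipvsadm -A -t 10.180.180.100:80 -s rr

# Add the two real servers in direct-routing (-g) mode, weight 1 each.
ipvsadm -a -t 10.180.180.100:80 -r 10.180.180.13 -g -w 1
ipvsadm -a -t 10.180.180.100:80 -r 10.180.180.14 -g -w 1

# Inspect the rule table and the handled connections.
ipvsadm -Ln
ipvsadm -Lnc
```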
Although the above steps complete the construction of the LVS system in DR mode, a problem remains: a single-machine load balancer presents a single point of failure. Once the load balancer goes down, client requests can no longer be forwarded and the entire service becomes unavailable to clients, which is unacceptable. Therefore, the load balancer must be made highly available: when one load balancer goes down, the other takes over in time, the switch is transparent to the clients, and the whole system can still provide services normally.
Thus, in step S13, the keepalived service component is installed on both the first load balancing node01 and the second load balancing node04. Keepalived is a lightweight high-availability solution under Linux. It serves the same purpose as Heartbeat (a component of the Linux-HA project that implements high-availability cluster systems) and RoseHA (shared-storage dual-machine hot-standby software); compared with Heartbeat, Keepalived mainly achieves high availability through virtual router redundancy. Although its functions are less extensive, it is very simple to deploy and use, and all configuration can be completed with a single configuration file.
Then, the keepalived.conf files of the first load balancing node01 and the second load balancing node04 are configured respectively to implement the Virtual Router Redundancy Protocol configuration. The two load balancers exist in a master-backup relationship: the first load balancing node01 is the MASTER, and the second load balancing node04 is configured as BACKUP, i.e. the standby, indicating that node04 automatically takes over after node01 crashes. The virtual_ipaddress block configures the VIP and subnet mask on the ens33:1 sub-interface, specifically: 10.180.180.100/24 dev ens33 label ens33:1.
The addresses of the two real servers are 10.180.180.13:80 and 10.180.180.14:80 respectively. The HTTP_GET configuration indicates that the keepalived component performs health checks on the two back-end real servers (physical servers) in this way; if one of them goes down, keepalived promptly operates the ipvsadm module to remove it from the real server list.
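A keepalived.conf along the lines described might look as follows (a sketch: the VIP, real-server addresses, DR mode and MASTER/BACKUP roles come from the text, while virtual_router_id, priority, the timers and the health-check path are assumed typical values):

```
vrrp_instance VI_1 {
    state MASTER                  # BACKUP on node04
    interface ens33
    virtual_router_id 51          # assumed; must match on both nodes
    priority 100                  # lower value, e.g. 90, on the backup
    advert_int 1
    virtual_ipaddress {
        10.180.180.100/24 dev ens33 label ens33:1
    }
}

virtual_server 10.180.180.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP

    real_server 10.180.180.13 80 {
        weight 1
        HTTP_GET {
            url {
                path /
            }
            connect_timeout 3
        }
    }

    real_server 10.180.180.14 80 {
        weight 1
        HTTP_GET {
            url {
                path /
            }
            connect_timeout 3
        }
    }
}
```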
After both load balancers (the first and second load balancing nodes) have configured the keepalived component, both start the keepalived service. At this point the sub-interface ens33:1 is observed to be automatically configured on the first load balancing node01, and ipvsadm is seen to be configured with the two real server entries. Since the second load balancing node04 is the standby machine, while the first load balancing node01 is alive node04 does not configure the ens33:1 network card, but it does configure the real server entries of ipvsadm.
At this time, when the VIP is accessed through a client browser and continuously refreshed, page contents from node02 and node03 can still be seen in turn, all forwarded by the first load balancing node01. When the keepalived service on the first load balancing node01 is stopped, the ens33:1 network card is automatically registered by the second load balancing node04 acting as the standby machine, which takes over the traffic distribution task. This is the process of building the highly available DR-mode LVS system based on keepalived.
In step S2, the load balancing node currently executing the traffic distribution task periodically obtains the current traffic distribution weight of each physical server node in the LVS system; the load balancing nodes currently executing the traffic distribution tasks are generally defaulted to be first load balancing nodes, and when the first load balancing nodes are down or have faults, the second load balancing nodes execute the traffic distribution tasks; the obtaining period may be 1min or 1 hour, or may be 1 day, and may be flexibly adjusted according to the actual situation, and the present invention is not limited herein.
The current traffic allocation weight of the physical server node may be determined by current load information of the physical server node. The current flow distribution weight of the physical server node is calculated and determined by the corresponding physical server node according to the current load information, and then the current flow distribution weight is sent to the load balancing node executing the flow distribution task by the physical server node. Specifically, the load information of the physical server node may include: the utilization rate of a CPU in the physical server node, the utilization rate of a memory in the physical server node and the utilization rate of network bandwidth in the physical server node.
Further, the determination of the current traffic distribution weight of the physical server node by the current load information of the physical server node is specifically that:
F(Si_new) = A × (Kcpu × (1 − CPU_USEnew) + Kmem × (1 − MEM_USEnew) + Kband × (1 − BAND_USEnew))
wherein F(Si_new) is the current flow distribution weight of the physical server node; CPU_USEnew is the current utilization rate of the CPU in the physical server node, MEM_USEnew is the current utilization rate of the memory in the physical server node, and BAND_USEnew is the current utilization rate of the network bandwidth in the physical server node; Kcpu, Kmem and Kband are the influence factors of the CPU, memory and network bandwidth utilization rates, respectively, on the weight assigned to the physical server node, and the sum of Kcpu, Kmem and Kband is 1; A is an adjustment reference coefficient for the weight change.
Preferably, if any one of the current utilization rate of the CPU in the physical server node, the current utilization rate of the memory in the physical server node, and the current utilization rate of the network bandwidth in the physical server node is greater than the preset utilization rate threshold, the current traffic allocation weight of the physical server node is 0.
The adjustment reference coefficient A for weight change can be set flexibly according to actual conditions; for example, when A is set to 10, the value range of F(Si) is [0, 10]. According to the formula, when all three utilization rates of a physical server node reach 1, the calculated weight is 0, and the load balancer node no longer assigns tasks to that real server. However, considering actual server behaviour, when any one of the three parameters reaches the preset utilization threshold (e.g. 0.9), the physical server node is considered fully loaded regardless of the state of the other two parameters, and the current traffic allocation weight of the corresponding physical server node is set to 0, preventing the physical server node from being overloaded or even going down.
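A minimal sketch of this weight computation, including the full-load cutoff (the influence factors 0.4/0.3/0.3 and the threshold 0.9 are example values consistent with the text, not prescribed by it):

```python
def traffic_weight(cpu_use, mem_use, band_use,
                   k_cpu=0.4, k_mem=0.3, k_band=0.3,
                   a=10.0, full_load=0.9):
    """Compute F(Si_new) as defined above; parameter names are illustrative.

    If any single utilization exceeds the preset threshold, the node is
    treated as fully loaded and receives weight 0.
    """
    if max(cpu_use, mem_use, band_use) > full_load:
        return 0.0
    return a * (k_cpu * (1 - cpu_use)
                + k_mem * (1 - mem_use)
                + k_band * (1 - band_use))
```

An idle node thus gets the maximum weight A, a node at 50% on all three metrics gets A/2, and a node with any metric past the threshold gets 0 regardless of the others.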
In steps S3-S5, it is determined whether the difference between the current traffic distribution weight of the physical server node and the traffic distribution weight stored last time is greater than the preset update write weight, where the traffic distribution weight stored last time is the weight most recently written to the load balancing node currently executing the traffic distribution task. The condition is |F(Si_new) − F(Si_old)| > A × B, wherein F(Si_new) is the current traffic distribution weight of the physical server node and F(Si_old) is the traffic distribution weight of the corresponding physical server node when it was last stored.
The preset update write weight is determined by the load information of the physical server; specifically, it is the product of the adjustment reference coefficient A of weight change and a preset update write weight factor B, where the preset update write weight factor B is:
B = Kcpu × |CPU_USEnew − CPU_USEold| + Kmem × |MEM_USEnew − MEM_USEold| + Kband × |BAND_USEnew − BAND_USEold|
wherein CPU_USEnew is the current utilization rate of the CPU in the physical server node and CPU_USEold is the CPU utilization rate of the corresponding physical server node at the last storage; MEM_USEnew is the current utilization rate of the memory in the physical server node and MEM_USEold is the memory utilization rate at the last storage; BAND_USEnew is the current utilization rate of the network bandwidth in the physical server node and BAND_USEold is the network bandwidth utilization rate at the last storage. The value range of B is [0, 1].
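The preset update write weight A × B and the write decision of steps S3-S5 can be sketched as follows (illustrative names; the influence factors 0.4/0.3/0.3 are example values):

```python
def update_factor(cpu_new, cpu_old, mem_new, mem_old, band_new, band_old,
                  k_cpu=0.4, k_mem=0.3, k_band=0.3):
    """Compute B in [0, 1] from the change in load since the last write."""
    return (k_cpu * abs(cpu_new - cpu_old)
            + k_mem * abs(mem_new - mem_old)
            + k_band * abs(band_new - band_old))

def needs_write(f_new, f_old, a, b):
    """Steps S3-S5: write the new weight only if it moved by more than A*B."""
    return abs(f_new - f_old) > a * b
```

With A = 10 and a CPU change of 0.2 (the other metrics unchanged), B = 0.08, so a weight shift of 1.0 triggers a write while a shift of 0.5 does not.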
It should be noted that, in the embodiment of the present invention, the obtaining, the determining, and the updating and writing of the load balancing node that executes the traffic distribution task are all performed for a single physical server node, that is, the load balancing node that executes the traffic distribution task obtains the current traffic distribution weight of the first physical server node, and performs the determining operation, and if the determination result is yes, the current traffic distribution weight of the first physical server node is written into the load balancing node that executes the traffic distribution task; and the load balancing node executing the flow distribution task acquires the current flow distribution weight of the second physical server node, executes judgment operation, and writes the current flow distribution weight of the second physical server node into the load balancing node executing the flow distribution task if the judgment result is yes. When the load balancing node executing the traffic distribution task performs the acquiring, determining, updating and writing, the load balancing node may perform simultaneous multi-process or multi-thread operation, or may perform single-process or single-thread sequential operation, which is not limited herein.
In step S6, as shown in fig. 5, step S6 specifically includes:
s61, the current load balancing node executing the flow distribution task obtains the flow distribution weight of the physical server node which is updated and written last time;
and S62, determining the physical server node with the maximum traffic distribution weight, and preferentially distributing the traffic distribution task to be executed to the physical server node with the maximum traffic distribution weight.
In step S61, if in steps S3-S5, the current traffic distribution weight update of the physical server node is written into the load balancing node currently executing the traffic distribution task; the flow distribution weight of the physical server node which is updated and written last time is the current flow distribution weight of the physical server node which is updated and written; if the current traffic distribution weight of the physical server node is not updated and written into the load balancing node currently executing the traffic distribution task, the traffic distribution weight of the physical server node which is updated and written last time is the traffic distribution weight of the physical server node which is updated and written (stored) last time into the load balancing node (the load balancing node currently executing the traffic distribution task).
In step S62, a physical server node with the largest traffic allocation weight is determined from the plurality of physical server nodes, and the traffic distribution task to be executed is preferentially allocated to the physical server node with the largest traffic allocation weight.
It should be noted that, in this embodiment, the value taking condition of the preset update weight may be selected according to an actual condition, and if hardware and software resources of a load balancer (a load balancing node currently executing a traffic distribution task) are sufficient or a period for the load balancer to obtain a traffic distribution weight of a physical server node is short, the value of the preset update write weight factor B may be small; if hardware and software resources of a load balancer (a load balancing node which executes a traffic distribution task at present) are insufficient or the period for the load balancer to acquire the traffic distribution weight of the physical server node is long, the value of the preset update write weight factor B can be increased appropriately.
The method effectively solves the problems that in the prior art, the risk is high when the load balancing nodes are in failure in the LVS system building process, and the scheduling efficiency of the load balancing scheduling algorithm is low and is easy to crash, effectively improves the reliability of LVS system deployment and adjustment and the adjustment efficiency of load balancing, and avoids the crash of RS server nodes.
According to the technical scheme, keepalived components are respectively arranged in the first load balancing node and the second load balancing node, and the two nodes work in active/standby mode; when one load balancing node goes down, the other can take over in time, which improves the load balancing reliability of the LVS system and also improves the user experience.
In the technical scheme of the invention, the load balancing node which executes the flow distribution task at present executes the flow distribution task according to the flow distribution weight of the physical server node which is updated and written last time; the flow distribution weight of the current physical server node is determined by the current load information of the physical server node, and the flow distribution weight can be adjusted according to the current load information of the physical server node, so that when the load balancing node executing the flow distribution task distributes flow, the flow can be dynamically adjusted according to the current load condition of the physical server, and the accuracy of load balancing adjustment is improved.
Whether the difference between the current flow distribution weight of the physical server node and the last stored flow distribution weight is greater than the preset update write weight is judged; if it is greater, the current flow distribution weight of the physical server node is updated and written into the load balancing node currently executing the flow distribution task. In this way, whether the current flow distribution weight of the physical server is written into the load balancing node executing the flow distribution task is determined according to the load change of the physical server node, which avoids updating and writing new flow distribution weights for all physical server nodes in every period and thereby overloading the load balancer, improving the adjustment reliability of the load balancer.
According to the technical scheme, if any one of the utilization rate of a CPU in a physical server node, the utilization rate of a memory in the physical server node and the utilization rate of network bandwidth in the physical server node is greater than a preset utilization rate threshold value, the traffic distribution weight of the physical server node is 0, the situation that the physical server node is overloaded or even down is avoided, and the reliability of load balancing adjustment of a load balancer (a load balancing node which is executing a traffic distribution task) is improved.
Example two
As shown in fig. 6, the technical solution of the present invention further provides a keepalived-based LVS system deployment adjusting device, including:
the method comprises the steps that a building module 101 is used for building and deploying an LVS, keepalive assemblies are installed on a first load balancing node and a second load balancing node in the built LVS, wherein the first load balancing node and the second load balancing node finish a flow distribution task in a master-slave mode;
the obtaining module 102 is configured to periodically obtain a current traffic distribution weight of each physical server node in the LVS system, by a load balancing node currently executing a traffic distribution task;
the judging module 103 judges whether a difference value between the current traffic distribution weight of the physical server node and the corresponding traffic distribution weight in the last storage is greater than a preset update write weight; if the current flow distribution weight value is larger than the preset flow distribution weight value, updating and writing the current flow distribution weight value of the physical server node into a load balancing node which currently executes a flow distribution task;
and the distribution module 104 executes the traffic distribution task by the load balancing node currently executing the traffic distribution task according to the traffic distribution weight of the physical server node which is updated and written last time.
If the current flow distribution weight value of the physical server node is updated and written into the load balancing node which currently executes the flow distribution task; the flow distribution weight of the physical server node which is updated and written last time is the current flow distribution weight of the physical server node which is updated and written; if the current traffic distribution weight of the physical server node is not updated and written into the load balancing node currently executing the traffic distribution task, the traffic distribution weight of the physical server node which is updated and written last time is the traffic distribution weight of the physical server node which is updated and written (stored) last time into the load balancing node (the load balancing node currently executing the traffic distribution task).
And determining the physical server node with the maximum traffic distribution weight value from the plurality of physical server nodes, and preferentially distributing the traffic distribution task to be executed to the physical server node with the maximum traffic distribution weight value.
The method effectively solves the problems that in the prior art, the risk is high when the load balancing nodes are in failure in the LVS system building process, and the scheduling efficiency of the load balancing scheduling algorithm is low and is easy to crash, effectively improves the reliability of LVS system deployment and adjustment and the adjustment efficiency of load balancing, and avoids the crash of RS server nodes.
According to the technical scheme, keepalived components are respectively arranged in the first load balancing node and the second load balancing node, and the two nodes work in active/standby mode; when one load balancing node goes down, the other can take over in time, which improves the load balancing reliability of the LVS system and also improves the user experience.
In the technical scheme of the invention, the load balancing node which executes the flow distribution task at present executes the flow distribution task according to the flow distribution weight of the physical server node which is updated and written last time; the flow distribution weight of the current physical server node is determined by the current load information of the physical server node, and the flow distribution weight can be adjusted according to the current load information of the physical server node, so that when the load balancing node executing the flow distribution task distributes flow, the flow can be dynamically adjusted according to the current load condition of the physical server, and the accuracy of load balancing adjustment is improved.
Whether the difference between the current flow distribution weight of the physical server node and the last stored flow distribution weight is greater than the preset update write weight is judged; if it is greater, the current flow distribution weight of the physical server node is updated and written into the load balancing node currently executing the flow distribution task. In this way, whether the current flow distribution weight of the physical server is written into the load balancing node executing the flow distribution task is determined according to the load change of the physical server node, which avoids updating and writing new flow distribution weights for all physical server nodes in every period and thereby overloading the load balancer, improving the adjustment reliability of the load balancer.
According to the technical scheme, if any one of the CPU utilization, the memory utilization or the network bandwidth utilization of a physical server node is greater than a preset utilization threshold, the traffic distribution weight of that node is set to 0. This prevents the physical server node from being overloaded or even going down, and improves the reliability of the load balancing adjustment performed by the load balancer (the load balancing node currently executing the traffic distribution task).
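The weight adjustment described above, combined with the formula later given in claim 5, can be sketched in a few lines of Python; the coefficient values below are illustrative assumptions (the patent only requires that Kcpu, Kmem and Kband sum to 1):

```python
def traffic_weight(cpu_use, mem_use, band_use,
                   k_cpu=0.5, k_mem=0.3, k_band=0.2,
                   a=100.0, use_threshold=0.9):
    """Traffic distribution weight of one physical server node.

    Implements F(S) = A * (Kcpu*(1-CPU) + Kmem*(1-MEM) + Kband*(1-BAND)),
    forced to 0 when any single utilization exceeds the preset threshold
    so that an overloaded node receives no new traffic.
    """
    if max(cpu_use, mem_use, band_use) > use_threshold:
        return 0.0
    return a * (k_cpu * (1 - cpu_use)
                + k_mem * (1 - mem_use)
                + k_band * (1 - band_use))

print(traffic_weight(0.4, 0.5, 0.2))    # lightly loaded node, weight ~61
print(traffic_weight(0.95, 0.1, 0.1))   # CPU over threshold -> 0.0
```

A lightly loaded node thus gets a large weight, while any single saturated resource drops the node out of rotation entirely.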
EXAMPLE III
As shown in fig. 7, the technical solution of the present invention further provides a keepalived-based LVS system deployment adjusting system, including: a first physical server node 201, a second physical server node 202, a first load balancing node 203 and a second load balancing node 204. Keepalived components are installed on both the first load balancing node 203 and the second load balancing node 204, and the two load balancing nodes complete the traffic distribution task in active/standby mode.
the load balancing node currently executing the traffic distribution task in the first load balancing node 203 and the second load balancing node 204 periodically obtains load information of each physical server node in the LVS system; adjusting the flow distribution weight of the physical server nodes according to the load information of each physical server node; determining whether to write the current flow distribution weight of each physical server node into a load balancing node which currently executes a flow distribution task according to the current load information of each physical server node and the load information of the previous period; and executing a flow distribution task according to the flow distribution weight of the physical server node which is updated and written last time.
If the current traffic distribution weight of a physical server node is written into the load balancing node currently executing the traffic distribution task, the most recently written weight for that node is this newly written value; if it is not written, the most recently written weight remains the value last written to (stored in) the load balancing node currently executing the traffic distribution task.
The physical server node with the largest traffic distribution weight is determined from the plurality of physical server nodes, and the traffic distribution task to be executed is preferentially distributed to that node.
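This selection step can be sketched as follows (hypothetical server ids and weights; in a real LVS deployment the kernel's `wrr`/`wlc` scheduler makes the equivalent choice from the configured weights):

```python
def pick_server(weights):
    """Return the id of the real server with the largest current
    traffic distribution weight; ties go to the first maximal entry."""
    if not weights:
        raise ValueError("no real servers registered")
    return max(weights, key=weights.get)

current = {"rs-1": 61.0, "rs-2": 0.0, "rs-3": 74.5}
print(pick_server(current))  # prints rs-3
```

Note that a node whose weight was zeroed by the overload cutoff (rs-2 here) is never preferred while any other node has a positive weight.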
This effectively solves the problems in the prior art that a load balancing node failure during LVS system construction carries high risk, and that load balancing scheduling algorithms have low scheduling efficiency and are prone to crashing; it effectively improves the reliability of LVS system deployment and adjustment and the efficiency of load balancing adjustment, and avoids crashes of the RS (real server) nodes.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, they do not limit the scope of the present invention; those skilled in the art can make various modifications and variations, without inventive effort, on the basis of the technical solution of the present invention.

Claims (10)

1. A keepalived-based LVS system deployment adjusting method is characterized by comprising the following steps:
constructing and deploying a Linux Virtual Server (LVS) system, and installing keepalived components on both a first load balancing node and a second load balancing node in the LVS system, wherein the first load balancing node and the second load balancing node complete a traffic distribution task in active/standby mode;
the load balancing node currently executing the traffic distribution task periodically obtains the current traffic distribution weight of each physical server node in the LVS system;
judging whether the difference between the current traffic distribution weight of a physical server node and the weight stored last time is greater than a preset update-write threshold; and if it is greater, writing the node's current traffic distribution weight into the load balancing node currently executing the traffic distribution task;
and the load balancing node currently executing the traffic distribution task executes the traffic distribution task according to the most recently written traffic distribution weight of each physical server node.
2. The keepalived-based LVS system deployment adjusting method according to claim 1, wherein constructing and deploying the LVS system and installing keepalived components on both the first load balancing node and the second load balancing node in the built LVS system specifically comprises:
creating a plurality of virtual machine nodes, wherein the first virtual machine node is the first load balancing node, the second and third virtual machine nodes are physical server nodes, and the fourth virtual machine node is the second load balancing node;
configuring target addresses for the first, second and third virtual machine nodes respectively, based on the LVS system;
and when the first load balancing node goes down, the second load balancing node takes over the traffic distribution task.
3. The keepalived-based LVS system deployment adjustment method according to claim 1, wherein the current traffic distribution weight of a physical server node is determined by the current load information of that physical server node.
4. The keepalived-based LVS system deployment adjustment method according to claim 3, wherein the load information of a physical server node includes: the CPU utilization, the memory utilization and the network bandwidth utilization of the physical server node.
5. The keepalived-based LVS system deployment adjustment method according to claim 4, wherein the current traffic distribution weight of the physical server node is determined from the current load information of the physical server node specifically as:
F(Si_new) = A · (Kcpu · (1 − CPU_USEnew) + Kmem · (1 − MEM_USEnew) + Kband · (1 − BAND_USEnew))
wherein F(Si_new) is the current traffic distribution weight of the physical server node; CPU_USEnew, MEM_USEnew and BAND_USEnew are the current utilization of the CPU, the memory and the network bandwidth in the physical server node, respectively; Kcpu, Kmem and Kband are the influence factors of the CPU utilization, the memory utilization and the network bandwidth utilization on the node's traffic distribution weight, and the sum of Kcpu, Kmem and Kband is 1; and A is an adjusting reference coefficient for weight change.
6. The keepalived-based LVS system deployment adjusting method according to claim 5, wherein if any one of the current CPU utilization, the current memory utilization or the current network bandwidth utilization of the physical server node is greater than a preset utilization threshold, the current traffic distribution weight of the physical server node is 0.
7. The keepalived-based LVS system deployment adjustment method according to claim 5, wherein the preset update-write threshold is determined by the load information of the physical server, specifically as the product of the adjusting reference coefficient A for weight change and a preset update-write weight factor B, where:
B = Kcpu · |CPU_USEnew − CPU_USEold| + Kmem · |MEM_USEnew − MEM_USEold| + Kband · |BAND_USEnew − BAND_USEold|
wherein B is the preset update-write weight factor; CPU_USEnew, MEM_USEnew and BAND_USEnew are the current utilization of the CPU, the memory and the network bandwidth in the physical server node; and CPU_USEold, MEM_USEold and BAND_USEold are the corresponding utilization values of that physical server node at the time of the last storage.
8. The keepalived-based LVS system deployment adjustment method according to claim 1, wherein the load balancing node currently executing the traffic distribution task executes the traffic distribution task according to the most recently written traffic distribution weights specifically as follows:
the load balancing node currently executing the traffic distribution task obtains the most recently written traffic distribution weight of each physical server node;
and the physical server node with the largest traffic distribution weight is determined, and the traffic distribution task to be executed is preferentially distributed to the physical server node with the largest traffic distribution weight.
9. A keepalived-based LVS system deployment adjusting device, characterized by comprising:
a building module, which constructs and deploys an LVS system and installs keepalived components on both a first load balancing node and a second load balancing node in the built LVS system, wherein the first load balancing node and the second load balancing node complete a traffic distribution task in active/standby mode;
an acquiring module, by which the load balancing node currently executing the traffic distribution task periodically acquires the current traffic distribution weight of each physical server node in the LVS system;
a judging module, which judges whether the difference between the current traffic distribution weight of a physical server node and the corresponding weight stored last time is greater than a preset update-write threshold, and if it is greater, writes the node's current traffic distribution weight into the load balancing node currently executing the traffic distribution task;
and a distribution module, by which the load balancing node currently executing the traffic distribution task executes the traffic distribution task according to the most recently written traffic distribution weights.
10. A keepalived-based LVS system deployment adjusting system, characterized by comprising: a first physical server node, a second physical server node, a first load balancing node and a second load balancing node; keepalived components are installed on both the first load balancing node and the second load balancing node, and the two load balancing nodes complete a traffic distribution task in active/standby mode;
the load balancing node currently executing the traffic distribution task among the first and second load balancing nodes periodically obtains the current traffic distribution weight of each physical server node in the LVS system; judges whether the difference between the current traffic distribution weight of a physical server node and the corresponding weight stored last time is greater than a preset update-write threshold; if it is greater, writes the node's current traffic distribution weight into the load balancing node currently executing the traffic distribution task; and executes the traffic distribution task according to the most recently written traffic distribution weights.
CN202110804895.4A 2021-07-16 2021-07-16 Keepallved-based LVS (Low Voltage differential Signaling) system deployment adjusting method, device and system Pending CN113709054A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110804895.4A CN113709054A (en) 2021-07-16 2021-07-16 Keepallved-based LVS (Low Voltage differential Signaling) system deployment adjusting method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110804895.4A CN113709054A (en) 2021-07-16 2021-07-16 Keepallved-based LVS (Low Voltage differential Signaling) system deployment adjusting method, device and system

Publications (1)

Publication Number Publication Date
CN113709054A true CN113709054A (en) 2021-11-26

Family

ID=78648703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110804895.4A Pending CN113709054A (en) 2021-07-16 2021-07-16 Keepallved-based LVS (Low Voltage differential Signaling) system deployment adjusting method, device and system

Country Status (1)

Country Link
CN (1) CN113709054A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114448985A (en) * 2021-12-28 2022-05-06 中国电信股份有限公司 Flow distribution method, system, electronic equipment and readable medium


Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080082227A (en) * 2007-03-08 2008-09-11 (주)에임투지 Request proportion apparatus in load balancing system and load balancing method
CN101815033A (en) * 2010-03-12 2010-08-25 成都市华为赛门铁克科技有限公司 Method, device and system for load balancing
CN102185779A (en) * 2011-05-11 2011-09-14 田文洪 Method and device for realizing data center resource load balance in proportion to comprehensive allocation capability
CN102244685A (en) * 2011-08-11 2011-11-16 中国科学院软件研究所 Distributed type dynamic cache expanding method and system supporting load balancing
CN103618778A (en) * 2013-11-21 2014-03-05 上海爱数软件有限公司 System and method for achieving data high concurrency through Linux virtual host
WO2017133291A1 (en) * 2016-02-02 2017-08-10 华为技术有限公司 Server cluster-based message generation method and load balancer
US20180343228A1 (en) * 2016-02-02 2018-11-29 Huawei Technologies Co., Ltd. Packet Generation Method Based on Server Cluster and Load Balancer
WO2018077238A1 (en) * 2016-10-27 2018-05-03 贵州白山云科技有限公司 Switch-based load balancing system and method
CN107995123A (en) * 2016-10-27 2018-05-04 贵州白山云科技有限公司 A kind of SiteServer LBS and method based on interchanger
CN106815059A (en) * 2016-12-31 2017-06-09 广州勤加缘科技实业有限公司 Linux virtual server LVS automates O&M method and operational system
CN108667878A (en) * 2017-03-31 2018-10-16 北京京东尚科信息技术有限公司 Server load balancing method and device, storage medium, electronic equipment
CN111641719A (en) * 2020-06-02 2020-09-08 山东汇贸电子口岸有限公司 Intranet type load balancing implementation method based on Openstack and storage medium
CN112199199A (en) * 2020-11-17 2021-01-08 广州珠江数码集团股份有限公司 Server load balancing distribution method
CN112866132A (en) * 2020-12-31 2021-05-28 网络通信与安全紫金山实验室 Dynamic load balancer and method for massive identification


Similar Documents

Publication Publication Date Title
US10171567B2 (en) Load balancing computer device, system, and method
CN109032755B (en) Container service hosting system and method for providing container service
CN111464592A (en) Load balancing method, device, equipment and storage medium based on microservice
US6397260B1 (en) Automatic load sharing for network routers
US11509581B2 (en) Flow-based local egress in a multisite datacenter
CN112042170B (en) DHCP implementation on nodes for virtual machines
US20210051211A1 (en) Method and system for image pulling
CN112333017B (en) Service configuration method, device, equipment and storage medium
CN110830574B (en) Method for realizing intranet load balance based on docker container
CN115086330A (en) Cross-cluster load balancing system
CN115686729A (en) Container cluster network system, data processing method, device and computer program product
CN112165502A (en) Service discovery system, method and second server
WO2023088924A1 (en) Prioritizing data replication packets in cloud environment
CN114143258B (en) Service agent method based on Open vSwitch under Kubernetes environment
CN113709054A (en) Keepallved-based LVS (Low Voltage differential Signaling) system deployment adjusting method, device and system
CN113254148A (en) Virtual machine migration method and cloud management platform
CN112637265A (en) Equipment management method, device and storage medium
CN114024971B (en) Service data processing method, kubernetes cluster and medium
CN115567383A (en) Network configuration method, host server, device, and storage medium
CN116192855A (en) Load balancing method, load balancing device, electronic equipment and computer readable storage medium
Zhang et al. Linux virtual server clusters
WO2018129957A1 (en) Vbng system multi-virtual machine load sharing method and vbng system device
CN114157708B (en) Control method and device for session migration and vBRAS
CN114466011B (en) Metadata service request method, device, equipment and medium
CN116684244A (en) Dual-machine high availability implementation method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination