CN113691389A - Configuration method of load balancer, server and storage medium

Configuration method of load balancer, server and storage medium

Info

Publication number
CN113691389A
CN113691389A (application CN202110758268.1A)
Authority
CN
China
Prior art keywords
virtual
virtual server
load balancer
server
network card
Prior art date
Legal status
Pending
Application number
CN202110758268.1A
Other languages
Chinese (zh)
Inventor
陈楼
姚欣伟
闫金龙
Current Assignee
Shenzhen Aijieyun Technology Co ltd
Original Assignee
Shenzhen Aijieyun Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Aijieyun Technology Co ltd
Priority to CN202110758268.1A
Publication of CN113691389A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0876 Aspects of the degree of configuration automation
    • H04L41/0886 Fully automatic configuration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083 Techniques for rebalancing the load in a distributed system
    • G06F2009/4557 Distribution of virtual machine instances; Migration and load balancing
    • H04L41/0893 Assignment of logical groups to network elements
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/56 Provisioning of proxy services

Abstract

The embodiments of the invention relate to the field of computer technology and disclose a configuration method of a load balancer, a server, and a storage medium. The disclosed configuration method is applied to a computing node that is communicatively connected to the load balancer and on which at least two virtual servers are deployed. For each virtual server, the following operations are performed: deploying a corresponding agent program on the virtual server and binding the agent program to the serial port device of the virtual server; and connecting a socket file on the computing node with the serial port device to form a communication channel between the computing node and the virtual server, so that the agent program can configure the virtual server as a back-end server of the load balancer. With the embodiments of the present application, the load balancer can be configured quickly and its performance is improved.

Description

Configuration method of load balancer, server and storage medium
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to a configuration method of a load balancer, a server and a storage medium.
Background
A cloud platform refers to a service that provides computing, networking, and storage capabilities based on hardware and software resources. A cloud platform can comprise a control node, a network node, and computing nodes, where multiple virtual servers are deployed on a computing node to provide network services for tenants. In a cloud platform, the load balancer usually adopts Linux Virtual Server (LVS) technology. LVS has four working modes which, ordered from high to low performance, are: Direct Routing (DR), Tunnel (TUN), Network Address Translation (NAT), and Full NAT. Except for Full NAT, the three higher-performance modes all require configuration on the back-end Real Server (RS). Typically, a virtual server on a computing node acts as the Real Server (RS). At present, only the tenant has access and modification permission to the RS, so in the traditional approach either the tenant configures the virtual server by itself according to documentation, or the tenant provides the account and password of the virtual server and operation and maintenance personnel configure it. Having the tenant configure the virtual server by itself is inefficient, and because the tenant is not an operation and maintenance professional, configuration errors are likely, leaving the load balancer unusable. Having the tenant provide the account and password of the virtual server easily leads to leakage of those credentials and does not protect the security of the virtual server rented by the tenant. To avoid these problems, the direct routing mode of LVS can instead be approximated by introducing a third-party mechanism that modifies the destination MAC of packets and adds corresponding forwarding rules, or the NAT mode of LVS can be approximated by introducing a third-party mechanism that modifies the source IP and destination IP of packets and adds corresponding forwarding rules.
However, because the layer-2 or layer-3 addresses of packets entering and leaving the back-end RS need to be modified, these two methods give the load balancer worse performance than the conventional manual configuration. In addition, because packet modification rules must be issued separately, problems arise when the interface of the back-end RS changes or after the back-end RS is live-migrated, leading to packet loss or even interrupted connections. Meanwhile, the packet modification rules have to be configured by a background service, and restarts of the background service, restarts of the physical node, and version changes of the modification rules all add new management work; furthermore, the switch is also required to support modifying the layer-3 or layer-4 addresses of packets. It can be seen that, in a cloud platform environment, this way of configuring a virtual server on a computing node as a back-end server of the load balancer is cumbersome, and the resulting load balancer performs poorly.
Disclosure of Invention
An object of embodiments of the present invention is to provide a method, a server, and a storage medium for configuring a load balancer, which can quickly configure the load balancer and improve the performance of the load balancer.
In order to solve the foregoing technical problem, in a first aspect, an embodiment of the present invention provides a method for configuring a load balancer, where the method is applied to a computing node, the computing node is in communication connection with the load balancer, and at least two virtual servers are deployed on the computing node, and the method includes: for each virtual server, the following operations are performed: deploying a corresponding agent program on the virtual server, and binding the agent program with the serial port equipment of the virtual server; and connecting the socket file on the computing node with the serial port equipment to form a communication channel between the computing node and the virtual server, so that the virtual server is configured as a back-end server of the load balancer by the agent program.
In a second aspect, an embodiment of the present invention further provides a server, including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the configuration method of the load balancer.
In a third aspect, the embodiment of the present invention further provides a computer-readable storage medium, which stores a computer program, and the computer program, when executed by a processor, implements the configuration method of the load balancer described above.
In the embodiment of the invention, a corresponding agent program is deployed on the virtual server and bound to the serial port device of the virtual server, and the serial port device in the virtual server is connected with the socket file on the computing node to form a communication channel between the computing node and the virtual server. After the computing node receives an instruction for updating the virtual server, it writes data through the socket file; because the serial port device is bound to the agent program, the agent program can read the data from the socket file through the serial port device and then configure the virtual server. On this basis, the computing node can issue instructions to the agent program, and the agent program configures the virtual server. Since the agent program can directly configure the virtual server as the back-end server of the load balancer, there is no need to obtain the account and password of the virtual server, which protects the security of the tenant information corresponding to the virtual server; at the same time, no manual configuration is required, which improves the configuration efficiency of the load balancer. Because packets do not need to be modified and no forwarding rules need to be added, the configuration steps are simplified, the probability of packet loss is reduced, and the performance of the load balancer is improved.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements; the figures are not drawn to scale unless otherwise specified.
FIG. 1 is a flow chart of a method of load balancing in an embodiment of the present application;
FIG. 2 is a flow diagram that illustrates configuring a virtual server in a DR mode in a method for load balancing, according to an embodiment;
FIG. 3 is an interaction diagram between components of the corresponding method of load balancing of FIG. 2;
FIG. 4 is a diagram illustrating a DR pattern in a method of load balancing according to one embodiment;
FIG. 5 is a flow diagram that illustrates configuring virtual IP addresses for virtual servers in a method for load balancing, according to an embodiment;
FIG. 6 is a flow diagram that illustrates detecting kernel parameters in the virtual server in a method for load balancing, according to an embodiment;
FIG. 7 is a flowchart illustrating detecting a parameter of a loopback network card in the virtual server in the method for load balancing according to an embodiment;
FIG. 8 is a flow diagram that illustrates persistently maintaining virtual IP addresses in a method of load balancing in one embodiment;
FIG. 9 is a flow diagram illustrating detection of a loopback network card in a method for load balancing according to an embodiment;
FIG. 10 is a flow diagram that illustrates the cancellation of load balancing configuration in a method for load balancing in one embodiment;
FIG. 11 is an interaction diagram that illustrates components of the method for load balancing, according to one embodiment;
FIG. 12 is a schematic structural diagram of a server in the embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the embodiments of the present invention are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in the embodiments to provide a better understanding of the present application; the technical solution claimed in the present application can nevertheless be implemented without these technical details and with various changes and modifications based on the following embodiments. The division into the following embodiments is for convenience of description and should not limit the specific implementation of the present invention; the embodiments may be combined with and refer to one another as long as no contradiction arises.
The load balancing method in the embodiments of the present application is applied to the computing nodes of a cloud platform. The cloud platform can comprise control nodes, network nodes, and computing nodes, and a computing node is communicatively connected to the network nodes and the control nodes respectively. The cloud platform can comprise at least two computing nodes, each of which hosts at least two virtual servers that provide network services for tenants. The network node supports communication between the virtual servers, for example by providing switch and routing functions; the control node records routing information and switch information of the network node, and also records information about the computing nodes, such as which computing node a virtual server is located on.
In the load balancing method in the embodiment of the present application, a processing flow for each virtual server is as shown in fig. 1:
step 101: and deploying a corresponding agent program on the virtual server, and binding the agent program with the serial port equipment of the virtual server.
Specifically, a corresponding agent program is deployed for the virtual server. The agent is a background service running inside the Guest OS of the virtual server, where the Guest OS is the operating system running in the virtual server. After the agent is deployed, it is bound to the serial port device of the virtual server. Because the operating systems and versions of virtual servers differ, the agent is adapted to different operating systems and versions, such as Windows, Linux, and UNIX.
Because the agent adapts to different operating systems and versions, it can be used in a wide range of scenarios; the adaptation only needs to be done once in advance, which reduces subsequent operation and maintenance work.
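As an illustration of how such an agent could work, the following minimal Python sketch reads commands from the guest side of the serial port device and writes back results. The device path, the line-oriented JSON protocol, and the handler names are assumptions made for this sketch and are not prescribed by the patent.
```python
import json

# Assumed guest-side path of the virtio serial port that the agent is bound to;
# the actual device name depends on how the serial device is defined for the VM.
SERIAL_DEVICE = "/dev/virtio-ports/org.example.lb-agent"

def handle_command(request):
    """Dispatch one command received from the compute node (illustrative only)."""
    if request.get("cmd") == "ping":
        return {"status": "ok"}
    return {"status": "error", "reason": "unknown command"}

def serve_forever():
    # Each line arriving on the serial port carries one JSON-encoded command
    # written by the compute node into the corresponding socket file.
    with open(SERIAL_DEVICE, "r+", buffering=1) as port:
        for line in port:
            line = line.strip()
            if not line:
                continue
            try:
                reply = handle_command(json.loads(line))
            except json.JSONDecodeError:
                reply = {"status": "error", "reason": "malformed request"}
            port.write(json.dumps(reply) + "\n")

if __name__ == "__main__":
    serve_forever()
```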
Step 102: and connecting the socket file on the computing node with the serial port equipment in the virtual server to form a communication channel between the computing node and the virtual server, so that the agent program can configure the virtual server as a back-end server of a load balancer.
Specifically, the serial port device in the virtual server is connected with a Socket file on the computing node to form a communication channel between the computing node and the virtual server. The virtual server and the computing node can thus communicate without relying on the network: if the computing node writes a command into the Socket file, the agent program in the virtual server obtains the command issued by the computing node by reading the serial port device, and after obtaining the command the agent calls the corresponding code to execute the operation the command requests. After the agent executes the operation, it writes the result back to the serial port device, and the computing node reads the result through the Socket file. On this basis, the computing node can send an instruction for configuring the back-end server to the virtual server through the communication channel, the agent performs the configuration operation on the virtual server so that the virtual server on the computing node is configured as a back-end server of the load balancer, and the configuration result is returned to the computing node.
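On the compute-node side, the channel can be exercised by writing to the UNIX socket file that the hypervisor associates with the virtual server's serial port device. The sketch below assumes the same line-oriented JSON protocol as the agent sketch above; the socket path shown in the example call is only an assumption.
```python
import json
import socket

def send_command(socket_path, command, timeout=5.0):
    """Write one JSON command into the serial-port socket file and read the reply.

    socket_path is the UNIX domain socket that the hypervisor associates with
    the virtual server's serial port device.
    """
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        sock.connect(socket_path)
        sock.sendall((json.dumps(command) + "\n").encode("utf-8"))
        reply = b""
        while not reply.endswith(b"\n"):
            chunk = sock.recv(4096)
            if not chunk:
                break
            reply += chunk
    return json.loads(reply.decode("utf-8"))

# Example (path is illustrative): verify that the channel to the agent is alive.
# send_command("/var/lib/libvirt/qemu/channel/target/vm01.lb-agent.sock",
#              {"cmd": "ping"})
```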
In the embodiment of the invention, a corresponding agent program is deployed on the virtual server and bound to the serial port device of the virtual server, and the serial port device in the virtual server is connected with the socket file on the computing node to form a communication channel between the computing node and the virtual server. After the computing node receives an instruction for updating the virtual server, it writes data through the socket file; because the serial port device is bound to the agent program, the agent program can read the data from the socket file through the serial port device and then configure the virtual server. On this basis, the computing node can issue instructions to the agent program, and the agent program configures the virtual server. Since the agent program can directly configure the virtual server as the back-end server of the load balancer, there is no need to obtain the account and password of the virtual server, which protects the security of the tenant information corresponding to the virtual server; at the same time, no manual configuration is required, which improves the configuration efficiency of the load balancer. Because packets do not need to be modified and no forwarding rules need to be added, the configuration steps are simplified, the probability of packet loss is reduced, and the performance of the load balancer is improved.
In one embodiment, after the communication channel between the computing node and the virtual server is formed, the process by which the agent configures the virtual server as the back-end server of the load balancer is shown in FIG. 2: after steps 101 and 102 are performed, step 103 is performed.
Step 103: and if the agent program acquires the configuration request issued by the computing node through the communication channel, the agent program configures the virtual server as a back-end server of the load balancer according to the configuration request, wherein the configuration request comprises the information that the virtual server is in a direct routing mode.
Specifically, when a tenant of the cloud platform adds a virtual server on the load balancer configuration page through the control interface provided by the control node, the control node notifies the network node to update its configuration according to the operation on the control interface, and after the network node completes the configuration update, the control node notifies the computing node to send a configuration request to the agent program. The configuration request is used to indicate that the virtual server should be configured as a back-end server of the load balancer. The configuration request includes information that the virtual server is in the direct routing mode, and may further include an instruction to add a virtual IP address. The process by which the agent configures the virtual server as a back-end server of the load balancer is described below with reference to the interaction diagram in FIG. 3.
Through the control interface, the tenant selects a virtual server in the cloud platform to add to a forwarding group, that is, selects the virtual server to be added as a back-end server of the load balancer so that it can serve user traffic.
Correspondingly, the control node executes the following steps in sequence:
step S0: and receiving information for adding the virtual server as a forwarding group.
Step S1: and verifying the validity of the information and storing the added information in a local database.
Specifically, verifying the validity of the information may include verifying an operation authority of the tenant to determine that the tenant has an authority to add the forwarding group, or verifying whether the virtual server to be added is available, and it can be understood that specific verification contents may be set according to actual service requirements, which is not limited by the present invention.
Step S2: and returning an adding result to the tenant through the control interface so as to respond to the adding operation of the tenant.
For example, information of "submission success" or "operation success" is displayed on the control interface.
Step S3: and sending an update request of the load balancing table to the network node of the tenant.
Specifically, the control node may send the information of the virtual servers to be added to the load balancer deployed on the network node and instruct it to add them to the load balancing table. The load balancing table records the back-end servers available to the load balancer; in other words, when forwarding user traffic, the load balancer selects the back-end server that will process the traffic from the back-end servers recorded in the table according to its load balancing policy. In one implementation, the information sent by the control node to the network node includes the virtual IP addresses of the virtual servers to be added, and the load balancer forwards user traffic to the corresponding back-end server based on these virtual IP addresses. In another implementation, the virtual IP corresponding to each virtual server to be added is generated by the load balancer on the network node and synchronized to the control node.
In one implementation, the control node may send an update request to the network node via a Remote Procedure Call (RPC).
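To make the role of the load balancing table concrete, the small Python sketch below shows a balancer picking a back end from the table for each request. The table entries and the round-robin policy are simplified assumptions; the patent only requires that some load balancing policy is applied.
```python
from itertools import cycle

# Minimal illustration of how the load balancing table is used: the balancer
# picks a back-end server for each request according to its policy.
load_balancing_table = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
_next_backend = cycle(load_balancing_table)

def pick_backend():
    """Return the back-end virtual server that should handle the next request."""
    return next(_next_backend)
```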
Correspondingly, after receiving the update request sent by the control node, the network node sequentially executes the following steps:
step S4: and updating the load balancing table, and completing corresponding deployment updating of load balancing by updating the keepalived configuration file and reloading the keepalived process.
Step S5: and returning the deployment result of load balancing to the control node.
In one implementation, the deployment result returned by the network node includes the virtual IP address corresponding to each virtual server to be added.
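Step S4 above mentions updating the keepalived configuration file and reloading the keepalived process. The following sketch shows, under assumptions, how a virtual_server section in DR mode with the added back ends could be rendered and keepalived reloaded; the template, file path, and service name are illustrative and not taken from the patent.
```python
import subprocess

# Assumed fragment of a keepalived configuration for an LVS virtual server in
# DR mode; the real configuration managed by the network node will be richer.
VS_TEMPLATE = """virtual_server {vip} {port} {{
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP
{real_servers}}}
"""

RS_TEMPLATE = """    real_server {ip} {port} {{
        weight 1
        TCP_CHECK {{
            connect_timeout 3
        }}
    }}
"""

def render_virtual_server(vip, port, backend_ips):
    real_servers = "".join(RS_TEMPLATE.format(ip=ip, port=port) for ip in backend_ips)
    return VS_TEMPLATE.format(vip=vip, port=port, real_servers=real_servers)

def update_and_reload(conf_path, vip, port, backend_ips):
    # Rewrite the managed configuration fragment, then reload keepalived so it
    # re-reads the configuration and applies the new back-end list.
    with open(conf_path, "w") as f:
        f.write(render_virtual_server(vip, port, backend_ips))
    subprocess.run(["systemctl", "reload", "keepalived"], check=True)

# Example (paths and addresses are illustrative):
# update_and_reload("/etc/keepalived/conf.d/lb_pool.conf",
#                   "192.0.2.10", 80, ["10.0.0.11", "10.0.0.12"])
```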
Correspondingly, after receiving the deployment result returned by the network node, the control node sequentially executes the following steps:
step S6: updating the database.
Specifically, the control node may update the record saved in step S1 according to the deployment result. Such as updating the virtual IP corresponding to the virtual server to be added, updating the state, and the like.
Step S7: the compute node is notified to send a configuration request to the agent.
Specifically, the control node determines the computing node where each virtual server to be added is located and calls the API of the computing service; the API of the computing service notifies the nova-compute service on each computing node, and the nova-compute service of each computing node sends the configuration request to the serial port device of the virtual server through the Socket file.
The configuration request may include a virtual IP address corresponding to each virtual server to be added.
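The patent does not fix a wire format for this configuration request; one plausible encoding, with field names that are purely illustrative, is sketched below.
```python
import json

# Purely illustrative payload: the field names and structure are assumptions,
# not part of the patent. The request tells the agent to configure the guest
# as a DR-mode back end for the given virtual IP address.
config_request = {
    "cmd": "configure_backend",
    "mode": "DR",            # the load balancer works in direct routing mode
    "vip": "192.0.2.10",     # virtual IP address to add on the loopback card
    "request_id": "a1b2c3",  # lets the compute node match the reply
}

wire_message = json.dumps(config_request) + "\n"  # one request per line
```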
Correspondingly, after the agent program on the computing node receives the configuration request through the serial device, the following steps are sequentially executed:
step S8 is executed: and the agent program reads the configuration request through the serial port equipment.
As described above, since the agent program is bound to the serial device of the virtual server, the agent program can read the configuration request through the serial device, and configure the virtual server as the backend server of the load balancer.
Step S9: and returning a configuration result to the control node.
Specifically, after configuring the virtual server as a back-end server of the load balancer, the agent writes the configuration result to the serial port device; the computing node obtains the result from the corresponding socket file and feeds the configuration result back to the control node.
The control node executes step S10: and updating the database according to the configuration result, thereby completing the configuration update of load balance.
Because the load balancer is configured in the DR mode, the configuration request may include information that the load balancer is in the direct routing mode and may further include a virtual IP address. The virtual IP address may be configured on a virtual network card of the virtual server, and ARP requests for the virtual IP address may be masked, for example by an iptables rule or a flow rule. If ARP requests for the virtual IP address were not masked, the virtual server would return an ARP response to the client; since one computing node hosts multiple virtual servers, an erroneous ARP response could be returned to the client. The client may itself be another virtual server.
It is worth mentioning that the load balancer is configured in the direct routing mode. In this mode, the response traffic of the back-end server no longer needs to pass through the load balancer on the network node but is sent directly to the client, as shown in FIG. 4. Since the response traffic of a request is usually much larger than the request traffic itself, using the agent program together with the DR mode can significantly improve the performance of the load balancer, reduce the packet forwarding pressure on the switch, and shorten the response delay of requests. The configuration work is completed automatically, without manual intervention during configuration and without obtaining the account and password of the virtual server. The configuration of the back-end server depends only on the agent program, and the configuration request is issued very quickly. No packet modification rules are needed, which simplifies the configuration steps.
In one embodiment, as shown in fig. 5, step 103 specifically performs the following sub-steps for the virtual server:
substep 1031: and acquiring the virtual IP address to be added in the configuration request.
Specifically, the configuration request includes a virtual IP address to be added, and since the load balancer is configured in the DR mode, the configuration request further includes information that the load balancer is in the DR mode, so that the agent program configures the virtual server according to the working mode of the load balancer and the virtual IP address to be added.
Substep 1032: and configuring the virtual IP address on a loopback network card in the virtual server.
Specifically, a loopback network card on the virtual server is acquired, and a virtual IP address is configured on the loopback network card.
Step 1033: and shielding an Address Resolution Protocol (ARP) request for the virtual IP address, which is initiated by the client.
Specifically, ARP requests initiated by clients for the virtual IP address are masked. By masking ARP requests for the virtual IP address, returning false ARP responses to clients can be avoided. Different operating systems may use different masking methods; this example lists the masking operations for the Windows operating system and the Linux operating system.
In a first manner (for the Linux operating system), step 1033-2 is performed: adjusting the kernel parameters in the virtual server to preset first parameter values.
Specifically, the kernel parameters arp_ignore and arp_announce in Linux are configured: arp_ignore may be set to 1 and arp_announce may be set to 2, so the first parameter values are the value 1 for arp_ignore and the value 2 for arp_announce.
An arp_ignore value of 1 means that a network card answers an ARP request only if the requested IP address is configured on that network card itself. For example, if a server has two network cards and one of them receives an ARP request whose target address belongs to the other card, the card that received the request will not respond; it responds only when the requested address is its own. An arp_announce value of 2 means that the host always uses the most appropriate local address as the source of ARP announcements, which avoids advertising the virtual IP configured on the loopback card to the network.
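A minimal sketch of the Linux-side operations described in substeps 1032 and 1033-2 (adding the virtual IP to the loopback card and setting the two kernel parameters) is shown below; the exact commands an agent would run are an assumption of this sketch.
```python
import subprocess

def run(cmd):
    # Raise on failure so the agent can report an error back over the channel.
    subprocess.run(cmd, check=True)

def configure_dr_backend(vip):
    """Configure a Linux guest as an LVS-DR back end for the given virtual IP."""
    # arp_ignore=1: answer ARP only for addresses configured on the receiving
    # interface; arp_announce=2: always announce with the best local address.
    for iface in ("all", "lo"):
        run(["sysctl", "-w", f"net.ipv4.conf.{iface}.arp_ignore=1"])
        run(["sysctl", "-w", f"net.ipv4.conf.{iface}.arp_announce=2"])
    # Bind the virtual IP to the loopback card with a /32 mask so it is not
    # treated as a routed network inside the guest.
    run(["ip", "addr", "add", f"{vip}/32", "dev", "lo"])
```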
In a second manner (for the Windows operating system), step 1033-4 is performed: adjusting the parameters on the loopback network card to preset second parameter values.
Specifically, in the Windows operating system the parameters of the loopback network card may be adjusted, for example by setting weakhostsend to enabled and weakhostreceive to enabled. The second parameter value is therefore "enabled".
In this embodiment, by configuring the virtual IP address on the loopback network card and masking ARP requests for the virtual IP address, the DR mode can be configured with simple operations, while the masking ensures that the configured virtual server receives its packets correctly.
Further, the kernel parameters in the virtual server may be detected before step 1033-2 is performed; the detection flow can be as shown in FIG. 6.
Step 1033-0: detecting whether the kernel parameters in the virtual server are the first parameter values. If they are not, the operation of adjusting the kernel parameters in the virtual server to the preset first parameter values is performed, that is, step 1033-2 is performed; otherwise, step 1033-1 is performed.
Step 1033-1: determining that the kernel parameters are correct.
Further, before step 1033-4 is executed, a parameter of a loopback network card of the virtual server may be detected; the flow of the detection can be as shown in fig. 7.
Step 1033-3: detecting whether the parameter of the loopback network card in the virtual server is the second parameter value. If it is not, the operation of adjusting the parameter of the loopback network card to the preset second parameter value is performed, that is, step 1033-4 is performed; otherwise, step 1033-5 is performed: determining that the parameters on the loopback network card are correct.
In this embodiment, by first detecting whether the parameter on the loopback network card is already the second parameter value, or whether the kernel parameters are already the first parameter values, and treating them as correct when they are, unnecessary configuration actions can be avoided and the configuration speed is increased.
In one embodiment, to prevent the virtual IP address from being lost after the virtual server is restarted, step 1032-1 may be performed after step 1032, as shown in FIG. 8:
step 1032-1: and saving the virtual IP address into a configuration file of the computing node or a registry of the computing node.
Specifically, if the virtual server is restarted, the virtual IP on the loopback network card will disappear. In the Windows operating system, persistent storage of the virtual IP address can be achieved by storing it in the registry; in the Linux operating system, persistent storage is achieved by storing the virtual IP address in a configuration file.
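For the Linux case, a sketch of recording the address so it can be restored after a reboot is shown below; the file path is only an assumption, since the patent does not name one. At boot, the agent (or an init script) would re-add each recorded address to the loopback card.
```python
import os

# Hypothetical location where the agent records the virtual IP addresses it
# must restore after a reboot; the actual file is an implementation choice.
PERSIST_FILE = "/etc/lb-agent/virtual_ips.conf"

def persist_vip(vip):
    """Record the virtual IP so it can be re-added to the loopback card on boot."""
    os.makedirs(os.path.dirname(PERSIST_FILE), exist_ok=True)
    existing = set()
    if os.path.exists(PERSIST_FILE):
        with open(PERSIST_FILE) as f:
            existing = {line.strip() for line in f if line.strip()}
    if vip not in existing:
        with open(PERSIST_FILE, "a") as f:
            f.write(vip + "\n")
```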
In one embodiment, steps 1032-00 to 1032-03 may be performed before step 1032 in order to ensure that the virtual IP address can be configured successfully. The flow is shown in FIG. 9.
Step 1032-00: and detecting whether a loop network card exists in the virtual server, and if the loop network card does not exist, executing the step 1032-01. If the loopback network card exists, step 1032-02 is performed.
Step 1032-01: a loopback network card is created on the virtual server.
Step 1032-02: if the loopback network card exists, detecting whether the loopback network card is started, and if the loopback network card is not started, executing the step 1032-03.
Step 1032-03: and starting the loop-back network card.
In this embodiment, by detecting whether a loopback network card exists in the virtual server, creating one when it does not exist, and making sure the card is started, it is ensured that the virtual IP address can subsequently be configured on the loopback network card.
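A sketch of this detection for a Linux guest is shown below; on Linux the lo interface normally exists, so the check mostly reduces to making sure it is up, and the command names are assumptions of the sketch.
```python
import subprocess

def loopback_ready(ifname="lo"):
    """Check that the loopback interface exists and is up, starting it if needed."""
    probe = subprocess.run(["ip", "link", "show", ifname],
                           capture_output=True, text=True)
    if probe.returncode != 0:
        # Interface missing; on Linux "lo" is built in, so this branch mostly
        # matters for other guest types. Here the sketch simply reports failure.
        return False
    if "UP" not in probe.stdout and "UNKNOWN" not in probe.stdout:
        # Start the loopback card before the virtual IP is configured on it.
        subprocess.run(["ip", "link", "set", ifname, "up"], check=True)
    return True
```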
In one embodiment, after the communication channel is formed, the configuration of the virtual server can also be cancelled through the communication channel. The flow is shown in FIG. 10:
step 101: a corresponding agent is deployed on the virtual server.
Step 102: and connecting the socket file on the computing node with the serial port equipment in the virtual server to form a communication channel between the computing node and the virtual server, so that the agent program can configure the virtual server as a back-end server of a load balancer.
Step 104: and if the agent program acquires a cancellation request issued by the computing node through the communication channel, the agent program deletes the configured virtual IP address from the virtual server according to the cancellation request, and the cancellation request is used for indicating to cancel the configuration of the load balancer.
Specifically, the agent first judges whether the loopback network card exists; if it does not, the process ends. The agent then judges whether the virtual IP is configured on the loopback network card; if it is, the configuration is cancelled and the virtual IP is removed from the configuration file or the registry.
In this embodiment, the configuration of the virtual server is cancelled through the agent program, which makes cancellation flexible.
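A sketch of the cancellation path (step 104) for a Linux guest, reversing the earlier configuration, is shown below; the command names and the persistence store are the same assumptions as in the earlier sketches.
```python
import subprocess

def cancel_dr_backend(vip):
    """Remove the virtual IP from the loopback card if it is currently configured."""
    show = subprocess.run(["ip", "addr", "show", "dev", "lo"],
                          capture_output=True, text=True)
    if show.returncode != 0:
        return  # no loopback card, nothing to cancel
    if vip in show.stdout:
        subprocess.run(["ip", "addr", "del", f"{vip}/32", "dev", "lo"], check=True)
    # The corresponding entry in the persistence store (configuration file or
    # registry) would be removed here as well.
```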
In one embodiment, after the virtual server is configured, the computing node periodically receives health check requests for the virtual server to check whether it is available; if it is not, the virtual server is marked as unavailable so that the load balancer no longer forwards packets to it.
The process by which the agent configures the virtual server as a back-end server of the load balancer is described below with reference to the interaction diagram in FIG. 11.
Through the control interface, the tenant selects a virtual server in the cloud platform to add to a forwarding group, that is, selects the virtual server to be added as a back-end server of the load balancer so that it can serve user traffic.
Accordingly, the control node sequentially performs the following steps S0-S3:
step S0: and receiving information for adding the virtual server as a forwarding group.
Step S1: and verifying the validity of the information and storing the added information in a local database.
Step S2: and returning an adding result to the tenant through the control interface so as to respond to the adding operation of the tenant.
Step S3: and sending an update request of the load balancing table to the network node of the tenant.
Correspondingly, after receiving the update request sent by the control node, the network node sequentially executes the following steps S4 to S5:
step S4: and updating the load balancing table, and completing corresponding deployment updating of load balancing by updating the keepalived configuration file and reloading the keepalived process.
Step S5: and returning the deployment result of load balancing to the control node.
Correspondingly, after receiving the deployment result returned by the network node, the control node sequentially executes the following steps S6 to S7:
step S6: the database is updated.
Step S7: the compute node is notified to send a configuration request to the agent.
Correspondingly, after the agent program on the computing node receives the configuration request through the serial device, the steps S8 to S9 are sequentially executed:
step S8 is executed: and the agent program reads the configuration request through the serial port equipment.
Step S9: and returning a configuration result to the control node.
The control node executes step S10: and updating the database according to the configuration result, thereby completing the configuration update of load balance.
The above steps S0 to S10 are the same as the steps S0 to S10 in fig. 3, and are not described in detail here.
The network node performs step S11: periodically initiating a health check request to the computing node.
Specifically, a back-end server of the load balancer may become unavailable, for example because it is shut down or its service is stopped. To avoid failures of requests initiated by clients, the network node where the load balancer is located therefore periodically initiates a health check request to the computing node; the interval may be, for example, 3 seconds or 5 seconds.
The computing node performs step S12: and carrying out health check on each virtual server and returning a check result to the network node.
Specifically, each virtual server performs its own health check and the check results are returned to the network node. If a check result indicates that a virtual server is unavailable, the load balancer on the network node may set that virtual server in the forwarding group to inactive, so that requests sent by clients are not forwarded to the unavailable virtual server.
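The patent does not specify how the health of a back-end virtual server is determined; one simple assumed check, sketched below, is whether the back-end service still accepts TCP connections.
```python
import socket

def backend_healthy(ip, port, timeout=2.0):
    """Assumed health check: does the back-end service accept a TCP connection?"""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

# A back end whose check fails would be reported to the network node so the
# load balancer can set it to inactive and stop forwarding client requests.
```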
The above embodiments can be combined with and refer to one another; for example, the embodiments described above may be combined, but the combinations are not limited to these, and the embodiments may be combined into new embodiments in any way that does not introduce a contradiction.
An embodiment of the present application further provides a server, as shown in FIG. 12, comprising: at least one processor 201; and a memory 202 communicatively coupled to the at least one processor 201. The memory 202 stores instructions executable by the at least one processor 201, and the instructions are executed by the at least one processor 201 so that the at least one processor 201 can execute the above configuration method of the load balancer.
The memory 202 and the processor 201 are connected by a bus, which may include any number of interconnected buses and bridges that link one or more of the various circuits of the processor 201 and the memory 202. The bus may also link various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. A bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. The data processed by the processor is transmitted over a wireless medium via an antenna, which further receives the data and transmits the data to the processor.
The processor 201 is responsible for managing the bus and general processing, and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. The memory 202 may be used to store data used by the processor 201 when performing operations.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program. The computer program realizes the above-described method embodiments when executed by a processor.
Those skilled in the art can understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing related hardware. The program is stored in a storage medium and includes several instructions that enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the invention, and that various changes in form and details may be made therein without departing from the spirit and scope of the invention in practice.

Claims (11)

1. A configuration method of a load balancer is applied to a computing node, the computing node is connected with the load balancer in a communication mode, at least two virtual servers are deployed on the computing node, and the method comprises the following steps:
for each of the virtual servers: deploying a corresponding agent program on the virtual server, wherein the agent program is bound with serial port equipment of the virtual server;
and connecting the socket file on the computing node with the serial port device to form a communication channel between the computing node and the virtual server, so that the agent program configures the virtual server as a back-end server of the load balancer.
2. The method of configuring a load balancer according to claim 1, wherein after forming a communication channel between the compute node and the virtual server, the method further comprises:
and if the agent program acquires a configuration request issued by the computing node through the communication channel, the agent program configures the virtual server as a back-end server of a load balancer according to the configuration request, and the configuration request comprises information that the load balancer is in a direct routing mode.
3. The method of claim 2, wherein the agent configures the virtual server as a backend server of the load balancer according to the configuration request, comprising:
acquiring a virtual IP address in the configuration request;
configuring the virtual IP address on a loopback network card in the virtual server;
and shielding an Address Resolution Protocol (ARP) request for the virtual IP address, which is initiated by the client.
4. The method of configuring a load balancer according to claim 3, wherein after said configuring the virtual IP address on a loopback network card in the virtual server, the method further comprises:
and saving the virtual IP address into a configuration file of the computing node or a registry of the computing node.
5. The method for configuring the load balancer according to claim 3 or 4, wherein the masking of the Address Resolution Protocol (ARP) request for the virtual IP address initiated by the client comprises:
adjusting the kernel parameter in the virtual server to a preset first parameter value;
alternatively,
and adjusting the parameters on the loop network card to preset second parameter values.
6. The method of configuring a load balancer according to claim 5, wherein the method further comprises:
detecting whether the kernel parameter in the virtual server is the first parameter value, and if the kernel parameter is not the first parameter value, executing an operation of adjusting the kernel parameter in the virtual server to a preset first parameter value;
alternatively,
and detecting whether the parameter of the loop network card in the virtual server is the second parameter value, and if the kernel parameter is not the second parameter value, executing an operation of adjusting the parameter on the loop network card to a preset second parameter value.
7. The method of configuring a load balancer according to claim 3 or 4, wherein the method further comprises:
detecting whether the loopback network card exists in the virtual server, and if the loopback network card does not exist, establishing the loopback network card on the virtual server; and if the loopback network card exists, the operation of configuring the virtual IP address on the loopback network card in the virtual server is executed.
8. The method of configuring a load balancer according to claim 7, wherein before said performing the operation of configuring the virtual IP address on a loopback card in the virtual server, the method further comprises:
and detecting whether the loopback network card is started or not, and starting the loopback network card if the loopback network card is not started.
9. The method of configuring a load balancer according to claim 1, wherein the method further comprises:
and if the agent program acquires a cancellation request issued by the computing node through the communication channel, the agent program deletes the configured virtual IP address from the virtual server according to the cancellation request, wherein the cancellation request is used for indicating to cancel the virtual IP address of the load balancer.
10. A server, comprising:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of configuring a load balancer as claimed in any one of claims 1-9.
11. A computer-readable storage medium, storing a computer program, wherein the computer program, when executed by a processor, implements the method for configuring a load balancer of any one of claims 1 to 9.
CN202110758268.1A 2021-07-05 2021-07-05 Configuration method of load balancer, server and storage medium Pending CN113691389A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110758268.1A CN113691389A (en) 2021-07-05 2021-07-05 Configuration method of load balancer, server and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110758268.1A CN113691389A (en) 2021-07-05 2021-07-05 Configuration method of load balancer, server and storage medium

Publications (1)

Publication Number Publication Date
CN113691389A true CN113691389A (en) 2021-11-23

Family

ID=78576675

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110758268.1A Pending CN113691389A (en) 2021-07-05 2021-07-05 Configuration method of load balancer, server and storage medium

Country Status (1)

Country Link
CN (1) CN113691389A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103441867A (en) * 2013-08-15 2013-12-11 国云科技股份有限公司 Method for updating internal network resource allocation of virtual machine
CN103595801A (en) * 2013-11-18 2014-02-19 中标软件有限公司 Cloud computing system and real-time monitoring method for virtual machine in cloud computing system
CN103685567A (en) * 2013-12-31 2014-03-26 曙光云计算技术有限公司 Virtual application server configuration method under cloud environment
CN106815059A (en) * 2016-12-31 2017-06-09 广州勤加缘科技实业有限公司 Linux virtual server LVS automates O&M method and operational system
WO2019100605A1 (en) * 2017-11-21 2019-05-31 平安科技(深圳)有限公司 Platform-as-a-service paas container platform construction method, server, system, and storage medium
CN110011842A (en) * 2019-03-28 2019-07-12 山东超越数控电子股份有限公司 A kind of initiated configuration method of Virtual cluster

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114979277A (en) * 2022-05-23 2022-08-30 江苏保旺达软件技术有限公司 Network request forwarding method and device, computer equipment and storage medium
CN114979277B (en) * 2022-05-23 2024-03-05 江苏保旺达软件技术有限公司 Network request forwarding method and device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
US10992637B2 (en) Detecting hardware address conflicts in computer networks
US20180203719A1 (en) Image file conversion method and apparatus
US11856065B2 (en) Data transmission for service integration between a virtual private cloud and an intranet
EP3668056B1 (en) Method and device for transmitting data processing request
CN110572439B (en) Cloud monitoring method based on metadata service and virtual forwarding network bridge
US20080184354A1 (en) Single sign-on system, information terminal device, single sign-on server, single sign-on utilization method, storage medium, and data signal
US10678465B2 (en) Seamless migration of storage volumes between storage arrays
JP2009500702A (en) Method and system for managing virtual instances of physical ports attached to a network
US11201760B2 (en) Data forwarding method and apparatus based on operating system kernel bridge
US20150372854A1 (en) Communication control device, communication control program, and communication control method
CN113691389A (en) Configuration method of load balancer, server and storage medium
JP2018510538A (en) Network sharing method and apparatus
CN110324202B (en) Method and device for detecting line quality
US20130081139A1 (en) Quarantine network system, server apparatus, and program
CN113626139B (en) High-availability virtual machine storage method and device
EP3407553A1 (en) Pppoe message transmission method and pppoe server
CN115185637A (en) Communication method and device for PaaS component management end and virtual machine agent
US20200201667A1 (en) Virtual machine live migration method, apparatus, and system
CN114697191A (en) Resource migration method, device, equipment and storage medium
CN109218415B (en) Distributed node management method, node and storage medium
US9535874B2 (en) Host embedded controller interface bridge
CN113194115A (en) Method for automatically deploying client, network equipment and storage medium
CN109039680B (en) Method and system for switching main Broadband Network Gateway (BNG) and standby BNG and BNG
CN111884837A (en) Migration method and device of virtual encryption machine and computer storage medium
KR102482151B1 (en) System and method for transmitting and receiving data based on bridgehead network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination