CN112104566A - Load balancing processing method and device


Info

Publication number
CN112104566A
Authority
CN
China
Prior art keywords
message
instance
computing node
target
load balancing
Prior art date
Legal status
Granted
Application number
CN202010989058.9A
Other languages
Chinese (zh)
Other versions
CN112104566B (en)
Inventor
陈佳业
肖福龙
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202010989058.9A
Publication of CN112104566A
Application granted
Publication of CN112104566B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/12 Avoiding congestion; Recovering from congestion
    • H04L47/125 Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers

Landscapes

  • Engineering & Computer Science
  • Computer Networks & Wireless Communication
  • Signal Processing
  • Data Exchanges In Wide-Area Networks

Abstract

The application discloses a load balancing processing method and device. The method comprises the following steps: obtaining a first message, wherein the first message is used for executing a load balancing access service on a first computing node and comprises link information; when it is determined that the first message comes from the first computing node and no link information exists on the first computing node, determining a target back-end instance according to the first message, wherein the target back-end instance is used for responding to a request of an instance in the first computing node; when it is determined that the target back-end instance runs on the first computing node, determining a source address and a destination address of the first message according to the target back-end instance, and storing the source address and the destination address into the link information to obtain a second message; and sending the second message to the target back-end instance. The method and the device solve the problem of excessive load on load balancing network nodes in the related art.

Description

Load balancing processing method and device
Technical Field
The present application relates to the field of load balancing technologies, and in particular, to a load balancing processing method and apparatus.
Background
The current technical schemes for realizing four-layer load balancing traffic forwarding in the cloud mainly fall into two types: first, a centralized load balancing gateway provides the load balancing service and processes, in a centralized manner, the traffic of in-cloud instances that need to access that service; second, distributed load balancing traffic is processed by traditional physical load balancing devices. In the centralized scheme, every instance with permission to access load balancing must steer its load balancing traffic to the corresponding centralized load balancing node for processing, so that node has to handle a large amount of traffic and its load is likely to become very high. This degrades performance and causes problems such as excessive latency, reduced throughput and packet loss, and it also leads to poor scalability and a wide scope of impact when a fault occurs.
Compared with the centralized load balancing scheme, the distributed scheme based on physical devices performs load balancing in a distributed manner, which reduces the failure risk of centralized load balancing and improves processing performance. However, because traditional physical load balancers cannot be controlled centrally, setting and adjusting forwarding policies across the whole network is difficult, which greatly reduces the flexibility of load balancing adjustment and makes it difficult to reasonably scale the gateway in or out according to traffic conditions.
No effective solution has yet been proposed for the problem of excessive load on load balancing network nodes in the related art.
Disclosure of Invention
The present application mainly aims to provide a load balancing processing method and device, so as to solve the problem of excessive load on load balancing network nodes in the related art.
In order to achieve the above object, according to an aspect of the present application, there is provided a processing method for load balancing, the processing method being applied to at least one computing node in a distributed system, the method including: obtaining a first message, wherein the first message is used for executing a load balancing access service on a first computing node, and the first message comprises link information; when the first message is determined to be from a first computing node and the link information does not exist on the first computing node, determining a target back-end instance according to the first message, wherein the target back-end instance is used for responding to a request of the first computing node; when the target back-end instance is determined to run on the first computing node, determining a source address and a destination address of the first message according to the target back-end instance, and storing the source address and the destination address into the link information to obtain a second message; and sending the second message to the target back-end instance.
Further, after the first message is obtained, the method includes: when it is determined that the first message comes from the first computing node and the link information in the first message exists on the first computing node, determining the target back-end instance according to back-end instance information in the link information, and storing MAC layer information in the link information to the MAC layer of the first message to obtain a third message; and sending the third message to the target back-end instance.
Further, determining the target backend instance according to the first packet includes: calculating the hash value of the first message; calculating the hash value to obtain a back-end instance selection value; and determining the target backend instance according to the backend instance selection value.
Further, determining the target backend instance according to the backend instance selection value comprises: if the back-end instance selection value is matched with a first preset value, determining that the target back-end instance of the first message is a local instance; and if the back-end instance selection value is matched with a second preset value, determining that the target back-end instance of the first message is a remote instance.
Further, after determining that the target backend instance of the first packet is a remote instance, the method includes: determining a source address and a destination address on the remote instance according to the target back-end instance, and storing the source address and the destination address into the link information of the first message to obtain a fourth message; and sending the fourth message to the target back-end instance.
Further, after the first packet is obtained, the method includes: and judging whether the IP address in the first message is matched with a preset address or not, and if the IP address in the first message is not matched with the preset address, stopping the processing of the load balancing access service.
Further, after the first packet is obtained, the method includes: if the first message is initiated by a second computing node and the link information in the first message exists in the first computing node, storing the back-end instance information in the link information in the first message to obtain a fifth message; and sending the fifth message to the target back-end instance.
In order to achieve the above object, according to another aspect of the present application, a processing apparatus for load balancing is provided. The device is applied to at least one computing node of a distributed system and comprises the following steps: an obtaining unit, configured to obtain a first packet, where the first packet is used to execute a load balancing access service on a first computing node, and the first packet includes link information; a first determining unit, configured to determine a target backend instance according to a first packet when the first packet is from a first computing node and the link information does not exist on the first computing node, where the target backend instance is used to respond to a request of the first computing node; a second determining unit, configured to determine, according to the target backend instance, a source address and a destination address of the first packet when the target backend instance operates on the first computing node, and store the source address and the destination address in the link information to obtain a second packet; and the sending unit is used for sending the second message to the target back-end instance.
In order to achieve the above object, according to another aspect of the present application, there is provided a computer-readable storage medium including a stored program, wherein the program performs the processing method of load balancing according to any one of the above.
In order to achieve the above object, according to another aspect of the present application, there is provided a processor configured to execute a program, where the program executes to perform the load balancing processing method described in any one of the above.
Through the application, the following steps are adopted: obtaining a first message, wherein the first message is used for executing a load balancing access service on a first computing node and comprises link information; when it is determined that the first message comes from the first computing node and the link information does not exist on the first computing node, determining a target back-end instance according to the first message, wherein the target back-end instance is used for responding to a request of the first computing node; when it is determined that the target back-end instance runs on the first computing node, determining a source address and a destination address of the first message according to the target back-end instance, and storing the source address and the destination address into the link information to obtain a second message; and sending the second message to the target back-end instance. In this way, the problem of excessive load on load balancing network nodes in the related art is solved, and the effect of reducing the load on the load balancing network nodes is further achieved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application. In the drawings:
fig. 1 is a flowchart of a processing method for load balancing according to an embodiment of the present application;
fig. 2 is a flowchart of an optional load balancing processing method provided according to an embodiment of the present application; and
fig. 3 is a schematic diagram of a processing apparatus for load balancing according to an embodiment of the present application.
Detailed Description
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be used. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an embodiment of the application, a processing method for load balancing is provided.
Fig. 1 is a flowchart of a processing method of load balancing according to an embodiment of the present application. As shown in fig. 1, the method comprises the steps of:
step S101, a first message is obtained, wherein the first message is used for executing the load balancing access service on the first computing node, and the first message includes link information.
When a node in the network receives a load balancing access request, it obtains the message information of the request, determines whether the request was initiated locally or remotely, and then executes the corresponding processing.
Optionally, in the load balancing processing method provided in this embodiment of the present application, after the first message is obtained, the method includes: when it is determined that the first message comes from the first computing node and the first computing node has the link information in the first message, determining a target back-end instance according to back-end instance information in the link information, and storing MAC layer information in the link information to the MAC layer of the first message to obtain a third message; and sending the third message to the target back-end instance.
Each node in the distributed network is deployed with an instance program that handles load balancing access services. After a node receives a load balancing access request, it obtains the message information of the request and judges whether the request was initiated locally or by a remote node. If the request was initiated by the local node, it further judges whether link information for the request already exists locally, that is, whether the request is a new request or one already present in the history records. If the link information for the request exists in the local node's records, the back-end instance information in the locally found link information is used as the target back-end instance information of the request, that back-end instance information is stored into the MAC layer of the request's message, and the message carrying the back-end instance information is sent to the corresponding target back-end instance. For example, suppose a distributed network system has three computing nodes: node 1, node 2 and node 3. When node 1 receives a load balancing access request and confirms through analysis that the request was initiated by node 1, node 1 first looks up locally whether the link information in the request exists. If it does, node 1 already has a record of the same load balancing access request. In this case, node 1 uses the back-end instance information in the locally found link as the target back-end instance information of the request, stores it into the MAC layer of the request's message, and then sends the message to the corresponding target back-end instance.
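As an illustration only, this branch can be sketched in Python roughly as follows; the table structure, field names and helper names are assumptions made for this sketch and are not part of the patent:

link_table = {}   # five_tuple -> {"backend_mac": ..., "backend": ...}

def handle_local_request(pkt: dict):
    link = link_table.get(pkt["five_tuple"])
    if link is None:
        return None                        # new link: fall through to scheduling (step S102)
    # Reuse the back-end recorded on the link and write it into the MAC layer
    # of the message to form the "third message".
    pkt["dst_mac"] = link["backend_mac"]
    pkt["link_info"] = link
    return pkt                             # then sent to the recorded target back-end instance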
Optionally, in the load balancing processing method provided in this embodiment of the present application, after the first packet is obtained, the method includes: and judging whether the IP address in the first message is matched with the preset address or not, and stopping processing the load balancing access service if the IP address in the first message is not matched with the preset address.
The load balancing access service processing instance program deployed on each network node matches traffic whose destination IP is the virtual service IP. Before the instance program further parses and processes the message information of a load balancing access service request, it judges whether the destination IP of the request is the virtual service IP preset in the instance program; if not, the received message is dropped and processing of the request stops.
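A minimal sketch of this pre-check, assuming a preset set of virtual service IPs; the names and the sample address are illustrative only:

VIRTUAL_SERVICE_IPS = {"10.0.0.100"}       # assumed preset virtual service IP(s)

def accept_for_load_balancing(pkt: dict) -> bool:
    # Only traffic whose destination IP is a preset virtual service IP is
    # processed further; everything else is dropped.
    return pkt["dst_ip"] in VIRTUAL_SERVICE_IPS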
Optionally, in the load balancing processing method provided in this embodiment of the present application, after the first packet is obtained, the method includes: if the first message is initiated by the second computing node and the link information in the first message exists in the first computing node, determining a target back-end instance according to back-end instance information in the link information; storing the back-end instance information into the first message to obtain a fifth message; and sending the fifth message to the target back-end instance.
A load balancing service access request sent from the load balancing gateway or another computing node to the local computing node is encapsulated with a VXLAN header, and this feature is used to identify it. If the request is identified as not coming from the local computing node, but the link information in the request already exists on the local computing node, then the message of the request is not the first message of the link and the link was created earlier. The local computing node then directly performs the NAT operation, uses the back end recorded in the local link information as the target back-end instance information, stores the target back-end instance information into the message of the request, and then routes the message carrying the target back-end instance information to the corresponding target back-end instance.
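A hedged sketch of this branch is given below: a VXLAN-encapsulated request from the gateway or another node whose link already exists on this node is NAT-ed directly to the back-end recorded on the link. The port name, table structure and field names are assumptions:

VXLAN_PORT = "vxlan0"
link_table = {}   # five_tuple -> {"backend_ip": ..., "backend_port": ...}

def handle_remote_request(pkt: dict):
    if pkt["in_port"] != VXLAN_PORT:
        return None                                   # not remote-originated traffic
    link = link_table.get(pkt["five_tuple"])
    if link is None:
        return None                                   # first message of the link: go through DNAT scheduling
    # Destination NAT: replace the virtual service address with the recorded back-end.
    pkt["dst_ip"], pkt["dst_port"] = link["backend_ip"], link["backend_port"]
    return pkt                                        # then routed to the target back-end instance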
Step S102, when the first message is determined to be from the first computing node and no link information exists on the first computing node, determining a target back-end instance according to the first message, wherein the target back-end instance is used for responding to a request of the first computing node.
When a node receives a load balancing access request, determines that the request was initiated by the local node, but finds that the link information in the request's message does not exist locally, the request is a new request, and the node determines the target back-end instance according to the message information of the request. For example, suppose a distributed network system has three computing nodes: node 1, node 2 and node 3. When node 1 receives a load balancing access request and determines through analysis that the request was initiated by node 1, node 1 first looks up locally whether the link information in the request exists. If it does not, the current request is a brand new request for node 1. In this case, node 1 determines the target back-end instance information according to the message information of the current request; for example, if the determined target back-end instance information is instance information on node 2, node 1 takes that instance on node 2 as the target back-end instance of the current request.
Optionally, in the load balancing processing method provided in the embodiment of the present application, determining the target backend instance according to the first packet includes: calculating the hash value of the first message; calculating the hash value to obtain a back-end instance selection value; and determining the target back-end instance according to the back-end instance selection value.
When the local computing node judges that the link information of the load balancing service access request does not exist locally, it determines a target back-end instance according to the message information of the request: it computes the hash value of the request's message, stores the hash value in a specified register, and then performs a calculation on the value in the register, for example an AND calculation between the register value and a fixed mask (65535 in the flow table rules described later). The result of this calculation is taken as the back-end instance selection value, and the target back-end instance of the request is then determined according to the back-end instance selection value.
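A rough Python sketch of this selection step is given below; the hash function, the mask and the final reduction to an index are illustrative assumptions rather than the exact computation used by the instance program:

import zlib

def backend_selection_value(five_tuple: tuple, backend_count: int) -> int:
    reg = zlib.crc32(repr(five_tuple).encode())   # hash value stored in the "register"
    return (reg & 65535) % backend_count          # AND with 65535, reduced to a back-end choice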
Optionally, in the load balancing processing method provided in the embodiment of the present application, determining the target backend instance according to the backend instance selection value includes: if the back-end instance selection value is matched with the first preset value, determining that the target back-end instance of the first message is a local instance; and if the back-end instance selection value is matched with the second preset value, determining that the target back-end instance of the first message is a remote instance.
A first preset value and a second preset value are preset in the instance program and are used to determine whether the target back-end instance corresponding to the request is on the local computing node or on another computing node. The specific judgment is as follows: if the back-end instance selection value matches the first preset value, the target back-end instance of the request is determined to be on the local computing node; if the back-end instance selection value matches the second preset value, the target back-end instance of the request is determined to be a remote instance, that is, the target back-end instance of the request is on another computing node.
Optionally, in the load balancing processing method provided in this embodiment of the present application, after determining that the target backend instance of the first packet is a remote instance, the method includes: determining a source address and a destination address according to the target back-end example, and storing the source address and the destination address into the link information of the first message to obtain a fourth message; and sending the fourth message to the target back-end instance.
If the target back-end instance corresponding to the request is not on the local computing node but on a remote computing node, the local computing node determines the source address and the destination address of the message according to the information of the selected target back-end instance, performs the link submit action at the same time, and stores the back-end instance information on the designated label of the link so that subsequent messages can use this information directly. Finally, the message with the stored link information is sent, through routing processing, to the corresponding target back-end instance.
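As a sketch only, the remote back-end branch might look as follows in Python; the field names, including the link label, are assumptions:

link_table = {}

def to_remote_backend(pkt: dict, backend: dict) -> dict:
    pkt["src_mac"] = backend["local_router_mac"]        # source address after rewriting
    pkt["dst_mac"] = backend["mac"]                     # destination address of the target instance
    link_table[pkt["five_tuple"]] = {"label": backend}  # link submit plus back-end stored on the label
    pkt["link_info"] = link_table[pkt["five_tuple"]]    # the "fourth message" carries the link information
    return pkt                                          # then routed (DVR) to the remote node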
Step S103, when the target back-end instance is determined to run on the first computing node, determining a source address and a destination address of the first message according to the target back-end instance, and storing the source address and the destination address into link information to obtain a second message;
If the target back-end instance corresponding to the request is on the local computing node, that is, the instance initiating the load balancing access request and the back-end instance selected by scheduling are on the same computing node, the local computing node only needs to determine the source address and the destination address of the message according to the selected target back-end instance information, set the connection information (without performing a link submit action), and store the back-end instance information on the designated label of the link, so that subsequent messages of this link can use the information directly.
And step S104, sending the second message to the target back-end instance.
And when the target back-end instance is on the local computing node, sending the message with the link information stored to the target back-end instance of the local computing node.
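Putting the steps together, a plain-Python sketch of the locally initiated path of steps S101 to S104 is shown below (the gateway-originated path is sketched earlier). The scheduling stub, the MAC values and the helper names are assumptions, not the patent's code:

VIRTUAL_SERVICE_IPS = {"10.0.0.100"}
LOCAL_ROUTER_MAC = "02:00:00:00:00:fe"
link_table = {}

def schedule_backend(pkt):
    # Stand-in for the hash-based selection described in step S102 above.
    return {"mac": "02:00:00:00:00:01", "node": "local"}

def process_local_request(pkt):
    if pkt["dst_ip"] not in VIRTUAL_SERVICE_IPS:
        return None                                       # not load-balancing traffic: stop
    link = link_table.get(pkt["five_tuple"])
    if link is not None:                                   # link already known: reuse its back-end
        pkt["dst_mac"] = link["backend"]["mac"]
        return pkt
    backend = schedule_backend(pkt)                        # step S102: choose the target back-end
    pkt["src_mac"], pkt["dst_mac"] = LOCAL_ROUTER_MAC, backend["mac"]   # step S103: set addresses
    link_table[pkt["five_tuple"]] = {"backend": backend}                # store into the link information
    return pkt                                             # step S104: forward to the target back-end

# Example: the first message of a new connection to the virtual service IP.
msg = {"five_tuple": ("192.168.0.5", 34567, "10.0.0.100", 80, "tcp"), "dst_ip": "10.0.0.100"}
process_local_request(msg)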
Preferably, in the present application, the instance programs deployed on each computing node are uniformly implemented on the basis of a multi-level flow table designed with the OpenFlow protocol. OpenFlow is a protocol for connecting a controller and SDN devices; with a uniform OpenFlow protocol there is no need to develop additional connection components for each node, so unified centralized control over the network nodes is achieved. Fig. 2 is a flowchart of a load balancing access service processing method provided according to an embodiment of the present application. The flow is implemented on the basis of seven OpenFlow flow tables: UNICAST, EGRESS, INGRESS, VIP, MULTIPATH, DNAT and DVR. The design mainly covers the following three scenarios: (1) the back-end instance selected by the scheduler is on the current computing node; (2) the back-end instance selected by the scheduler for a load balancing access initiated on the source computing node is on a remote computing node; (3) DNAT needs to be performed on the target computing node according to the back-end instance selected by the source computing node, converting the virtual service IP into the IP of the specific target back-end instance. The specific rules designed in each flow table are as follows:
a) UNICAST flow table specific rules
Rule 1: the flow from the far-end computing node or the load balancing gateway node is coming through encapsulating the vxlan header, so that the flow can be considered to be the flow from the far-end computing node or the load balancing gateway node only by judging that the message inlet is the vxlan port at the local computing node, and the flow is directly sent to the INGRESS flow table for processing.
Rule 2: except for the incoming traffic of the vxlan port, the default other traffic is the load balancing access service initiated by the local virtual machine or the container instance, so the default rule is to send the local traffic to the EGRESS table flow processing.
b) INGRESS flow table specific rules
Rule 1: matching the destination IP in the INGRESS flow table as the flow of the virtual service IP, and executing a ct (nat) action, wherein the successful execution is carried out on the premise that a link exists locally, if the link does not exist locally, the action is not executed, but is submitted to a DNAT flow table to execute a nat operation and submit the link, and the ct (nat) action can be executed next time by the rule.
Rule 2: other processing, not discussed herein.
c) EGRESS flow table specific rules
Rule 1: and matching the target IP in the EGRESS flow table to be the flow of the preset virtual service IP, if the target IP is matched with the flow of the preset virtual service IP, performing local link searching processing, and sending the link to the VIP flow table for processing when the link is searched or the link is not searched.
Rule 2: and the flow of the default non-access preset virtual service IP is discarded.
d) VIP flow table specific rules
Rule 1: and the specified label (ct _ label) is equal to 0 and is the flow for accessing the load balancing service, which indicates that no link is found locally, and then the hash value is calculated according to the message information, loaded onto the specified register and finally submitted to the multicast flow table.
Rule 2: specifying that the label (ct _ label) is equal to 0 and that the traffic that is not accessing the load balancing service is abnormal traffic, a drop action is directly performed.
Rule 3: the default rule indicates that the designated label (ct _ label) is not equal to 0, that is, the linked traffic is found locally, for this part of traffic, scheduling selection of a back-end instance is not required, and only the back-end instance information stored on the designated label (ct _ label) needs to be loaded into the MAC layer of the message and then submitted to the DVR.
e) MULTIPATH flow table specific rules
Rule 1: and storing the hash value of the calculation message in the VIP flow table on a designated register, performing AND calculation according to the designated register and 65535 in the MULTIPATH flow table, if the result is equal to x, executing the operation of setting the source MAC address and the destination MAC address, simultaneously storing the source MAC address and the destination MAC address on a link, submitting the link, and then submitting the link to the DVR flow table, wherein the rule is used for preventing a target back-end instance from being on a source calculation node.
Rule 2: and storing the hash value of the calculated message in the VIP flow table on a designated register, performing AND calculation according to the designated register and 65535 in the MULTIPATH flow table, if the result is equal to y, indicating that the target back-end instance is on the current calculation node, performing setting of the source MAC address and the destination MAC address, and submitting the destination MAC address to the DNAT flow table.
f) DNAT flow table specific rules
Rule 1: and when the MAC address of the selected back-end instance, the accessed virtual service IP, the virtual service port and the specified state variable (ct _ state) are all in accordance with the conditions at the same time according to the scheduling, executing a dnat operation, storing the MAC address of the selected back-end instance on the link, so that the data on the specified label (ct _ label) can be directly used when the subsequent messages with the link are searched, and finally the data are submitted to a DVR flow table.
Rule 2: default rules, directly submitted to the DVR flow table.
As can be known from the rule description of each flow table, the specific steps of the compute node accessing the load balancing service in the same cluster may be as follows:
for the access of the load balancing service initiated by the local computing node, firstly, the access enters the EGRESS flow table from the UNICAST flow table, and local link lookup is executed in the EGRESS flow table, and the step is divided into two cases: 1. if no link exists, the message is the first message of the link, the message is sent to the VIP flow table, the hash value of the message is calculated according to the information of the five-tuple of the message and is stored on a designated register, then the message is sent to the MULTIPATH flow table, and the specific back-end example is hit according to the value on the register, wherein the back-end examples are divided into two types: the selected back-end instance is not on the local computing node, and on the far-end computing node, the source and destination MAC addresses are set, the link submitting action is executed, and the back-end instance information is stored on the designated label (ct _ label) of the link, so that the information can be directly used by the subsequent messages linked with the link, and finally the messages are sent to the distributed routing flow table DVR for routing operation. Secondly, the selected back-end instance is at the computing node, and the initiated instance and the scheduling selected back-end instance are both in the same computing node, so that link information does not need to be submitted in the MULTIPATH flow table, only source and destination MAC addresses need to be set, then the link information is sent to the DNAT flow table for a DNAT operation, meanwhile, the back-end instance information is stored on a linked designated label (ct _ label), and finally the back-end instance information is also sent to the distributed routing flow table DVR for a routing operation. 2. If the link exists, the message is not the first message of the link, the link is created before, and the selected backend instance information is stored on the specified label (ct _ label), the link information only needs to be loaded into the source and destination MAC address fields in the VIP flow table, and finally the link information is also sent to the distributed routing flow table DVR for routing operation.
For a request to access the load balancing service coming from the load balancing gateway or another computing node, since it is VXLAN-encapsulated it is sent from the UNICAST flow table to the INGRESS flow table, where the ct(nat) operation is performed. There are likewise two cases. If no link exists locally, the message is the first message of the link: it is sent to the DNAT flow table for the dnat action, the back-end instance information is stored on the link, and the message is sent to the distributed routing flow table DVR for the routing operation. If a local link exists, the message is not the first message of the link; the link was created earlier, so the nat operation is performed directly, the message first hits the default rule in the DNAT flow table, and it is then sent to the distributed routing flow table DVR for the routing operation.
Through this distributed load balancing design based on multi-level flow tables of the OpenFlow protocol, first, east-west load balancing traffic in the cloud no longer needs to be forwarded to a network node; the load balancing service traffic is processed on the computing nodes, which reduces the load on the network nodes. Second, the load balancing DNAT links are scattered across the computing nodes where the back-end instances reside, which narrows the range affected by a fault and increases the number of connections the whole cluster can carry without increasing the latency of link table lookups. Third, the scheme is significant for centralized control of load balancing in the cloud, since there is no need to develop another set of control components based on a different protocol to manage the load balancing related resources. Finally, the distributed load balancing implemented with the multi-level flow tables can be scaled horizontally entirely under user control.
To sum up, the load balancing processing method provided in the embodiment of the present application obtains a first message, wherein the first message is used for executing a load balancing access service on a first computing node and comprises link information; when it is determined that the first message comes from the first computing node and no link information exists on the first computing node, determines a target back-end instance according to the first message, wherein the target back-end instance is used for responding to a request of the first computing node; when it is determined that the target back-end instance runs on the first computing node, determines a source address and a destination address of the first message according to the target back-end instance, and stores the source address and the destination address into the link information to obtain a second message; and sends the second message to the target back-end instance. This solves the problem of excessive load on load balancing network nodes in the related art and thereby achieves the effect of reducing the load on the load balancing network nodes.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
The embodiment of the present application further provides a processing apparatus for load balancing, and it should be noted that the processing apparatus for load balancing in the embodiment of the present application may be used to execute the processing method for load balancing provided in the embodiment of the present application. The following describes a processing apparatus for load balancing according to an embodiment of the present application.
Fig. 3 is a schematic diagram of a processing apparatus for load balancing according to an embodiment of the present application. As shown in fig. 3, the apparatus is applied to at least one computing node of a distributed system, and includes: an acquisition unit 301, a first determination unit 302, a second determination unit 303, and a transmission unit 304.
Specifically, the obtaining unit 301 is configured to obtain a first packet, where the first packet is used to execute a load balancing access service on a first computing node, and the first packet includes link information;
a first determining unit 302, configured to determine a target backend instance according to a first packet when the first packet is from a first computing node and no link information exists on the first computing node, where the target backend instance is configured to respond to a request of the first computing node;
a second determining unit 303, configured to determine, according to the target backend instance, a source address and a destination address of the first packet when the target backend instance operates on the first computing node, and store the source address and the destination address in the link information to obtain a second packet;
a sending unit 304, configured to send the second packet to the target backend instance.
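For illustration, the four units can be sketched as plain Python methods; the class name, the scheduling stub and the address values are assumptions rather than the patent's implementation:

class LoadBalancingProcessor:
    def __init__(self, local_mac):
        self.local_mac = local_mac
        self.link_table = {}

    def obtain(self, raw):                        # obtaining unit 301: parse the first message
        return {"five_tuple": raw["five_tuple"], "dst_ip": raw["dst_ip"], "link_info": {}}

    def determine_backend(self, pkt):             # first determining unit 302
        # Stand-in for the hash-based scheduling of the target back-end instance.
        return {"mac": "02:00:00:00:00:01", "node": "local"}

    def set_addresses(self, pkt, backend):        # second determining unit 303
        pkt["src_mac"], pkt["dst_mac"] = self.local_mac, backend["mac"]
        pkt["link_info"]["backend"] = backend     # stored into the link information
        self.link_table[pkt["five_tuple"]] = pkt["link_info"]
        return pkt                                # the "second message"

    def send(self, pkt, backend):                 # sending unit 304
        print(f"forwarding {pkt['five_tuple']} to back-end {backend['mac']}")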
To sum up, in the processing apparatus for load balancing provided in the embodiment of the present application, the obtaining unit 301 obtains a first message, wherein the first message is used for executing a load balancing access service on a first computing node and comprises link information; the first determining unit 302 determines a target back-end instance according to the first message when the first message comes from the first computing node and no link information exists on the first computing node, wherein the target back-end instance is used for responding to a request of the first computing node; the second determining unit 303 determines a source address and a destination address of the first message according to the target back-end instance when the target back-end instance runs on the first computing node, and stores the source address and the destination address into the link information to obtain a second message; and the sending unit 304 sends the second message to the target back-end instance. This solves the problem of excessive load on load balancing network nodes in the related art and thereby achieves the effect of reducing the load on the load balancing network nodes.
Optionally, in the processing apparatus for load balancing provided in the embodiment of the present application, the apparatus includes: a third determining unit, configured to determine, after obtaining the first packet, a target backend instance according to backend instance information in the link information when the first packet is from the first computing node and the first computing node has link information in the first packet, and store the backend instance information in the link information to an MAC layer of the first packet, to obtain a third packet; and the second sending unit is used for sending the third message to the target back-end instance.
Optionally, in the processing apparatus for load balancing provided in the embodiment of the present application, the first determining unit includes: the first calculation module is used for calculating the hash value of the first message; the second calculation module is used for calculating the hash value to obtain a back-end instance selection value; and the determining module is used for determining the target back-end instance according to the back-end instance selection value.
Optionally, in the processing apparatus for load balancing provided in the embodiment of the present application, the determining module includes: the fourth determining submodule is used for determining the target back-end instance of the first message as the local instance under the condition that the back-end instance selection value is matched with the first preset value; and the fifth determining submodule is used for determining that the target back-end instance of the first message is the remote instance under the condition that the back-end instance selection value is matched with the second preset value.
Optionally, in the processing apparatus for load balancing provided in the embodiment of the present application, the apparatus includes: a sixth determining unit, configured to determine, after determining that the target backend instance of the first message is a remote instance, a source address and a destination address according to the target backend instance, and store the source address and the destination address in link information of the first message to obtain a fourth message; and the third sending unit is used for sending the fourth message to the target back-end instance.
Optionally, in the processing apparatus for load balancing provided in the embodiment of the present application, the apparatus includes: and the judging unit is used for judging whether the IP address in the first message is matched with the preset address or not after the first message is acquired, and stopping the processing of the load balancing access service if the IP address in the first message is not matched with the preset address.
Optionally, in the processing apparatus for load balancing provided in the embodiment of the present application, the apparatus includes: a seventh determining unit, configured to determine, after the first packet is obtained, a target backend instance according to backend instance information in the link information when the first packet is initiated by the second computing node and the link information in the first packet exists in the first computing node; the storage unit is used for storing the back-end instance information into the first message to obtain a fifth message; and the fourth sending unit is used for sending the fifth message to the target back-end instance.
The processing device for load balancing includes a processor and a memory, the acquiring unit 301, the first determining unit 302, the second determining unit 303, the sending unit 304, and the like are all stored in the memory as program units, and the processor executes the program units stored in the memory to implement corresponding functions.
The processor comprises a kernel, and the kernel calls the corresponding program unit from the memory. The kernel can be set to be one or more, and the load of the load balancing network node is reduced by adjusting the kernel parameters.
The memory may include volatile memory in a computer readable medium, Random Access Memory (RAM) and/or nonvolatile memory such as Read Only Memory (ROM) or flash memory (flash RAM), and the memory includes at least one memory chip.
An embodiment of the present invention provides a storage medium, on which a program is stored, where the program, when executed by a processor, implements the processing method for load balancing.
The embodiment of the invention provides a processor, which is used for running a program, wherein the processing method for load balancing is executed when the program runs.
The embodiment of the invention provides equipment, which comprises a processor, a memory and a program which is stored on the memory and can run on the processor, wherein the processor executes the program and realizes the following steps: obtaining a first message, wherein the first message is used for executing a load balancing access service on a first computing node, and the first message comprises link information; when the first message is determined to be from a first computing node and the link information does not exist on the first computing node, determining a target back-end instance according to the first message, wherein the target back-end instance is used for responding to a request of the first computing node; when the target back-end instance is determined to run on the first computing node, determining a source address and a destination address of the first message according to the target back-end instance, and storing the source address and the destination address into the link information to obtain a second message; and sending the second message to the target back-end instance.
The processor executes the program and further realizes the following steps: after the first message is acquired, the method comprises the following steps: when the first message is determined to be from the first computing node and the first computing node has link information in the first message, determining the target backend instance according to backend instance information in the link information, and storing the backend instance information in the link information to an MAC layer of the first message to obtain a third message; and sending the third message to the target back-end instance.
The processor executes the program and further realizes the following steps: determining a target backend instance according to the first packet comprises: calculating the hash value of the first message; calculating the hash value to obtain a back-end instance selection value; and determining the target backend instance according to the backend instance selection value.
The processor executes the program and further realizes the following steps: determining the target backend instance according to the backend instance selection value comprises: if the back-end instance selection value is matched with a first preset value, determining that the target back-end instance of the first message is a local instance; and if the back-end instance selection value is matched with a second preset value, determining that the target back-end instance of the first message is a remote instance.
The processor executes the program and further realizes the following steps: after determining that the target backend instance of the first packet is a remote instance, the method includes: determining a source address and a destination address on the remote instance according to the target back-end instance, and storing the source address and the destination address into the link information of the first message to obtain a fourth message; and sending the fourth message to the target back-end instance.
The processor executes the program and further realizes the following steps: after the first message is acquired, the method comprises the following steps: and judging whether the IP address in the first message is matched with a preset address or not, and if the IP address in the first message is not matched with the preset address, stopping the processing of the load balancing access service.
The processor executes the program and further realizes the following steps: after the first message is acquired, the method comprises the following steps: if the first message is initiated by a second computing node and the link information in the first message exists in the first computing node, determining a target back-end instance according to back-end instance information in the link information; storing the back-end instance information in the link information into the first message to obtain a fifth message; and sending the fifth message to the target back-end instance. The device herein may be a server, a PC, a PAD, a mobile phone, etc.
The present application further provides a computer program product adapted to perform a program for initializing the following method steps when executed on a data processing device: obtaining a first message, wherein the first message is used for executing a load balancing access service on a first computing node, and the first message comprises link information; when the first message is determined to be from a first computing node and the link information does not exist on the first computing node, determining a target back-end instance according to the first message, wherein the target back-end instance is used for responding to a request of the first computing node; when the target back-end instance is determined to run on the first computing node, determining a source address and a destination address of the first message according to the target back-end instance, and storing the source address and the destination address into the link information to obtain a second message; and sending the second message to the target back-end instance.
When executed on a data processing device, is further adapted to perform a procedure for initializing the following method steps: after the first message is acquired, the method comprises the following steps: when the first message is determined to be from the first computing node and the first computing node has link information in the first message, determining the target backend instance according to backend instance information in the link information, and storing the backend instance information in the link information to an MAC layer of the first message to obtain a third message; and sending the third message to the target back-end instance.
When executed on a data processing device, is further adapted to perform a procedure for initializing the following method steps: determining a target backend instance according to the first packet comprises: calculating the hash value of the first message; calculating the hash value to obtain a back-end instance selection value; and determining the target backend instance according to the backend instance selection value.
When executed on a data processing device, is further adapted to perform a procedure for initializing the following method steps: determining the target backend instance according to the backend instance selection value comprises: if the back-end instance selection value is matched with a first preset value, determining that the target back-end instance of the first message is a local instance; and if the back-end instance selection value is matched with a second preset value, determining that the target back-end instance of the first message is a remote instance.
When executed on a data processing device, is further adapted to perform a procedure for initializing the following method steps: after determining that the target backend instance of the first packet is a remote instance, the method includes: determining a source address and a destination address on the remote instance according to the target back-end instance, and storing the source address and the destination address into the link information of the first message to obtain a fourth message; and sending the fourth message to the target back-end instance.
When executed on a data processing device, is further adapted to perform a procedure for initializing the following method steps: after the first message is acquired, the method comprises the following steps: and judging whether the IP address in the first message is matched with a preset address or not, and if the IP address in the first message is not matched with the preset address, stopping the processing of the load balancing access service.
When executed on a data processing device, is further adapted to perform a procedure for initializing the following method steps: after the first message is acquired, the method comprises the following steps: if the first message is initiated by a second computing node and the link information in the first message exists in the first computing node, determining a target back-end instance according to back-end instance information in the link information; storing the back-end instance information in the link information into the first message to obtain a fifth message; and sending the fifth message to the target back-end instance.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in the form of computer-readable media, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer-readable medium does not include a transitory computer-readable medium such as a modulated data signal or a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A processing method for load balancing, applied to at least one computing node of a distributed system, the method comprising:
obtaining a first message, wherein the first message is used for executing a load balancing access service on a first computing node, and the first message comprises link information;
when the first message is determined to be from the first computing node and the link information does not exist on the first computing node, determining a target back-end instance according to the first message, wherein the target back-end instance is used for responding to a request of the first computing node;
when the target back-end instance is determined to run on the first computing node, determining a source address and a destination address of the first message according to the target back-end instance, and storing the source address and the destination address into the link information to obtain a second message;
and sending the second message to the target back-end instance.
2. The method of claim 1, wherein after obtaining the first message, the method comprises:
when the first message is determined to be from the first computing node and the link information in the first message exists on the first computing node, determining the target back-end instance according to back-end instance information in the link information, and storing MAC layer information in the link information into the MAC layer of the first message to obtain a third message;
and sending the third message to the target back-end instance.
3. The method of claim 1, wherein determining the target back-end instance according to the first message comprises:
calculating the hash value of the first message;
performing a calculation on the hash value to obtain a back-end instance selection value;
and determining the target back-end instance according to the back-end instance selection value.
4. The method of claim 3, wherein determining the target backend instance according to the backend instance selection value comprises:
if the back-end instance selection value matches a first preset value, determining that the target back-end instance of the first message is a local instance;
and if the back-end instance selection value matches a second preset value, determining that the target back-end instance of the first message is a remote instance.
5. The method of claim 4, wherein after determining that the target back-end instance of the first message is a remote instance, the method comprises:
determining a source address and a destination address on the remote instance according to the target back-end instance, and storing the source address and the destination address into the link information of the first message to obtain a fourth message;
and sending the fourth message to the target back-end instance.
6. The method of claim 1, wherein after obtaining the first message, the method comprises:
determining whether the IP address in the first message matches a preset address, and if the IP address in the first message does not match the preset address, stopping the processing of the load balancing access service.
7. The method of claim 1, wherein after obtaining the first message, the method comprises:
if the first message is initiated by a second computing node and the link information in the first message exists on the first computing node, determining the target back-end instance according to back-end instance information in the link information;
storing the back-end instance information in the link information into the first message to obtain a fifth message;
and sending the fifth message to the target back-end instance.
8. A processing apparatus for load balancing, applied to at least one computing node of a distributed system, comprising:
an obtaining unit, configured to obtain a first message, wherein the first message is used for executing a load balancing access service on a first computing node, and the first message comprises link information;
a first determining unit, configured to determine a target back-end instance according to the first message when the first message is from the first computing node and the link information does not exist on the first computing node, wherein the target back-end instance is used for responding to a request of the first computing node;
a second determining unit, configured to determine, according to the target back-end instance, a source address and a destination address of the first message when the target back-end instance runs on the first computing node, and store the source address and the destination address into the link information to obtain a second message;
and the sending unit is used for sending the second message to the target back-end instance.
9. A computer-readable storage medium, characterized in that the storage medium comprises a stored program, wherein, when the program runs, the processing method for load balancing according to any one of claims 1 to 7 is performed.
10. A processor, configured to run a program, wherein, when the program runs, the processing method for load balancing according to any one of claims 1 to 7 is performed.
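Purely as an illustration of the selection described in claims 3 and 4, the sketch below hashes the first message, derives a back-end instance selection value, and classifies the selected target as a local or remote instance; the SHA-256 hash, the modulo step, and the backend record shape are assumptions rather than limitations of the claims.

import hashlib

def select_backend(first_msg: bytes, backends: list, local_node: str):
    # Calculate a hash value of the first message, derive a back-end instance
    # selection value from it, and pick the corresponding back-end instance.
    digest = hashlib.sha256(first_msg).digest()
    hash_value = int.from_bytes(digest[:8], "big")
    selection = hash_value % len(backends)               # back-end instance selection value
    target = backends[selection]                         # each entry assumed to be {"node": ..., "addr": ...}
    is_local = (target["node"] == local_node)            # local instance vs remote instance
    return target, is_local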
CN202010989058.9A 2020-09-18 2020-09-18 Processing method and device for load balancing Active CN112104566B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010989058.9A CN112104566B (en) 2020-09-18 2020-09-18 Processing method and device for load balancing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010989058.9A CN112104566B (en) 2020-09-18 2020-09-18 Processing method and device for load balancing

Publications (2)

Publication Number Publication Date
CN112104566A true CN112104566A (en) 2020-12-18
CN112104566B CN112104566B (en) 2024-02-27

Family

ID=73758886

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010989058.9A Active CN112104566B (en) 2020-09-18 2020-09-18 Processing method and device for load balancing

Country Status (1)

Country Link
CN (1) CN112104566B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016180188A1 (en) * 2015-10-09 2016-11-17 中兴通讯股份有限公司 Distributed link establishment method, apparatus and system
CN108476243A (en) * 2016-01-21 2018-08-31 华为技术有限公司 For the distributed load equalizing of network service function link
CN107846364A (en) * 2016-09-19 2018-03-27 阿里巴巴集团控股有限公司 A kind for the treatment of method and apparatus of message
WO2018077184A1 (en) * 2016-10-26 2018-05-03 新华三技术有限公司 Traffic scheduling
US20180131583A1 (en) * 2016-11-07 2018-05-10 General Electric Company Automatic provisioning of cloud services
CN106797405A (en) * 2016-12-14 2017-05-31 华为技术有限公司 Distributed load equalizing system, health examination method and service node
US10554604B1 (en) * 2017-01-04 2020-02-04 Sprint Communications Company L.P. Low-load message queue scaling using ephemeral logical message topics
CN108449282A (en) * 2018-05-29 2018-08-24 华为技术有限公司 A kind of load-balancing method and its device
CN110753072A (en) * 2018-07-24 2020-02-04 阿里巴巴集团控股有限公司 Load balancing system, method, device and equipment
CN109587062A (en) * 2018-12-07 2019-04-05 北京金山云网络技术有限公司 Load-balancing information synchronous method, apparatus and processing equipment
CN110995656A (en) * 2019-11-06 2020-04-10 深信服科技股份有限公司 Load balancing method, device, equipment and storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114237883A (en) * 2021-12-10 2022-03-25 北京天融信网络安全技术有限公司 Security service chain creation method, message transmission method, device and equipment

Also Published As

Publication number Publication date
CN112104566B (en) 2024-02-27

Similar Documents

Publication Publication Date Title
JP7417825B2 (en) slice-based routing
JP6526848B2 (en) Multipath Routing with Distributed Load Balancer
US10320683B2 (en) Reliable load-balancer using segment routing and real-time application monitoring
US11463511B2 (en) Model-based load balancing for network data plane
US11743176B2 (en) Packet processing method and system, and device
US10333780B2 (en) Method, apparatus and computer program product for updating load balancer configuration data
Desmouceaux et al. 6lb: Scalable and application-aware load balancing with segment routing
US11902108B2 (en) Dynamic adaptive network
CN107181681B (en) SDN two-layer forwarding method and system
US20210368006A1 (en) Request response method, device, and system applied to bit torrent system
Desmouceaux et al. SRLB: The power of choices in load balancing with segment routing
CN108124021B (en) Method, device and system for obtaining Internet Protocol (IP) address and accessing website
CN112104566A (en) Load balancing processing method and device
CN112087382B (en) Service routing method and device
KR101586474B1 (en) Apparatus and method for openflow routing
CN108023774B (en) Cross-gateway migration method and device
CN107707661B (en) Load balancing resource management method and device
CN112583740A (en) Network communication method and device
WO2021017970A1 (en) Method and apparatus for scheduling access request, medium, and device
US20170005916A1 (en) Network programming
CN116566989A (en) Server selection method and device
WO2018001057A1 (en) Message forwarding control method and device, and broadband access system
CN116455906A (en) CDN edge computing network-based secondary scheduling method, system and medium
CN118138425A (en) Method and device for managing edge equipment
CN118055081A (en) VLAN MAPPING implementation method, equipment and medium for polymerization port

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant