CN113918326A - Request processing method and device - Google Patents

Request processing method and device

Info

Publication number
CN113918326A
Authority
CN
China
Prior art keywords
request
management server
address
load balancer
network card
Prior art date
Legal status
Granted
Application number
CN202111152557.3A
Other languages
Chinese (zh)
Other versions
CN113918326B (en)
Inventor
赵晓伟
刘云冲
矫恒浩
王宝云
Current Assignee
Juhaokan Technology Co Ltd
Original Assignee
Juhaokan Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Juhaokan Technology Co Ltd filed Critical Juhaokan Technology Co Ltd
Priority to CN202111152557.3A
Publication of CN113918326A
Application granted
Publication of CN113918326B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5072 Grid computing
    • G06F 9/5005 Allocation of resources to service a request
    • G06F 9/5011 Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F 9/5027 Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/5083 Techniques for rebalancing the load in a distributed system

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application provides a request processing method and apparatus, which are used to solve the problem in the related art that a load balancer may send a request originating from a management server back to that same management server, so that the request cannot be processed. The method is applied to a first management server, where the first management server is any one of a plurality of management servers deployed on a control plane in a cloud network system, and the plurality of management servers are each connected to a load balancer. The method includes: the first management server generates a first request, where the destination address in the first request is the address of the load balancer and the first request is used to request the control plane to provide a first control service; and when the first management server determines that the addresses configured on the loopback (LO) network card deployed on the first management server include the address of the load balancer, the first management server provides the first control service according to the first request.

Description

Request processing method and device
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for processing a request.
Background
In a cloud platform cluster, the management server includes a component that provides an interface service (API server), so that the management server can respond to requests from service servers and provide corresponding services for them. At present, in order to improve the availability of the service, a plurality of management servers are usually arranged in the cluster to jointly provide services for the service servers in the cluster, and the management servers may also provide services for one another, thereby achieving high availability of the management servers.
Since a plurality of management servers are arranged in the cluster, a processor needs to be configured to implement load balancing among the management servers; for example, load balancing among the management servers can be implemented by a Network Load Balancer (NLB). That is, the load balancer is connected to each management server in the cluster, receives requests destined for the management servers, and then forwards each request to a specific management server according to the load condition of each management server. However, in some cases, if a certain management server sends a request to the load balancer and the load balancer determines that this same management server has the lowest load, the request is forwarded back to that management server. In this case, after receiving the request, the management server finds that both the source address and the destination address in the request are its own address and cannot process the request, so some requests cannot be processed.
Disclosure of Invention
The embodiment of the present application provides a method and an apparatus for processing a request, which are used to solve the problem in the related art that a load balancer may send a request originating from a certain management server back to that same management server, so that the request cannot be processed.
In a first aspect, an embodiment of the present application provides a method for processing a request, where the method is applied to a first management server, where the first management server is any one of a plurality of management servers deployed in a control plane in a cloud network system, and the plurality of management servers are respectively connected to a load balancer, and the method includes:
the first management server generates a first request, wherein a destination address in the first request is an address of the load balancer, and the first request is used for requesting the control plane to provide a first control service;
and when the first management server determines that the addresses configured on the LO network card deployed on the first management server include the address of the load balancer, providing the first control service according to the first request.
Based on this scheme, a request generated by a management server on the control plane is processed by that management server itself and is not sent out, so the request is not forwarded through the load balancer, which solves the problem that requests generated by some management servers cannot be processed.
In a second aspect, an embodiment of the present application further provides an apparatus for processing a request, where the apparatus is applied to a first management server, or the apparatus is the first management server, where the first management server is any one of a plurality of management servers deployed in a control plane in a cloud network system, and the plurality of management servers are respectively connected to a load balancer, and the apparatus includes:
a request module, configured to generate a first request, where a destination address in the first request is an address of the load balancer, and the first request is used to request the control plane to provide a first control service;
and a processing module, configured to provide the first control service according to the first request when it is determined that the addresses configured on the LO network card deployed on the first management server include the address of the load balancer.
In a third aspect, an embodiment of the present application provides another apparatus for processing a request, including a memory and a processor;
a memory, configured to store program instructions;
and a processor, configured to call the program instructions stored in the memory and perform the method in any implementation manner of the first aspect according to the obtained program.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing computer instructions, which, when executed on a computer, cause the computer to perform the above method.
In addition, for technical effects brought by any one implementation manner of the second aspect to the fourth aspect, reference may be made to technical effects brought by different implementation manners of the first aspect, and details are not described here.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
FIG. 1 is a diagram of a system architecture for processing a request according to an embodiment of the present application;
FIG. 2 is a flow chart of a method for processing a request according to an embodiment of the present application;
fig. 3 is a flowchart of an implementation method in a scenario of sending a request according to an embodiment of the present application;
fig. 4 is a flowchart of an implementation method in a scenario of receiving a request according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an apparatus for processing a request according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of another apparatus for processing a request according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein.
First, terms of art related to the present application will be described:
(1) Network Load Balancer (NLB): also known as a four-layer load balancer or L4LB, it operates at the fourth layer of the Open System Interconnection (OSI) model. It can be connected to a plurality of servers; after receiving a request, it uses a flow hash algorithm to determine which of these servers the request needs to be forwarded to, thereby implementing load balancing among the servers connected to it.
(2) Loopback adapter (LO) network card: a special network interface located in a server that is not connected to any actual device and is used to implement communication between processes within the same server. A plurality of IP addresses may be configured on one LO network card.
(3) Routing table: also known as a routing information base, it is a data table stored in a router or a networked computer. The routing table stores paths that point to specific network addresses.
Referring now to fig. 1, a system architecture for processing requests is provided according to an embodiment of the present application. It should be understood that the embodiments of the present application are not limited to the system shown in fig. 1, and each apparatus in fig. 1 may be hardware, software divided by function, or a combination of the two. As shown in fig. 1, the system for processing requests provided in the embodiment of the present application includes a control plane, and a first management server and a second management server deployed on the control plane. It should be noted that a plurality of management servers may be deployed on the control plane, and the number of management servers deployed on the control plane is not specifically limited in the present application; fig. 1 takes the case where the control plane deploys a first management server and a second management server as an example. The system further includes a data plane and a plurality of service servers deployed on the data plane, and the number of service servers deployed on the data plane is likewise not specifically limited. The system also includes a load balancer used to implement load balancing among the management servers deployed on the control plane, and the plurality of management servers on the control plane are connected to the load balancer. Optionally, the load balancer may be a separate server or processor.
When any of the servers included in the system shown in fig. 1 (including the management servers and the service servers) needs to request resources, obtain system information (for example, how many servers the system includes or the load condition of each server in the system), or obtain specific service information (for example, the services the system can provide, the names of the services, or how to use the services), the server sends a request to the load balancer; that is, the destination address in the request is the address of the load balancer. After receiving the request, the load balancer can determine which management server should process the request according to the load condition of each management server connected to it. After determining the management server (taking the first management server as an example), the load balancer may modify the destination address of the request to the address of the first management server and send the request to the first management server.
Optionally, the first management server receives and processes the request and returns response information. For example, if the request is for a web access service, the first management server may include in the response information an address that can provide the service and send the response information to the load balancer. After receiving the response information, the load balancer sends it to the server corresponding to the source address of the request (i.e., the server that sent the request, subsequently called the source server), and after receiving the response information, the source server can obtain the corresponding service from the corresponding address according to the indication in the response information.
In some cases, after receiving a request from the first management server (i.e., the source address of the request is the address of the first management server), the load balancer forwards the request back to the first management server if it determines that, among the connected management servers, the first management server has the lowest load. In the related art, when the first management server sends a request to the load balancer, the source address and source port number in the request are the address and port number of the first management server, and the destination address and destination port number are the address and port number of the load balancer; after sending the request, the first management server needs to receive first reply information from the load balancer (that is, reply information whose source address and source port number are the address and port number of the load balancer, and whose destination address and destination port number are the address and port number of the first management server) in order to establish a Transmission Control Protocol (TCP) connection. Similarly, after receiving the request forwarded by the load balancer, the first management server also generates reply information, referred to as second reply information; because the first management server finds that the source address and source port number in the request are its own address and port number, the second reply information is sent to the first management server itself, and its source address and source port number are also the address and port number of the first management server. However, what the first management server is waiting for is the first reply information from the load balancer, which never arrives, so the first management server cannot successfully establish the TCP connection with the load balancer. As a result, the request issued by the first management server is never processed.
The embodiments of the present application provide a method and an apparatus for processing requests: by modifying the configuration parameters of a management server, a request generated by that management server is processed locally rather than sent out, so that the request can be processed.
Referring to fig. 2, a flowchart of a method for processing a request according to an embodiment of the present application is provided. The method may be applied to any one of a plurality of management servers deployed in a control plane, and for convenience of description, the method performed by the first management server is taken as an example for description. The method specifically comprises the following steps:
the first management server generates a first request 201.
The first request may be used to request the control plane to provide a first control service. For example, the first request may be that the first management server requests some memory resources from the control plane, or that the first management server requests the control plane to provide a web browsing function. The destination address in the first request is the address of the load balancer.
As an example, taking the cluster in which the first management server is deployed as a Kubernetes (k8s) cluster, both the advertise-address parameter (advertise-address) and the bind-address parameter (bind-address) included in the configuration parameters of the first management server may be configured as the address of the load balancer, so that the control service address of the first management server is the address of the load balancer.
For example, the following can be set in kube-apiserver.yaml:
--bind-address=${NLB_IP}
--advertise-address=${NLB_IP}
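A quick way to check the effect of these two flags, offered here as a sketch that is not part of the original patent text and assuming kubectl access to the cluster and the default secure port 6443, is to look at the endpoints of the built-in kubernetes service, which reflect the advertised address of each API server:
# assumes kubectl is configured against this control plane
kubectl -n default get endpoints kubernetes -o wide
# expected: the endpoint list shows ${NLB_IP}:6443 rather than the
# individual management-server addresses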
202, the first management server determines that the addresses configured on the LO network card deployed on the first management server include the address of the load balancer.
Optionally, after the first management server generates the request, it performs addressing and routing in its routing table according to the destination address in the request; when addressing, it first looks up the addresses configured on each network card inside the first management server, and only after determining that none of these matches the destination address of the first request does it address an external address. The LO network card of the first management server may be configured with a plurality of addresses, and optionally the address of the load balancer may be added to the addresses configured on the LO network card, so that the routing table of the first management server indicates that a request addressed to the address of the load balancer is sent to the LO network card. For example, the following command may be executed on the first management server: ip addr add ${NLB_IP}/32 dev lo.
Therefore, when the first management server performs addressing according to the routing table, it finds that the LO network card is configured with the destination address in the first request (i.e., the address of the load balancer); that is, it can determine that the LO network card of the first management server is the destination.
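A minimal sketch of this step and a way to verify it is given below; the value of NLB_IP is an assumed example and is not taken from the patent:
# add the load-balancer address to the LO network card
NLB_IP=203.0.113.10                      # assumed example address
ip addr add ${NLB_IP}/32 dev lo
# ask the kernel how it would route a packet to that address
ip route get ${NLB_IP}
# expected output begins with "local 203.0.113.10 dev lo", meaning a
# request addressed to the load balancer is delivered to the local LO
# network card instead of being sent out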
203, the first management server provides the first control service according to the first request.
For example, if the first request is for requesting 2 GB of storage space for the first management server, the first management server may allocate the 2 GB of storage space to itself after determining that the destination address of the first request is an address configured on its LO network card. Optionally, after completing the first request, the first management server may further generate first response information, where the first response information may include the storage path of the allocated storage space.
Based on this scheme, a request generated by a management server on the control plane is processed by that management server itself and is not sent out, so the request is not forwarded through the load balancer, which solves the problem that requests generated by some management servers cannot be processed.
In some embodiments, the first management server may be further configured to receive requests from other servers to provide control services for the other servers. Alternatively, the request received by the first management server may be from a service server included in the data plane in the system, or may be from another management server of the control plane, for example, from the second management server. The request received by the first management server is referred to as a second request, and the second request is from the second management server. It should be noted that the content requested by the second request may be the same as or different from the content requested by the first request. Alternatively, the second request may be that the second management server requests the control plane to provide a second control service, for example the second control service may be that the second management server is provided with some resources, or that the second management server is provided with some functions, such as a web page access function.
In some cases, since the first management server is connected to the load balancer, the second request may be issued by the second management server and forwarded by the load balancer to the first management server. Optionally, when the second management server generates the second request, the destination address included in the second request may be the address of the load balancer. After receiving the second request, the load balancer may determine the load condition of the management servers connected to it; if it determines that the first management server has the lowest load, it modifies the destination address in the second request to the address of the first management server and then sends the second request to the first management server. Further, when receiving the second request, the first management server may modify the destination address in the second request back to the address of the load balancer. The reason is that, since the bind-address in the configuration parameters of the first management server has been modified to the address of the load balancer, the first management server can only process requests whose destination address is the address of the load balancer; therefore the first management server needs to modify the destination address of the second request to the address of the load balancer. After the conversion, the first management server can continue to process the second request.
Alternatively, the first management server may modify the destination address in the following manner. The iptables of the first management server includes five rule chains, namely a PREROUTING chain, an INPUT chain, a FORWARD chain, an OUTPUT chain, and a POSTROUTING chain. In some embodiments, a destination network address translation (DNAT) rule may be configured on the PREROUTING chain, which is used to translate the destination address of a request received by the first management server into the address of the load balancer. For example, the PREROUTING chain may be configured as follows:
iptables -w -t nat -A PREROUTING -d ${LOCAL_IP}/32 -p tcp -m tcp --dport ${TARGET_PORT} -j DNAT --to-destination ${NLB_IP}:${TARGET_PORT}
Therefore, when the first management server receives a request, it performs the destination address conversion according to the rule configured on the PREROUTING chain and converts the destination address into the address of the load balancer.
Optionally, in order to ensure that the destination address of a request generated by the first management server itself is the address of the load balancer, the embodiment of the present application further provides that a DNAT rule is configured on the OUTPUT chain of the first management server to convert the destination address of a request generated by the first management server into the address of the load balancer. For example, the OUTPUT chain may be configured as follows (${TARGET_IP} denotes the IP address of the target server):
iptables -w -t nat -A OUTPUT -d ${TARGET_IP}/32 -p tcp -m tcp --dport 6443 -j DNAT --to-destination ${NLB_IP}:${TARGET_PORT}
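Putting the two rules together, a minimal sketch of the NAT configuration and a way to inspect it might look as follows; LOCAL_IP, TARGET_IP, TARGET_PORT and NLB_IP are placeholder values chosen for illustration rather than values defined by the patent:
# placeholder values for illustration only
LOCAL_IP=10.0.0.11        # address of the first management server
TARGET_IP=10.0.0.11       # address matched by the OUTPUT rule
NLB_IP=203.0.113.10       # address of the load balancer
TARGET_PORT=6443          # port of the control service
# rewrite the destination of requests arriving from the load balancer
iptables -w -t nat -A PREROUTING -d ${LOCAL_IP}/32 -p tcp -m tcp --dport ${TARGET_PORT} -j DNAT --to-destination ${NLB_IP}:${TARGET_PORT}
# rewrite the destination of requests generated on this server
iptables -w -t nat -A OUTPUT -d ${TARGET_IP}/32 -p tcp -m tcp --dport ${TARGET_PORT} -j DNAT --to-destination ${NLB_IP}:${TARGET_PORT}
# inspect the resulting NAT rules
iptables -w -t nat -S PREROUTING
iptables -w -t nat -S OUTPUT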
In order to further understand the solution proposed in the present application, the solution for processing requests is described in detail below with reference to specific scenarios. The following description is given in conjunction with the network protocol layers in the server. Optionally, the servers involved in this application may use the seven-layer OSI network protocol, a four-layer network protocol, or a five-layer network protocol; in the following, the servers are described using the four-layer network protocol. The four-layer network protocol includes: an application layer, a transport layer, a network layer, and a physical layer (which may also be referred to as a network interface layer or a data link layer, and is where the network card is located).
Scenario one: the scenario of sending a request.
In scenario one, the case where the first management server generates the request is taken as an example. First, a process of the first management server generates the first request; this process is subsequently referred to as the requesting process in the first management server. For the description of the related content of the first request, reference may be made to the description in the foregoing embodiments, which is not repeated here.
Optionally, the process by which the requesting process generates the first request may include: the application layer generates the first request and encapsulates it into a data packet. Since the advertise-address and bind-address parameters in the first management server have been configured (see the related description of step 201 in fig. 2 for the specific configuration), the destination address included in the first request generated by the application layer is the address of the load balancer. The transport layer establishes an end-to-end connection according to the destination address, destination port, source address and source port in the first request. Further, the network layer performs addressing and routing in the routing table of the first management server according to the destination address in the first request; for the specific addressing and routing rules, reference may be made to the related description in step 202 of fig. 2, which is not repeated here. After addressing according to the routing table, the network layer may determine that the destination of the first request is the LO network card of the first management server. Still further, the physical layer may encapsulate the first request into a data frame and then establish a connection with the LO network card, for example a Transmission Control Protocol (TCP) connection. Since the network card is located at the physical layer, the step of establishing the connection with the LO network card may also be regarded as establishing a connection between the current network card and the LO network card; as an example, the current network card may be an eth0 network card. After the connection is established, the physical layer may send the first request, encapsulated into a data frame, to the LO network card.
In some embodiments, after receiving the first request, the LO network card may forward it to another process of the first management server for processing; this process may be referred to as the service process, and each service in the server may be identified by a port number, which may be referred to as a service number. After the service process processes the first request, the processing result may be carried in first response information and returned to the LO network card; after receiving the first response information, the LO network card may return it to the eth0 network card, and the eth0 network card returns the first response information to the requesting process.
Next, referring to fig. 3, the flow of sending a request in scenario one is described with a specific embodiment, which specifically includes:
301, a requesting process generates a first request.
Optionally, the first request may be generated by the application layer; see the description in the foregoing embodiment for details, which are not repeated here.
The destination address of the first request is the address of the load balancer, and the address of the load balancer has been added to the addresses of the LO network card.
302, the requesting process sends the first request to the LO network card.
Optionally, the transport layer and the network layer may determine, according to the routing table, that the destination address of the first request is an address configured on the LO network card, and the first request may then be sent to the LO network card through the physical layer, for example forwarded to the LO network card through the eth0 network card.
303, the LO network card receives the first request and forwards the first request to the service process.
304, the service process processes the first request and generates first response information.
As an example, if the first request is for requesting a web browsing function, the service process may, after receiving the request, generate the first response information according to the service name and service address that provide the web browsing function.
305, the service process sends the first response information to the LO network card.
306, the LO network card forwards the first response information to the requesting process.
Optionally, the LO network card may forward the first response information to the eth0 network card, and the eth0 network card then passes the first response information up to the network layer, the transport layer, and so on.
Scenario two: the scenario of receiving a request.
In scenario two, the case where the first management server receives a request is taken as an example. For convenience of description, the request received by the first management server is referred to below simply as the second request. Optionally, the second request may come from another management server of the control plane or from a service server of the data plane, which is not specifically limited in this application; in the following description, the second request is described as coming from a service server. The destination address of the second request generated by the service server is the address of the load balancer (for the specific parameter configuration, refer to the related description of step 201 in fig. 2, which is not repeated here). After generating the second request, the service server sends it to the load balancer. Further, after receiving the second request, the load balancer may determine, according to the load of each management server connected to it, which management server the second request needs to be sent to. Optionally, when the load balancer determines that the first management server has the lowest load, it modifies the destination address of the second request to the address of the first management server and then forwards the second request to the first management server.
After the first management server receives the second request (for example, a certain process of the first management server receives the second request), the physical layer receives the second request, decapsulates it, and then passes it to the network layer. An address translation rule is configured on the PREROUTING chain in the network layer, which modifies the destination address of the received second request to the address of the load balancer. Since the address of the load balancer has already been added to the LO network card of the first management server, the network layer determines, when addressing according to the routing table, that the destination of the second request is the LO network card, and then sends the second request to the LO network card. After receiving the second request, the LO network card may send it to a process of the first management server, referred to here as the service process. After receiving the connection request corresponding to the second request, the service process responds to it, establishes a communication connection with the service server, processes the second request to generate second response information, and returns the second response information to the service server.
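As an illustration only, one way to observe this rewrite is to send a request from a service server directly to the first management server's address and then check the packet counter of the DNAT rule; the address 10.0.0.11 and the /version path are assumptions made for the example, not values specified by the patent:
# on a service server: send a request to the first management server's address
curl -k https://10.0.0.11:6443/version
# on the first management server: the packet counter of the PREROUTING DNAT
# rule should increase, showing that the destination address was rewritten
# to the address of the load balancer before the request reached the LO
# network card
iptables -w -t nat -L PREROUTING -n -v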
Specifically, referring to fig. 4, the present application provides a specific embodiment to introduce a flow of receiving a request in scenario two, which specifically includes:
the traffic server generates 401 a second request.
And the destination address included in the second request is the address of the load balancer. The content of the second request may refer to the description in the above embodiments, and will not be described herein.
The traffic server sends 402 a second request to the load balancer.
The load balancer determines that the load of the first management server is minimum 403.
Optionally, after determining that, among the management servers connected to it, the first management server has the lowest load, the load balancer may further modify the destination address of the second request to the address of the first management server.
404, the load balancer sends the second request to the PREROUTING chain of the first management server.
405, the PREROUTING chain modifies the destination address of the second request to the address of the load balancer.
The address translation rule configured on the PREROUTING chain modifies the destination address of the received second request to the address of the load balancer. The address of the load balancer has been added to the LO network card of the first management server.
406, the PREROUTING chain sends the second request to the LO network card.
407, the LO network card receives the second request and forwards the second request to the service process.
408, the service process processes the second request to generate second response information.
For the specific processing procedure, refer to the related description of step 304 in fig. 3.
409, the service process sends the second response information to the LO network card.
410, the LO network card sends the second response information to the load balancer.
411, the load balancer sends the second response information to the service server.
In some cases, during the creation of the system, the load balancer is generally created first, and then the management servers connected to the load balancer are created. For example, the first management server connected to the load balancer is created first; while the second management server is still being created, if the second management server needs to request a service, it can only send the request to the load balancer, because both the second management server and the first management server are connected to the load balancer, and about half of the requests forwarded by the load balancer may be forwarded back to the second management server. At this time, the second management server has not yet been created and therefore cannot provide the service. Based on this, the embodiment of the present application provides a solution: in the process of creating each management server in the system, a management server that has not been created is not connected to the load balancer at first, and is connected to the load balancer only after its creation is completed. Continuing the above example, while the second management server has not been created, requests sent to the load balancer can only be forwarded to the first management server for processing and will not be forwarded back to the second management server, which solves the problem that a service cannot be provided because the second management server has not been created.
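A minimal sketch of this staged bring-up on the second management server follows; the nlb-backend command is a hypothetical placeholder for whatever registration interface the load balancer actually offers, and the /healthz path and port 6443 are likewise assumptions:
# assumed example addresses
NLB_IP=203.0.113.10          # load balancer address (also configured on lo)
LOCAL_IP=10.0.0.12           # the second management server being created
# do not join the load balancer yet; first wait until the local control
# service answers (the request to ${NLB_IP} is served locally via the LO
# network card, as described above)
until curl -sk "https://${NLB_IP}:6443/healthz" >/dev/null; do
  sleep 5
done
# only after creation is complete, register this server as a backend
# (hypothetical command; replace with the API of the load balancer in use)
nlb-backend add --lb "${NLB_IP}" --backend "${LOCAL_IP}:6443"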
Based on the same concept as the method described above, referring to fig. 5, a schematic structural diagram of an apparatus 500 for processing a request provided in an embodiment of the present application is shown. The apparatus 500 is capable of performing the various steps of the above-described method, and will not be described in detail herein to avoid repetition. The apparatus 500 comprises: a request module 501, a processing module 502 and a receiving module 503.
A request module 501, configured to generate a first request, where a destination address in the first request is an address of the load balancer, and the first request is used to request the control plane to provide a first control service;
a processing module 502, configured to provide the first control service according to the first request when it is determined that the addresses configured on the LO network card deployed on the first management server include the address of the load balancer.
In some embodiments, the apparatus further comprises a receiving module 503:
the receiving module 503 is configured to receive a second request, where a destination address in the second request is an address of the first management server, and the second request is used to request the control plane to provide a second control service;
the processing module 502 is further configured to update a destination address in the second request to an address of the load balancer;
the processing module 502 is further configured to provide the second control service according to the second request when it is determined that the addresses configured on the LO network card deployed on the first management server include the address of the load balancer.
In some embodiments, the second request is from a second management server or a service server; the second management server is any one of the plurality of management servers other than the first management server.
In some embodiments, before determining that the addresses configured on the LO network card deployed on the first management server include the address of the load balancer, the processing module 502 is further configured to:
query a routing table according to the address of the load balancer, and determine that the behavior indicated in the routing table for the address of the load balancer is to send the first request to the LO network card.
In some embodiments, the processing module 502 is specifically configured to:
query the routing table, and determine that the behavior indicated in the routing table for the address of the first management server is to replace it with the address of the load balancer.
The embodiment of the present application further provides another apparatus 600 for implementing a request processing method, as shown in fig. 6, including: a memory 601 and a processor 602. Optionally, a communication interface 603 may also be included in the apparatus 600. The apparatus 600 may communicate with other devices through the communication interface 603, such as sending and receiving instructions or sending and receiving requests, and the communication interface 603 may be used to implement the functions that can be implemented by the receiving module in fig. 5. A memory 601 for storing program instructions. A processor 602, configured to call the program instructions stored in the memory 601, and execute any one of the methods proposed in the above embodiments according to the obtained program. For example, processor 602 may be used to implement the functionality implemented by the processing modules and request modules of FIG. 5 described above.
In the embodiment of the present application, the specific connection medium among the memory 601, the processor 602, and the communication interface 603 is not limited; for example, they may be connected by a bus, and the bus may be divided into an address bus, a data bus, a control bus, and the like.
In the embodiments of the present application, the processor may be a general-purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor.
In the embodiment of the present application, the memory may be a nonvolatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), or a volatile memory, for example a random-access memory (RAM). The memory may also be, but is not limited to, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory in the embodiments of the present application may also be a circuit or any other device capable of performing a storage function, for storing program instructions and/or data.
Embodiments of the present application also provide a computer-readable storage medium, which includes program code for causing a computer to perform the steps of the method provided by the embodiments of the present application when the program code runs on the computer.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method for processing a request is applied to a first management server, the first management server is any one of a plurality of management servers deployed by a control plane in a cloud network system, and the plurality of management servers are respectively connected with a load balancer, and the method comprises the following steps:
the first management server generates a first request, wherein a destination address in the first request is an address of the load balancer, and the first request is used for requesting the control plane to provide a first control service;
and when the first management server determines that the addresses configured on the LO network card deployed on the first management server include the address of the load balancer, providing the first control service according to the first request.
2. The method of claim 1, wherein the method further comprises:
the first management server receives a second request, wherein a destination address in the second request is an address of the first management server, and the second request is used for requesting the control plane to provide a second control service;
the first management server updates the destination address in the second request to the address of the load balancer;
and when the first management server determines that the addresses configured on the LO network card deployed on the first management server include the address of the load balancer, providing the second control service according to the second request.
3. The method of claim 1 or 2, wherein the second request is from a second management server or a service server; the second management server is any one of the plurality of management servers other than the first management server.
4. The method of claim 1 or 2, wherein before the first management server determines that the addresses configured on the LO network card deployed on the first management server include the address of the load balancer, the method further comprises:
the first management server querying a routing table according to the address of the load balancer, and determining that the behavior indicated in the routing table for the address of the load balancer is to send the first request to the LO network card.
5. The method of claim 2, wherein the first management server updating the destination address in the second request to the address of the load balancer comprises:
the first management server querying the routing table, and determining that the behavior indicated in the routing table for the address of the first management server is to replace it with the address of the load balancer.
6. An apparatus for processing a request, wherein the apparatus is applied to a first management server, or the apparatus is the first management server, the first management server is any one of a plurality of management servers deployed by a control plane in a cloud network system, and the plurality of management servers are respectively connected with a load balancer, the apparatus comprising:
a request module, configured to generate a first request, where a destination address in the first request is an address of the load balancer, and the first request is used to request the control plane to provide a first control service;
and a processing module, configured to provide the first control service according to the first request when it is determined that the addresses configured on the LO network card deployed on the first management server include the address of the load balancer.
7. The apparatus of claim 6, wherein the apparatus further comprises a receiving module:
the receiving module is configured to receive a second request, where a destination address in the second request is an address of the first management server, and the second request is used to request the control plane to provide a second control service;
the processing module is further configured to update a destination address in the second request to an address of the load balancer;
the processing module is further configured to provide the second control service according to the second request when it is determined that the addresses configured on the LO network card deployed on the first management server include the address of the load balancer.
8. The apparatus of claim 6 or 7, wherein the second request is from a second management server or a service server; the second management server is any one of the plurality of management servers other than the first management server.
9. The apparatus of claim 6 or 7, wherein before determining that the addresses configured on the LO network card deployed on the first management server include the address of the load balancer, the processing module is further configured to:
query a routing table according to the address of the load balancer, and determine that the behavior indicated in the routing table for the address of the load balancer is to send the first request to the LO network card.
10. The apparatus of claim 7, wherein the processing module is specifically configured to:
query the routing table, and determine that the behavior indicated in the routing table for the address of the first management server is to replace it with the address of the load balancer.
CN202111152557.3A 2021-09-29 2021-09-29 Method and device for processing request Active CN113918326B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111152557.3A CN113918326B (en) 2021-09-29 2021-09-29 Method and device for processing request

Publications (2)

Publication Number Publication Date
CN113918326A (en) 2022-01-11
CN113918326B CN113918326B (en) 2024-07-16

Family

ID=79237101

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111152557.3A Active CN113918326B (en) 2021-09-29 2021-09-29 Method and device for processing request

Country Status (1)

Country Link
CN (1) CN113918326B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040267920A1 (en) * 2003-06-30 2004-12-30 Aamer Hydrie Flexible network load balancing
CN106953795A (en) * 2016-01-07 2017-07-14 中兴通讯股份有限公司 Configure the method and device of many network interface cards
CN111866064A (en) * 2016-12-29 2020-10-30 华为技术有限公司 Load balancing method, device and system
CN109936635A (en) * 2019-03-12 2019-06-25 北京百度网讯科技有限公司 Load-balancing method and device
CN111970362A (en) * 2020-08-17 2020-11-20 上海势航网络科技有限公司 Vehicle networking gateway clustering method and system based on LVS
CN112738548A (en) * 2021-04-06 2021-04-30 北京百家视联科技有限公司 Streaming media scheduling method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Junhu; Xing Yongzhong: "Control Theory and Practice Strategy of Network Load Balancing" (网络负载均衡的控制理论及实践战略), Communications Technology (通信技术), no. 12, 10 December 2009 (2009-12-10) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115174581A (en) * 2022-07-06 2022-10-11 即刻雾联科技(北京)有限公司 Load balancing method and router
CN115174581B (en) * 2022-07-06 2023-04-07 即刻雾联科技(北京)有限公司 Load balancing method and router

Also Published As

Publication number Publication date
CN113918326B (en) 2024-07-16

Similar Documents

Publication Publication Date Title
CN112470436B (en) Systems, methods, and computer-readable media for providing multi-cloud connectivity
CN110198363B (en) Method, device and system for selecting mobile edge computing node
WO2020228469A1 (en) Method, apparatus and system for selecting mobile edge computing node
Qi et al. Assessing container network interface plugins: Functionality, performance, and scalability
CN104219127B (en) A kind of creation method and equipment of virtual network example
EP2750343B1 (en) Dynamic network device processing using external components
US20240345988A1 (en) Message forwarding method and apparatus based on remote direct data storage, and network card and device
CN108200165B (en) Request Transmission system, method, apparatus and storage medium
US11800587B2 (en) Method for establishing subflow of multipath connection, apparatus, and system
CN114374634A (en) Message forwarding method and network equipment
CN111193773A (en) Load balancing method, device, equipment and storage medium
CN112087382B (en) Service routing method and device
CN113364660B (en) Data packet processing method and device in LVS load balancing
CN116633934A (en) Load balancing method, device, node and storage medium
WO2015043679A1 (en) Moving stateful applications
CN113918326B (en) Method and device for processing request
CN114125983A (en) Routing method, session management entity, system and medium for mobile network user plane
CN109413224A (en) Message forwarding method and device
CN113010314A (en) Load balancing method and device and electronic equipment
CN114980359B (en) Data forwarding method, device, equipment, system and storage medium
CN116599900A (en) Cloud environment access method and device
CN114374666A (en) Message forwarding method and device, electronic equipment and storage medium
CN114024971A (en) Service data processing method, Kubernetes cluster and medium
CN112887185A (en) Communication method and device of overlay network
CN115174581B (en) Load balancing method and router

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant