CN117376356A - Load balancing scheduling method and system for Kubernetes cluster - Google Patents


Info

Publication number
CN117376356A
Authority
CN
China
Prior art keywords
node
ring
data traffic
controller
load balancing
Prior art date
Legal status
Pending
Application number
CN202311603839.XA
Other languages
Chinese (zh)
Inventor
魏海宇
高玉亚
刘庆林
吕宗辉
陈健
李小琼
杨帆
谢辉
杨晓峰
刘海洋
Current Assignee
Beijing Zorelworld Information Technology Co ltd
Original Assignee
Beijing Zorelworld Information Technology Co ltd
Priority date: 2023-11-28
Filing date: 2023-11-28
Publication date: 2024-01-09
Application filed by Beijing Zorelworld Information Technology Co ltd
Priority to CN202311603839.XA
Publication of CN117376356A


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/12 Avoiding congestion; Recovering from congestion
    • H04L 47/125 Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application discloses a load balancing scheduling method and system for a Kubernetes cluster. First, a load balancer controller is built in golang and made to interact with the API of the Kubernetes cluster to be scheduled, establishing an interaction relationship; the annotations field of the Service object in the Kubernetes cluster is then set to the name of the load balancer controller, and load balancing scheduling is carried out by the load balancer controller. The method and system can effectively avoid the single-point-of-failure and performance-bottleneck problems of traditional load balancing algorithms.

Description

Load balancing scheduling method and system for Kubernetes cluster
Technical Field
The invention relates to the technical field of communication, in particular to a load balancing scheduling method and system of a Kubernetes cluster.
Background
Kubernetes is an open-source container orchestration system that aims to simplify the deployment, scaling, and management of containerized applications. It provides a platform that automatically deploys, scales and manages containerized applications, supports multiple cloud platforms and physical server clusters, and can easily manage thousands of containers.
Kubernetes supports a variety of load balancing algorithms, and a suitable one can be selected according to the application's needs and scenario. The algorithms supported by Kubernetes by default include round-robin (polling), least connections, IP hashing and weighted round-robin. In complex scenarios, however, the default algorithms may not meet the service requirements well, and traditional load balancing algorithms generally suffer from single-point-of-failure and performance-bottleneck problems.
Disclosure of Invention
Based on the above, the embodiments of the present application provide a load balancing scheduling method and system for a Kubernetes cluster, which can effectively avoid the single-point-of-failure and performance-bottleneck problems of traditional load balancing algorithms.
In a first aspect, a load balancing scheduling method for a Kubernetes cluster is provided. The method includes:
building a load balancer controller in golang, and interacting with the API of the Kubernetes cluster to be scheduled to establish an interaction relationship; wherein the interaction relationship at least comprises handling the create, update and delete operations of the load balancer;
and setting the annotations field of the Service object in the Kubernetes cluster to the name of the load balancer controller, and carrying out load balancing scheduling through the load balancer controller.
Optionally, building the load balancer controller in golang and interacting with the API of the Kubernetes cluster to be scheduled to establish the interaction relationship includes: deploying the controller using a Deployment resource object.
Optionally, setting the annotations field of the Service object in the Kubernetes cluster to the name of the load balancer controller includes:
acquiring the Service object and the related Endpoints object;
calculating a target port and a target IP address list for the load balancer controller;
and selecting a target IP address according to the consistent-hash load balancing algorithm and routing the traffic to that address.
Optionally, selecting the target IP address according to the consistent-hash load balancing algorithm specifically includes:
constructing an identification code for each Pod according to a preset rule, then taking the identification code modulo 2^32 so that the service is mapped into a ring-shaped hash value space, and mapping the node IP address to a position on the ring;
when a request to distribute data traffic to a node is initiated, mapping the parameters attached to the data traffic to a position on the ring through the constructed identification code, then finding the nearest node clockwise from that position, and distributing the data traffic to that node;
when a new node joins, its position on the ring needs to be determined and the node inserted at the proper position; when a node leaves, it needs to be removed from the ring and the data traffic associated with that node redistributed to other nodes.
Optionally, after distributing the data traffic to the node, the method further comprises:
when a new node joins, its position on the ring is determined and the node is inserted at the corresponding position;
when a node leaves, it is removed from the ring and the data traffic associated with that node is redistributed to other nodes.
In a second aspect, a load balancing scheduling system for a Kubernetes cluster is provided. The system comprises:
an interaction module, used to build a load balancer controller in golang and interact with the API of the Kubernetes cluster to be scheduled to establish an interaction relationship; wherein the interaction relationship at least comprises handling the create, update and delete operations of the load balancer;
and a scheduling module, used to set the annotations field of the Service object in the Kubernetes cluster to the name of the load balancer controller and carry out load balancing scheduling through the load balancer controller.
Optionally, the interaction module building the load balancer controller in golang and interacting with the API of the Kubernetes cluster to be scheduled to establish the interaction relationship includes: deploying the controller using a Deployment resource object.
Optionally, the scheduling module setting the annotations field of the Service object in the Kubernetes cluster to the name of the load balancer controller includes:
acquiring the Service object and the related Endpoints object;
calculating a target port and a target IP address list for the load balancer controller;
and selecting a target IP address according to the consistent-hash load balancing algorithm and routing the traffic to that address.
Optionally, selecting the target IP address according to the consistent-hash load balancing algorithm specifically includes:
constructing an identification code for each Pod according to a preset rule, then taking the identification code modulo 2^32 so that the service is mapped into a ring-shaped hash value space, and mapping the node IP address to a position on the ring;
when a request to distribute data traffic to a node is initiated, mapping the parameters attached to the data traffic to a position on the ring through the constructed identification code, then finding the nearest node clockwise from that position, and distributing the data traffic to that node;
when a new node joins, its position on the ring needs to be determined and the node inserted at the proper position; when a node leaves, it needs to be removed from the ring and the data traffic associated with that node redistributed to other nodes.
Optionally, after distributing the data traffic to the node, the system further comprises:
when a new node joins, its position on the ring is determined and the node is inserted at the corresponding position;
when a node leaves, it is removed from the ring and the data traffic associated with that node is redistributed to other nodes.
The beneficial effects of the technical scheme provided by the embodiments of the present application include at least the following:
Reduced remapping compared with traditional hashing: in Kubernetes, the number of nodes may change at any time, and nodes going online or offline can disturb the load balancing algorithm and make the load uneven. Using a consistent hashing algorithm avoids this, since large-scale remapping occurs only when the number of nodes changes significantly.
More balanced load: the consistent hashing algorithm distributes requests evenly across the nodes according to their load conditions, avoiding the performance degradation caused by some nodes being overloaded.
High availability: the consistent hashing algorithm ensures that even if a node fails, overall load balancing is not affected, because its requests are redistributed to other available nodes.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It will be apparent to those skilled in the art from this disclosure that the drawings described below are merely exemplary and that other embodiments may be derived from the drawings provided without undue effort.
Fig. 1 is a flowchart of a load balancing scheduling method of a Kubernetes cluster provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of an initial mapping relationship in an embodiment of the present application;
FIG. 3 is a schematic diagram of mapping relationship after obtaining a mapping request according to an embodiment of the present application;
fig. 4 is a schematic diagram of mapping relationship after a service Pod is added in the embodiment of the present application;
fig. 5 is a schematic diagram of mapping relationship after Pod deletion in the embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In the description of the present invention, the terms "comprises," "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements but may include other steps or elements not expressly listed but inherent to such process, method, article, or apparatus or steps or elements added based on further optimization of the inventive concept.
The scheme uses a built-in mechanism of Kubernetes to define a custom load balancer. Specifically, a load balancing controller that implements consistent hashing in golang is deployed into the Kubernetes cluster, and the annotations field of the Service object is set to the name of that controller. Thus, when Kubernetes needs to create a load balancer for a Service object, the custom controller is invoked. Specifically, please refer to fig. 1, which is a flowchart of the load balancing scheduling method for a Kubernetes cluster provided in an embodiment of the present application; the method may include the following steps:
Step 101: build a load balancer controller in golang and interact with the API of the Kubernetes cluster to be scheduled, establishing an interaction relationship.
The interaction relationship at least covers handling the create, update and delete operations of the load balancer. One way such a controller can observe these events is sketched below.
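As an illustration only (the patent does not prescribe a specific client library), a golang controller would typically watch Service objects through the Kubernetes API. The following minimal sketch assumes the client-go library is used; `onServiceChange` and `onServiceDelete` are hypothetical handlers that rebuild or discard the controller's load-balancing state.

```go
package lbcontroller

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

// watchServices wires the controller to the cluster API: it registers
// handlers for Service create, update and delete events so the controller
// can keep its load-balancing state in sync with the cluster.
func watchServices(stopCh <-chan struct{}) error {
	// In-cluster configuration; the controller is assumed to run as a Pod
	// deployed via a Deployment resource object.
	config, err := rest.InClusterConfig()
	if err != nil {
		return err
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		return err
	}

	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	informer := factory.Core().V1().Services().Informer()

	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			onServiceChange(obj.(*corev1.Service))
		},
		UpdateFunc: func(_, newObj interface{}) {
			onServiceChange(newObj.(*corev1.Service))
		},
		DeleteFunc: func(obj interface{}) {
			// Deletes may arrive as a tombstone, so the type assertion is guarded.
			if svc, ok := obj.(*corev1.Service); ok {
				onServiceDelete(svc)
			}
		},
	})

	factory.Start(stopCh)
	factory.WaitForCacheSync(stopCh)
	return nil
}

// Hypothetical handlers: rebuild or discard the hash-ring state for a Service.
func onServiceChange(svc *corev1.Service) { /* rebuild the ring for svc */ }
func onServiceDelete(svc *corev1.Service) { /* drop the ring for svc */ }
```

An informer-based watch is only one possible design; the controller could equally poll the API or use a reconciler-style loop, as long as create, update and delete events are handled.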
In the embodiment of the present application, Kubernetes supports various load balancing algorithms, and a suitable algorithm can be selected according to the application's requirements and scenario. The following are several load balancing algorithms supported by Kubernetes:
Round-robin (polling): requests are sent to each back-end Pod in turn, starting from the first Pod and cycling through. This algorithm is simple but does not take the load of each Pod into account.
Least connections: requests are sent to the back-end Pod with the fewest current connections, ensuring balanced distribution of requests and avoiding the performance degradation caused by one Pod holding too many connections.
IP hashing: a hash value is calculated from the client's IP address, and requests with the same hash value are sent to the same back-end Pod. This ensures that requests from the same client always reach the same Pod, which suits scenarios that need to keep session state.
Weighted round-robin: requests are allocated according to the weight of each back-end Pod, so Pods with higher weights receive more requests. This algorithm suits scenarios where some Pods should handle more requests, for example because they have a higher resource configuration or better performance.
In Kubernetes, this kind of affinity behaviour is configured through the sessionAffinity attribute, which can be set in the Service resource.
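For reference, a Service with ClientIP session affinity (the built-in mechanism behind the IP-hashing style behaviour above) could be constructed as follows. This is a minimal sketch using the k8s.io/api types; the Service name, selector, ports and timeout are illustrative only.

```go
package lbcontroller

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// exampleService builds a Service whose sessionAffinity is ClientIP, so that
// requests from the same client IP keep reaching the same backend Pod.
func exampleService() *corev1.Service {
	timeout := int32(10800) // affinity timeout in seconds
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-svc"},
		Spec: corev1.ServiceSpec{
			Selector:        map[string]string{"app": "demo"},
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
			},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(8080),
			}},
		},
	}
}
```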
In this step, the load balancer controller is implemented in golang; it interacts with the Kubernetes API and handles the creation, update and deletion of the load balancer. The controller processes the following steps (a Go sketch of steps (1) and (2) follows the list):
(1) Acquire the Service object and the related Endpoints object;
(2) Calculate the target port and target IP address list for the load balancer;
(3) Select a target IP address according to the consistent-hash load balancing algorithm and route the traffic to that address.
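A minimal sketch of steps (1) and (2), assuming a client-go clientset is already available; the function name `backendAddresses` and the error handling are illustrative, not part of the patent.

```go
package lbcontroller

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// backendAddresses fetches a Service and its Endpoints object and returns the
// list of backend "IP:port" targets that the hash ring will choose from.
func backendAddresses(ctx context.Context, cs kubernetes.Interface, namespace, name string) ([]string, error) {
	// (1) Acquire the Service object and the Endpoints object of the same name.
	svc, err := cs.CoreV1().Services(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	eps, err := cs.CoreV1().Endpoints(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	_ = svc.Spec.Ports // the Service (frontend) ports; the targets below are backend Pod addresses

	// (2) Build the target IP address and port list from the ready endpoint addresses.
	var targets []string
	for _, subset := range eps.Subsets {
		for _, addr := range subset.Addresses {
			for _, port := range subset.Ports {
				targets = append(targets, fmt.Sprintf("%s:%d", addr.IP, port.Port))
			}
		}
	}
	return targets, nil
}
```

Step (3) is handled by the consistent-hash ring described next.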
The specific implementation process of consistent hashing comprises the following steps:
a. Mapping services:
A specific identification code is constructed for each Pod according to a certain rule, the identification code is then taken modulo 2^32, and the position of the service in the hash value interval is determined. Suppose there are three service Pods, Pod1, Pod2 and Pod3; their mapping relationship is shown in the initial mapping diagram of fig. 2.
b. Mapping requests and locating services:
When a request is initiated, the parameters attached to the request determine which service is actually called. Assume there are three requests, R1, R2 and R3; after the specific identification code is calculated from their parameters and the modulo operation is applied, the mapping relationship is as shown in fig. 3:
As can be seen from fig. 3, request R1 maps between 0 and Pod1, request R2 maps between Pod1 and Pod2, and request R3 maps between Pod2 and Pod3. The first Pod whose hash value is larger than the request's hash value is taken as the service actually called. That is, request R1 will call Pod1, request R2 will call Pod2, and request R3 will call Pod3.
c. Adding a Pod:
Assume a newly added service Pod4 maps just before Pod3, breaking one of the original mappings; fig. 4 gives a schematic diagram of the mapping relationship after the new service Pod is added.
Thus, request R3 will actually invoke service Pod4, while requests R1 and R2 are not affected.
d. Deleting a Pod:
Assume service Pod2 stops working; request R2 will then map to Pod3. Fig. 5 gives a schematic diagram of the mapping relationship after the Pod is deleted. The original R1 and R3 requests are not affected. When services are added or deleted, only a limited number of requests are affected; unlike simple modulo mapping, there is no need to adjust the global mapping whenever the set of services changes.
e. Balance and virtual nodes:
Assume that after hash mapping the three services Pod1, Pod2 and Pod3 happen to cut the ring into three equal parts, so the distribution of requests is basically balanced. However, this is not guaranteed, so virtual nodes are used for further optimization: in addition to hash-mapping the service's own address, some transformation is applied to derive virtual node addresses, so that the same service is mapped to multiple positions on the ring. A minimal Go sketch of such a ring, including virtual nodes, is given below.
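The ring can be sketched in Go roughly as follows. This is an illustrative implementation under assumed choices (CRC32 as the hash function over the 0 to 2^32-1 space and a fixed number of virtual nodes per backend); the patent itself does not fix these details.

```go
package lbcontroller

import (
	"fmt"
	"hash/crc32"
	"sort"
)

// HashRing is a minimal consistent-hash ring. Each backend (Pod address) is
// mapped onto the ring at several virtual-node positions so that load spreads evenly.
type HashRing struct {
	replicas int               // virtual nodes per backend
	keys     []uint32          // sorted ring positions
	nodes    map[uint32]string // ring position -> backend address
}

func NewHashRing(replicas int) *HashRing {
	return &HashRing{replicas: replicas, nodes: make(map[uint32]string)}
}

// hash maps an identification code onto the 0..2^32-1 ring.
func (r *HashRing) hash(key string) uint32 {
	return crc32.ChecksumIEEE([]byte(key))
}

// AddNode inserts a backend (and its virtual nodes) at its ring positions.
func (r *HashRing) AddNode(addr string) {
	for i := 0; i < r.replicas; i++ {
		h := r.hash(fmt.Sprintf("%s#%d", addr, i))
		r.nodes[h] = addr
		r.keys = append(r.keys, h)
	}
	sort.Slice(r.keys, func(i, j int) bool { return r.keys[i] < r.keys[j] })
}

// RemoveNode deletes a backend and all of its virtual nodes from the ring;
// traffic that hashed to it now falls through to the next node clockwise.
func (r *HashRing) RemoveNode(addr string) {
	kept := r.keys[:0]
	for _, h := range r.keys {
		if r.nodes[h] == addr {
			delete(r.nodes, h)
		} else {
			kept = append(kept, h)
		}
	}
	r.keys = kept
}

// GetNode maps a request key onto the ring and returns the first backend
// found clockwise from that position (wrapping around at the end).
func (r *HashRing) GetNode(requestKey string) (string, bool) {
	if len(r.keys) == 0 {
		return "", false
	}
	h := r.hash(requestKey)
	i := sort.Search(len(r.keys), func(i int) bool { return r.keys[i] >= h })
	if i == len(r.keys) {
		i = 0 // wrap around the ring
	}
	return r.nodes[r.keys[i]], true
}
```

CRC32 keeps the sketch dependency-free; any hash that spreads keys uniformly over 32 bits (MD5 or SHA truncated to 32 bits, FNV, and so on) would serve the same purpose.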
Step 102: set the annotations field of the Service object in the Kubernetes cluster to the name of the load balancer controller, and carry out load balancing scheduling through the load balancer controller.
Following this, the load balancer controller is deployed into the Kubernetes cluster; it may be deployed using a Deployment resource object. Finally, the annotations field of the Service object is set to the name of the load balancer controller. In this way the custom load balancing algorithm is combined with Kubernetes scheduling, and load balancing scheduling is carried out through the load balancing controller.
In an alternative embodiment of the present application, in implementing consistent hashing, the following steps are typically required:
Determining a hash function: in consistent hashing, a hash function is used to map data keys (e.g., service names, service ports, etc.) into a circular space. Common hash functions include MD5, SHA-1, SHA-256 and the like.
Determining a node position: the locations of nodes on the ring are selected, and typically a hash function may be used to map the node name or node IP address to the locations on the ring.
Data distribution: when data traffic needs to be distributed to a node, the same hash function can be used to map the data key to a location on the ring, then find the node closest to that location clockwise, and distribute the traffic to that node.
Node joining and leaving: when a new node is added, it is necessary to find the location of the new node on the ring and insert it into the appropriate location. When a node leaves, it needs to be removed from the ring and the data traffic associated with that node is redistributed to other nodes.
The load balancer controller is written in Golang: it acquires the Service object and the associated Endpoints object by interacting with the Kubernetes API, calculates the target port and target IP address list, and selects a target IP address according to the consistent hashing algorithm to route traffic to that address. When implementing consistent hashing, existing open-source libraries or self-written hash functions and node-selection algorithms may be used. Consistent hashing is thus a highly efficient load balancing algorithm that determines the location of the target service based on the key of the request (e.g., URL, request header, etc.): the request key is mapped to a fixed range by a hash algorithm, and each service is also mapped to a point within this range. When a service is added or withdrawn, only a small number of requests are affected, so the single-point-of-failure and performance-bottleneck problems of traditional load balancing algorithms can be effectively avoided. The sketch below shows how the ring and the endpoint lookup from the earlier sketches could be combined.
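As a usage illustration only, assuming the `backendAddresses` and `HashRing` sketches above, routing one request key could look like this. Rebuilding the ring on every call is for clarity; a real controller would cache the ring and update it from watch events.

```go
package lbcontroller

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// routeRequest builds the ring from the Service's current Endpoints and lets
// the request key (for example the client IP or the request URL) select the
// target backend address.
func routeRequest(ctx context.Context, cs kubernetes.Interface, namespace, service, requestKey string) (string, error) {
	targets, err := backendAddresses(ctx, cs, namespace, service)
	if err != nil {
		return "", err
	}
	ring := NewHashRing(100) // 100 virtual nodes per backend, an arbitrary choice
	for _, t := range targets {
		ring.AddNode(t)
	}
	target, ok := ring.GetNode(requestKey)
	if !ok {
		return "", fmt.Errorf("service %s/%s has no ready endpoints", namespace, service)
	}
	return target, nil
}
```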
The embodiment of the application also provides a load balancing scheduling system of the Kubernetes cluster. The system comprises:
an interaction module, used to build a load balancer controller in golang and interact with the API of the Kubernetes cluster to be scheduled to establish an interaction relationship; the interaction relationship at least comprises handling the create, update and delete operations of the load balancer;
and a scheduling module, used to set the annotations field of the Service object in the Kubernetes cluster to the name of the load balancer controller and carry out load balancing scheduling through the load balancer controller.
In an alternative embodiment of the present application, the interaction module building the load balancer controller in golang and interacting with the API of the Kubernetes cluster to be scheduled to establish the interaction relationship includes: deploying the controller using a Deployment resource object.
In an alternative embodiment of the present application, the scheduling module setting the annotations field of the Service object in the Kubernetes cluster to the name of the load balancer controller includes:
acquiring the Service object and the related Endpoints object;
calculating a target port and a target IP address list for the load balancer controller;
and selecting a target IP address according to the consistent-hash load balancing algorithm and routing the traffic to that address.
In an alternative embodiment of the present application, selecting the target IP address according to the consistent-hash load balancing algorithm specifically includes:
constructing an identification code for each Pod according to a preset rule, then taking the identification code modulo 2^32 so that the service is mapped into a ring-shaped hash value space, and mapping the node IP address to a position on the ring;
when a request to distribute data traffic to a node is initiated, mapping the parameters attached to the data traffic to a position on the ring through the constructed identification code, then finding the nearest node clockwise from that position, and distributing the data traffic to that node;
when a new node joins, its position on the ring needs to be determined and the node inserted at the proper position; when a node leaves, it needs to be removed from the ring and the data traffic associated with that node redistributed to other nodes.
In an alternative embodiment of the present application, after distributing the data traffic to the node, the system further comprises:
when a new node joins, its position on the ring is determined and the node is inserted at the corresponding position;
when a node leaves, it is removed from the ring and the data traffic associated with that node is redistributed to other nodes.
The load balancing scheduling system of the Kubernetes cluster provided in the embodiments of the present application is used to implement the load balancing scheduling method of the Kubernetes cluster; for specific limitations of the system, reference may be made to the limitations of the method described above, which are not repeated here. The various parts of the load balancing scheduling system described above may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, a processor in the device in hardware form, or stored in software form in a memory of the device, so that the processor can call and execute the operations corresponding to the above modules.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features involves no contradiction, it should be considered to fall within the scope of this description.
The above examples only represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the claims. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application is to be determined by the claims appended hereto.

Claims (10)

1. A load balancing scheduling method for a Kubernetes cluster, characterized by comprising the following steps:
building a load balancer controller in golang, and interacting with the API of the Kubernetes cluster to be scheduled to establish an interaction relationship; wherein the interaction relationship at least comprises handling the create, update and delete operations of the load balancer;
and setting the annotations field of the Service object in the Kubernetes cluster to the name of the load balancer controller, and carrying out load balancing scheduling through the load balancer controller.
2. The method of claim 1, wherein building the load balancer controller in golang and interacting with the API of the Kubernetes cluster to be scheduled to establish the interaction relationship comprises: deploying the controller using a Deployment resource object.
3. The method of claim 1, wherein setting the annotations field of the Service object in the Kubernetes cluster to the name of the load balancer controller comprises:
acquiring the Service object and the related Endpoints object;
calculating a target port and a target IP address list for the load balancer controller;
and selecting a target IP address according to the consistent-hash load balancing algorithm, and routing the traffic to the target IP address.
4. The method according to claim 3, wherein selecting the target IP address according to the consistent-hash load balancing algorithm specifically comprises:
constructing an identification code for each Pod according to a preset rule, then taking the identification code modulo 2^32 so that the service is mapped into a ring-shaped hash value space, and mapping the node IP address to a position on the ring;
when a request to distribute data traffic to a node is initiated, mapping the parameters attached to the data traffic to a position on the ring through the constructed identification code, then finding the nearest node clockwise from that position, and distributing the data traffic to that node;
when a new node joins, its position on the ring needs to be determined and the node inserted at the proper position; when a node leaves, it needs to be removed from the ring and the data traffic associated with that node redistributed to other nodes.
5. The method of claim 4, wherein after distributing data traffic to the node, the method further comprises:
when a new node joins, its position on the ring is determined and the node is inserted at the corresponding position;
when a node leaves, it is removed from the ring and the data traffic associated with that node is redistributed to other nodes.
6. A load balancing scheduling system for a Kubernetes cluster, the system comprising:
an interaction module, used to build a load balancer controller in golang and interact with the API of the Kubernetes cluster to be scheduled to establish an interaction relationship; wherein the interaction relationship at least comprises handling the create, update and delete operations of the load balancer;
and a scheduling module, used to set the annotations field of the Service object in the Kubernetes cluster to the name of the load balancer controller and carry out load balancing scheduling through the load balancer controller.
7. The system of claim 6, wherein the interaction module building the load balancer controller in golang and interacting with the API of the Kubernetes cluster to be scheduled to establish the interaction relationship comprises: deploying the controller using a Deployment resource object.
8. The system of claim 6, wherein the scheduling module setting the annotations field of the Service object in the Kubernetes cluster to the name of the load balancer controller comprises:
acquiring the Service object and the related Endpoints object;
calculating a target port and a target IP address list for the load balancer controller;
and selecting a target IP address according to the consistent-hash load balancing algorithm, and routing the traffic to the target IP address.
9. The system according to claim 8, wherein selecting the target IP address according to the consistent-hash load balancing algorithm specifically comprises:
constructing an identification code for each Pod according to a preset rule, then taking the identification code modulo 2^32 so that the service is mapped into a ring-shaped hash value space, and mapping the node IP address to a position on the ring;
when a request to distribute data traffic to a node is initiated, mapping the parameters attached to the data traffic to a position on the ring through the constructed identification code, then finding the nearest node clockwise from that position, and distributing the data traffic to that node;
when a new node joins, its position on the ring needs to be determined and the node inserted at the proper position; when a node leaves, it needs to be removed from the ring and the data traffic associated with that node redistributed to other nodes.
10. The system of claim 9, wherein after distributing data traffic to the node, the system further comprises:
when a new node joins, its position on the ring is determined and the node is inserted at the corresponding position;
when a node leaves, it is removed from the ring and the data traffic associated with that node is redistributed to other nodes.
CN202311603839.XA, filed 2023-11-28: Load balancing scheduling method and system for Kubernetes cluster, Pending, CN117376356A (en)

Priority Applications (1)

Application number CN202311603839.XA, priority date 2023-11-28, filing date 2023-11-28, title: Load balancing scheduling method and system for Kubernetes cluster

Publications (1)

Publication number CN117376356A, publication date 2024-01-09

Family ID: 89406191

Country Status (1)

CN: CN117376356A (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination