CN115086330B - Cross-cluster load balancing system - Google Patents


Info

Publication number
CN115086330B
CN115086330B CN202210674605.3A
Authority
CN
China
Prior art keywords
configuration
information
route
ingress
load
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210674605.3A
Other languages
Chinese (zh)
Other versions
CN115086330A (en)
Inventor
黄德光
薛浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Asiainfo Technologies China Inc
Original Assignee
Asiainfo Technologies China Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Asiainfo Technologies China Inc filed Critical Asiainfo Technologies China Inc
Priority to CN202210674605.3A priority Critical patent/CN115086330B/en
Publication of CN115086330A publication Critical patent/CN115086330A/en
Application granted granted Critical
Publication of CN115086330B publication Critical patent/CN115086330B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/14Routing performance; Theoretical aspects

Abstract

The embodiments of the application provide a cross-cluster load balancing system in the technical field of cloud native computing. The system comprises an ingress report controller (Ingress Reporter Controller), a soft load background, a soft load foreground, and a route execution unit. The ingress report controller, together with a custom resource definition (CRD), acquires the Ingress configuration information on a service cluster and reports it to the soft load background according to the configuration of the custom resource. The soft load background synchronizes the Ingress configuration information of each service cluster, merges and manages the Ingress configuration information of the same domain name, provides a default routing policy, supports the soft load foreground in modifying specific service routing policies, and supports the route execution unit in acquiring the corresponding routing configuration policy. The soft load foreground provides a configuration page for soft load operation and maintenance personnel so that they can make specific soft load policy adjustments. The route execution unit acquires the specific load policy configuration and completes the specific load balancing actions.

Description

Cross-cluster load balancing system
Technical Field
The application relates to the technical field of cloud native computing, and in particular to a cross-cluster load balancing system.
Background
Single-cluster load balancing in a cloud native environment is typically implemented in one of two ways: (1) using Ingress, for example by deploying an Ingress controller (Ingress Controller), to load-balance the Pods backing the same service within the cluster; (2) using an external LoadBalancer device together with the NodePort mode, in which a service is exposed externally through the node addresses of the K8S cluster and a designated port, and load balancing devices are usually deployed outside the cluster to balance traffic among the different nodes.
However, when an application is deployed across K8S clusters, K8S itself provides no capability to balance load between the different clusters; the inter-cluster load balancing policy is instead configured on hardware load balancers, which cannot perceive changes in the underlying endpoints and require all configuration to be managed manually.
Disclosure of Invention
The embodiments of the application provide a cross-cluster load balancing system that can solve the problem of cross-cluster load balancing. The technical solution is as follows:
according to one aspect of an embodiment of the present application, there is provided a cross-cluster load balancing system, the system including an ingress report controller (Ingress Reporter Controller), a soft load background, a soft load foreground, and a route execution unit; wherein,
the Ingress Reporter Controller is used for acquiring, together with a custom resource definition (CRD), the Ingress configuration information on a service cluster, and reporting the Ingress configuration information to the soft load background according to the configuration of the custom resource;
the soft load background is used for synchronizing the Ingress configuration information of each service cluster, merging and managing the Ingress configuration information of the same domain name, providing a default routing policy, supporting the soft load foreground in modifying specific service routing policies, and supporting the route execution unit in acquiring the corresponding routing configuration policy;
the soft load foreground is used for providing a configuration page for soft load operation and maintenance personnel so that they can make specific soft load policy adjustments;
the route execution unit is used for acquiring the specific load policy configuration and completing the specific load balancing actions.
In one possible implementation, the Ingress Reporter Controller queries and listens for target objects on a service cluster through APIs provided by the API Server of Kubernetes.
In one possible implementation, the Ingress Reporter Controller being configured to acquire, together with the custom resource definition CRD, the Ingress configuration information on a service cluster includes:
registering an Ingress report into the service cluster through the CRD;
deploying the Ingress Reporter Controller to the service cluster; after the Ingress report configuration information is delivered to the service cluster, the Ingress Reporter Controller starts to monitor the newly added and changed Ingress configuration information of the corresponding services in the service cluster.
In one possible implementation, the soft load background merging and managing the Ingress configuration information of the same domain name includes:
after receiving the Ingress configuration information reported by the Ingress Reporter Controller deployed on each service cluster, the soft load background merges and manages the Ingress configuration information by domain name, port, and path, or merges and manages it by the identification information of the service.
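The merge step described above can be sketched in a few lines. The function name and data shapes below are illustrative assumptions (the patent does not publish an implementation); they show how reports from several clusters collapse into one routing table keyed either by (domain, port, path) or by an application code.

```python
from collections import defaultdict

def merge_ingress_reports(reports, by="domain"):
    """Merge per-cluster Ingress reports into one routing table.

    reports: iterable of dicts such as
        {"cluster": "c1", "domain": "shop.example.com", "port": 443,
         "path": "/api", "backend": "10.0.1.5:8080", "app_code": "SHOP"}
    by: "domain" -> key on (domain, port, path); "app" -> key on app_code.
    """
    merged = defaultdict(list)
    for r in reports:
        key = r["app_code"] if by == "app" else (r["domain"], r["port"], r["path"])
        merged[key].append({"cluster": r["cluster"], "backend": r["backend"]})
    return dict(merged)

reports = [
    {"cluster": "c1", "domain": "shop.example.com", "port": 443,
     "path": "/api", "backend": "10.0.1.5:8080", "app_code": "SHOP"},
    {"cluster": "c2", "domain": "shop.example.com", "port": 443,
     "path": "/api", "backend": "10.0.2.7:8080", "app_code": "SHOP"},
]
# The same domain/port/path reported from two clusters collapses into one
# entry whose backend list spans both clusters.
table = merge_ingress_reports(reports)
```

Merging by the service identifier (`merge_ingress_reports(reports, by="app")`) instead keys all backends of the same application code together, which matches the second merge mode described in the text.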
In one possible implementation, the soft load background is further configured to support at least one of adding, deleting, modifying, or viewing the service configuration by the soft load foreground;
the service configuration comprises the configuration of service applications and the configuration of the service platform, wherein the configuration of service applications comprises at least one of the following: application label configuration, application back-end access information list configuration, application routing policy configuration, application traffic-steering policy configuration, and application attribution-grouping configuration;
the configuration of the service platform comprises at least one of the following: login authentication configuration, role permission control configuration, application batch switching configuration, application grouping settings, and attribution grouping of the route execution units.
In one possible implementation, the route execution unit acquiring the specific load policy configuration includes:
according to the correspondence between route groups and application groups, determining the application group corresponding to the route group in which the route execution unit is located, and obtaining the routing configuration policies of the service applications in that application group;
wherein each application group includes one or more service applications, each route group includes one or more route execution units, different application groups include different service applications, and different route groups include different route execution units.
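As a concrete illustration of the group correspondence described above, the following sketch (all names hypothetical) resolves a route execution unit to its route group, then to the corresponding application group, and finally to the routing policies of the service applications in that group.

```python
# Illustrative mappings: unit -> route group -> application group -> apps.
route_group_of_unit = {"unit-A": "group-1", "unit-B": "group-1",
                       "unit-C": "group-2", "unit-D": "group-2"}
app_group_of_route_group = {"group-1": "apps-retail", "group-2": "apps-billing"}
apps_in_group = {"apps-retail": ["shop-web", "shop-api"],
                 "apps-billing": ["billing-api"]}
route_policies = {"shop-web": {"strategy": "round_robin"},
                  "shop-api": {"strategy": "weighted"},
                  "billing-api": {"strategy": "round_robin"}}

def policies_for_unit(unit):
    """Return only the routing policies a given route execution unit needs."""
    app_group = app_group_of_route_group[route_group_of_unit[unit]]
    return {app: route_policies[app] for app in apps_in_group[app_group]}
```

Units in the same route group (unit-A and unit-B) pull the same policy set, while units in a different group pull a disjoint one, matching the "different load policy sets per group" behavior in the text.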
In one possible implementation, the route execution unit being configured to acquire the specific load policy configuration and complete the specific load balancing actions includes:
the specific routing policy is dynamically configured, so that each time routing is executed, the specific routing policy is queried and the routing address to be accessed is determined according to the query result.
In one possible implementation, dynamically configuring a specific routing policy includes:
the route execution unit periodically acquires the configuration information of the routing policy at a first preset time interval and determines from it whether new domain names need to be monitored or the configuration information of existing domain names has been modified; if new domain names need to be monitored, it monitors them, and if the configuration information of an existing domain name has been modified, it updates the original configuration information to the latest configuration information;
the route execution unit periodically monitors and checks the back-end address information at a second preset time interval according to the acquired configuration information, marking abnormal addresses as abnormal and marking recovered addresses as normal.
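The two periodic tasks above can be sketched as follows. The class, the probe callback, and the data layout are illustrative assumptions; real timers and network probes are replaced by direct calls so that only the control flow is shown.

```python
class RouteExecutionUnit:
    def __init__(self):
        self.domains = {}          # domain -> routing configuration
        self.backend_status = {}   # backend address -> "normal" | "abnormal"

    def refresh_config(self, latest):
        """First periodic task: adopt new domains, overwrite changed ones."""
        for domain, cfg in latest.items():
            self.domains[domain] = cfg

    def check_backends(self, probe):
        """Second periodic task: mark failed backends abnormal and
        recovered backends normal, based on a health probe."""
        for cfg in self.domains.values():
            for addr in cfg["backends"]:
                self.backend_status[addr] = "normal" if probe(addr) else "abnormal"

unit = RouteExecutionUnit()
unit.refresh_config({"shop.example.com":
                     {"backends": ["10.0.1.5:8080", "10.0.2.7:8080"]}})
# Simulate one backend failing the health check.
unit.check_backends(lambda addr: addr != "10.0.2.7:8080")
```

A subsequent `check_backends` call with a probe that succeeds again would flip the abnormal address back to "normal", which is the recovery marking described in the text.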
In one possible implementation, determining the routing address to be accessed according to the query result includes:
the route execution unit monitors the corresponding address information or domain name information according to the query result;
the route execution unit obtains the request context information and matches it against the queried routing policy to find an accessible routing address; if no routing address is matched, the default load balancing policy is queried, the routing address to be used is computed, and the request is redirected to that routing address.
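A minimal sketch of this lookup-then-match flow, under assumed data shapes: a specific rule matches on request headers, and when nothing matches, the unit falls back to a default policy (round-robin here is an illustrative choice, not mandated by the text).

```python
import itertools

def pick_route(ctx, specific_rules, default_backends, rr=itertools.count()):
    """Match the request context against specific rules; otherwise fall back
    to the default load balancing policy (round-robin over the backends)."""
    for rule in specific_rules:
        # A rule matches when every required header value appears in the request.
        if all(ctx.get("headers", {}).get(k) == v for k, v in rule["match"].items()):
            return rule["backend"]
    return default_backends[next(rr) % len(default_backends)]

# Hypothetical canary rule: requests tagged x-canary go to a dedicated backend.
gray_rules = [{"match": {"x-canary": "true"}, "backend": "10.0.9.1:8080"}]
defaults = ["10.0.1.5:8080", "10.0.2.7:8080"]

canary = pick_route({"headers": {"x-canary": "true"}}, gray_rules, defaults)
plain_1 = pick_route({"headers": {}}, gray_rules, defaults)
plain_2 = pick_route({"headers": {}}, gray_rules, defaults)
```

This also illustrates the "traffic-steering for special cases" configured through the soft load foreground: the specific rule steers tagged traffic, and all other requests spread across the default backends.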
In one possible implementation, the soft load background, soft load foreground, and route execution units all support containerized deployment and can be deployed in the form of containers (e.g., Docker).
In one possible implementation, the system provides an interface to other external systems, through which it automatically synchronizes target information with those systems.
The beneficial effects of the technical solution provided by the embodiments of the application are as follows: it enables unified load balancing policy management and control in multi-cluster deployment scenarios, thereby solving cross-cluster load balancing configuration; it supports acquiring, in a cloud-native way, the information of an application's cross-cluster deployment and automatically generating configuration items; it supports logical grouping of business services; and it supports replacing the original hardware load balancing scheduling capability with a soft-load approach. In addition, through the provided Ingress Reporter Controller, the system simplifies the filling-in of load balancing configuration items by operation and maintenance personnel, can automatically synchronize the Ingress information of the underlying business clusters, and, through its grouping capability, can provide diverse cluster-carrying capacity for cross-cluster load balancing.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings that are required to be used in the description of the embodiments of the present application will be briefly described below.
Fig. 1 is a schematic structural diagram of a cross-cluster load balancing system according to an embodiment of the present application;
fig. 2 is a schematic diagram of an application cross-cluster load balancing system in a cloud native environment provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of custom Ingress Reporter Controller provided in an embodiment of the present application;
fig. 4 is a schematic diagram of a cross-cluster load balancing system for interfacing with an external system according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below with reference to the drawings in the present application. It should be understood that the embodiments described below with reference to the drawings are exemplary descriptions for explaining the technical solutions of the embodiments of the present application, and the technical solutions of the embodiments of the present application are not limited.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and "comprising," when used in this application, specify the presence of stated features, information, data, steps, operations, elements, and/or components, but do not preclude the presence or addition of other features, information, data, steps, operations, elements, components, and/or groups thereof, all of which may be included in the present application. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein indicates at least one of the items defined by the term, e.g., "A and/or B" indicates implementation as "A", as "B", or as "A and B".
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Several terms which are referred to in this application are first introduced and explained:
kubernetes: abbreviated as K8s, is an abbreviation in which 8 replaces the 8 characters "ubernete" in the middle of the name. Is an open source for managing containerized applications on multiple hosts in a cloud platform, and the goal of Kubernetes is to make deploying containerized applications simple and efficient (powerful), and Kubernetes provides a mechanism for application deployment, planning, updating, and maintenance that supports automated deployment, large scale scalability, and application containerization management. When an application is deployed in a production environment, multiple instances of the application are typically deployed to load balance application requests. In Kubernetes, multiple containers may be created, one application instance running in each container, and then management, discovery, and access to the set of application instances is implemented through a built-in load balancing policy, where no complex manual configuration and processing by operation and maintenance personnel is required for these details.
Pod: an enhanced container. The Pod is the most basic deployment and scheduling element of a Kubernetes cluster, i.e., the smallest unit in which a microservice application runs; each Pod can include multiple containers and can be regarded as an extension or enhancement of the container. A Pod includes a main container and several auxiliary containers that together perform a specific function. A Pod is formed when multiple processes (a container is also an isolated process) are packed into one namespace. The application packages of the different processes within a Pod remain independent (each container has its own image).
Ingress: a resource type of k8s used to access applications inside k8s by domain name. It is, in effect, the entry through which the cluster is accessed from outside the Kubernetes cluster: Ingress provides an entry point for services in the Kubernetes cluster, and can provide load balancing, SSL termination, and name-based virtual hosting. Ingress implementations commonly used in production include Traefik, Nginx, HAProxy, and Istio. Typically, the Ingress is deployed on all Node nodes.
Ingress Controller: the Ingress controller can be understood as a listener that, by constantly interacting with kube-apiserver, perceives changes to the back-end services and Pods in real time; after obtaining the change information, the Ingress controller updates the reverse-proxy load balancer in combination with the Ingress configuration, thereby achieving service discovery. The Ingress Controller is not a built-in k8s component; "ingress-controller" is simply a generic term, and users may choose among different Ingress Controller implementations, of which only two are currently maintained by k8s: Google Cloud's GCE and ingress-nginx. In general, the Ingress Controller takes the form of a Pod in which a daemon program and a reverse-proxy program run (typically an nginx load balancer). The daemon constantly monitors changes in the cluster, generates configuration from the Ingress objects, and applies the new configuration to the reverse proxy; for example, nginx-ingress dynamically generates the nginx configuration, dynamically updates it, and reloads the program with the new configuration when needed.
The current K8S-native load balancing capability (Ingress) targets load balancing among the Pods within a cluster. An external hardware load balancer can handle cross-cluster load balancing configuration, but it cannot automatically discover the services inside the clusters and requires manual configuration. When an application (also called a business application) is deployed across K8S clusters, it must on the one hand be possible to schedule traffic with cross-cluster load balancing to access Pods in different clusters, and on the other hand be possible to automatically discover the application services (also called business application services or business services) in the different clusters. At the same time, the deployment of the load balancer must support elasticity and logical grouping of services.
Against this background, the present cross-cluster load balancing system is provided to solve the problem of cross-cluster load balancing configuration: it not only supports acquiring, in a cloud-native way, the information of an application's cross-cluster deployment and automatically generating configuration items, but also supports logical grouping of business services and the ability to replace the original hardware load balancing scheduling with a soft-load approach.
The technical solutions of the embodiments of the present application and technical effects produced by the technical solutions of the present application are described below by describing several exemplary embodiments. It should be noted that the following embodiments may be referred to, or combined with each other, and the description will not be repeated for the same terms, similar features, similar implementation steps, and the like in different embodiments.
Fig. 1 is a schematic structural diagram of a cross-cluster load balancing system provided in an embodiment of the present application. As shown in fig. 1, the system includes: an ingress report controller (Ingress Reporter Controller) 101, a soft load background 102, a soft load foreground 103, and a route execution unit 104; wherein,
the Ingress Reporter Controller is used for acquiring, together with a custom resource definition (CRD), the Ingress configuration information on a service cluster, and reporting the Ingress configuration information to the soft load background according to the configuration of the custom resource;
the soft load background is used for synchronizing the Ingress configuration information of each service cluster, merging and managing the Ingress configuration information of the same domain name, providing a default routing policy, supporting the soft load foreground in modifying specific service routing policies, and supporting the route execution unit in acquiring the corresponding routing configuration policy;
the soft load foreground is used for providing a configuration page for soft load operation and maintenance personnel so that they can make specific soft load policy adjustments;
the route execution unit is used for acquiring the specific load policy configuration and completing the specific load balancing actions.
Specifically, the Ingress Reporter Controller is deployed on the service K8S clusters, while the soft load background and the soft load foreground are each deployed independently. The soft load foreground, which may also be called the configuration foreground, supports setting an overall load balancing policy and supports setting independent load policies for special cases to complete specific traffic-steering operations. The route execution units are also deployed independently, support grouped deployment, and acquire different sets of load policies per group.
A CRD (Custom Resource Definition) is the resource extension approach most recommended by Kubernetes. Through the CRD technique, the system provided by the application can register a custom resource (the Ingress report) into the Kubernetes system and create, view, modify, and delete the custom resource just as with native resources (such as Pod or StatefulSet). In addition, the acquisition and taking-effect of the Ingress report configuration information (i.e., the reporting target address, such as the message queue address of the soft load background and the corresponding topic) can be completed through the CRD mechanism.
The cross-cluster load balancing system for the cloud-native scenario provided by the application can realize unified load balancing policy management and control in multi-cluster deployment scenarios, thereby solving cross-cluster load balancing configuration; it supports acquiring, in a cloud-native way, the information of an application's cross-cluster deployment and automatically generating configuration items; it supports logical grouping of business services; and it supports replacing the original hardware load balancing scheduling capability with a soft-load approach. In addition, through the provided Ingress Reporter Controller, the system simplifies the filling-in of load balancing configuration items by operation and maintenance personnel, can automatically synchronize the Ingress information of the underlying business clusters, and, through its grouping capability, can provide diverse cluster-carrying capacity for cross-cluster load balancing.
The following describes the technical scheme of the embodiment of the present application in detail:
the cross-cluster load balancing system in the embodiment of the application is a system for cross-cluster application load balancing in a cloud native environment, and mainly comprises four parts, as shown in fig. 2.
Ingress Reporter Controller (ingress report controller): deployed on the service K8S clusters, it is responsible for acquiring, together with a CRD (Custom Resource Definition), the Ingress configuration information on a service cluster (such as the service K8S cluster in fig. 2). For example, the Ingress configuration information on the service cluster can be acquired through the API Server, and the acquired Ingress configuration information is reported to the corresponding soft load background according to the configuration of the custom resource, as in (3) "report Ingress information" in fig. 2, where it is the Ingress Reporter Controller that reports the Ingress configuration information to the soft load background.
Soft load background: independently deployed, it synchronizes the Ingress configuration information of each service cluster, merges and manages the Ingress information of the same domain name ((4) "summarize Ingress information" in fig. 2), and provides a default routing policy to the route execution units; (6) "dynamically acquire soft routing information" in fig. 2 means that the route execution units dynamically acquire the soft routing information from the soft load background. Meanwhile, the soft load background also supports the configuration foreground in modifying specific service routing policies, and the route execution units obtain the corresponding routing configuration policies.
Soft load foreground: independently deployed, it provides a configuration page through which soft load operation and maintenance personnel adjust specific soft load policies, as in (5) "adjust soft routing policy" in fig. 2. The soft load foreground also supports setting an overall load balancing policy and supports setting independent load policies for special cases to complete specific traffic-steering operations.
Route execution unit: independently deployed, it acquires the specific load policy configuration and completes the specific load balancing actions. For example, a route execution unit can dynamically acquire the soft routing information (i.e., the routing policy) from the soft load background, as shown in (6) of fig. 2. In other words, the route execution unit obtains the specific load policy configuration from the acquired routing policy, performs load balancing accordingly, and completes the specific load balancing actions.
The route execution units further support grouped deployment and obtain different load policy sets per group, such as group 1 and group 2 in fig. 2: group 1 includes two different route execution units (route execution unit A and route execution unit B), group 2 includes another two (route execution unit C and route execution unit D), and the units in group 1 and group 2 may obtain different load policy sets for their respective groups. After a route execution unit obtains its load policy, it can perform policy-based routing accordingly, i.e., (8) "route according to policy" in fig. 2.
The following describes the specific working principle:
1. A business operator creates the CRD and the CRD Controller ((1) "create CRD and Controller" in fig. 2). A CRD (Custom Resource Definition) is the resource extension approach most recommended by Kubernetes. Through the CRD technique, the system of the application can register a custom resource (such as the Ingress report) into the Kubernetes system, i.e., the Ingress report is registered into the business cluster through the CRD, and the custom resource can be created, viewed, modified, and deleted just like native resources (such as Pod or StatefulSet). The acquisition and taking-effect of the Ingress report configuration information (i.e., the reporting target address, such as the message queue address of the soft load background and the corresponding topic) can be completed through the CRD mechanism.
The CRD is the secondary-development capability added after Kubernetes 1.7 for extending the Kubernetes API: new resource types can be added to the Kubernetes API through a CRD without modifying the Kubernetes source code or creating a custom API server, which greatly improves the extensibility of Kubernetes. The CRD is a way to extend the native Kubernetes API without coding, and is suitable for extending Kubernetes with custom interfaces and functions.
When a new Custom Resource Definition (CRD) is created, the Kubernetes API Server responds by creating a new RESTful resource path, either namespaced or cluster-wide as specified in the scope field of the CRD. As with existing built-in objects, deleting a namespace deletes all custom objects within it. The CRD itself does not distinguish namespaces and is available to all namespaces.
In practical applications, custom resources may be added to a cluster in two ways: (1) Custom Resource Definitions (CRDs), which are easier to use and require no coding; (2) API Aggregation, which requires coding but allows more customized implementations to be provided by way of an aggregation layer.
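For illustration only, a registration manifest for the "Ingress report" custom resource might look like the following, expressed here as a Python dict (the YAML form would be equivalent). The group name and field names are assumptions — the patent does not publish the actual CRD — but the spec fields mirror the reporting target address described above (the soft load background's message queue address and topic).

```python
# Hypothetical CRD manifest registering an "IngressReport" resource type.
ingress_report_crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "ingressreports.softload.example.com"},
    "spec": {
        "group": "softload.example.com",      # assumed API group
        "scope": "Namespaced",
        "names": {"plural": "ingressreports", "singular": "ingressreport",
                  "kind": "IngressReport"},
        "versions": [{
            "name": "v1", "served": True, "storage": True,
            "schema": {"openAPIV3Schema": {"type": "object", "properties": {
                "spec": {"type": "object", "properties": {
                    # Reporting target: e.g. the soft load background's
                    # message-queue address and the corresponding topic.
                    "mqAddress": {"type": "string"},
                    "topic": {"type": "string"},
                }},
            }}},
        }],
    },
}
```

Once such a manifest is applied, instances of `IngressReport` can be created, viewed, modified, and deleted like native resources, which is how the Ingress report configuration information is delivered to the controller.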
2. Implementation of the custom controller (e.g., the Ingress Reporter Controller). As shown in fig. 3, the custom controller queries and listens for objects such as Service and Ingress on a business cluster (e.g., K8S) through the APIs provided by the API Server of Kubernetes, where the implementation of the listening function can be simplified with k8s.io/client-go/informers. In other words, the Ingress Reporter Controller queries and listens for target objects on a service cluster through the APIs provided by the API Server of Kubernetes, where the target objects include, but are not limited to, Service, Ingress, etc.
3. With steps 1 and 2 implemented, business personnel deploy the Ingress Reporter Controller to the service cluster (as in (2) of fig. 2, where business personnel configure the Ingress). Then, after the Ingress report configuration information is issued to the service cluster through the command line, the Ingress Reporter Controller starts to monitor the newly added and changed Ingress configuration information of the service cluster, and after the business system completes its Ingress configuration, the Ingress Reporter Controller reports the configured information (including the newly added and changed Ingress configuration information) to the soft load background.
In other words, as can be seen from steps 1 and 2 above, the Ingress Reporter Controller acquiring, together with the custom resource definition CRD, the Ingress configuration information on the service cluster includes: registering the Ingress report into the service cluster through the CRD; then, after the Ingress Reporter Controller is deployed to the service cluster and the Ingress report configuration information has been delivered to the service cluster, the Ingress Reporter Controller starts to monitor the Ingress configuration change information of the corresponding services in the service cluster.
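The monitor-and-report loop of step 3 can be simulated in pure Python. A real controller would watch the Kubernetes API Server (e.g., via an informer); here a fake event stream stands in, and "reporting to the soft load background" is a plain callback, so only the control flow is shown.

```python
def run_reporter(events, report):
    """Forward newly added and changed Ingress events to the soft load
    background; events of other kinds are ignored."""
    for ev in events:
        if ev["type"] in ("ADDED", "MODIFIED") and ev["kind"] == "Ingress":
            report(ev["object"])   # e.g. publish to the configured MQ topic

received = []  # stand-in for the soft load background's message queue
events = [
    {"type": "ADDED", "kind": "Ingress",
     "object": {"host": "shop.example.com", "backend": "10.0.1.5:8080"}},
    {"type": "MODIFIED", "kind": "Ingress",
     "object": {"host": "shop.example.com", "backend": "10.0.1.6:8080"}},
    {"type": "ADDED", "kind": "Service", "object": {"name": "svc-a"}},  # ignored
]
run_reporter(events, received.append)
```

Both the new Ingress and its later modification reach the background, while the unrelated Service event does not — matching the "newly added and changed Ingress configuration information" scope described above.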
4. After receiving the Ingress configuration information reported by the Ingress Reporter Controller deployed on each service cluster, the soft-load background aggregates the different routing addresses of the same service. There are two aggregation approaches: one is to merge the same services based on domain name, port and path; the other is to associate or merge based on a service identifier (e.g., an application code), which requires interfacing with other systems (e.g., a CMDB) to obtain the association between the application code and the corresponding Ingress. Equivalently, the soft-load background performing merging management on the Ingress configuration information of the same domain name includes: after receiving the Ingress configuration information reported by the Ingress Reporter Controller deployed on each service cluster, the soft-load background performs merging management on the Ingress configuration information according to the domain name, port and path, or according to the identification information of the service (e.g., the application code); for example, if two pieces of reported Ingress configuration information contain the same service identifier (e.g., the same application code), the two pieces of Ingress configuration information are merged and managed together.
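The two aggregation approaches can be summarized as two different merge keys over the same reported records. The following is a minimal sketch under assumed field names (`domain`, `port`, `path`, `app_code`, `address` are all hypothetical); it is not the actual data model of the system.

```python
def merge_key_by_address(cfg):
    """Merge key for records exposing the same domain name, port and path."""
    return (cfg["domain"], cfg["port"], cfg["path"])

def merge_key_by_app(cfg):
    """Merge key for records carrying the same application code."""
    return cfg["app_code"]

def merge_reports(reports, key_fn):
    """Group reported Ingress records so records of the same service end
    up under one entry, each contributing its cluster routing address."""
    merged = {}
    for cfg in reports:
        merged.setdefault(key_fn(cfg), []).append(cfg["address"])
    return merged

# Two clusters report the same service under the same domain/port/path
# and the same application code.
reports = [
    {"domain": "shop.example.com", "port": 80, "path": "/",
     "app_code": "SHOP01", "address": "cluster1-ingress:80"},
    {"domain": "shop.example.com", "port": 80, "path": "/",
     "app_code": "SHOP01", "address": "cluster2-ingress:80"},
]
by_addr = merge_reports(reports, merge_key_by_address)
by_app = merge_reports(reports, merge_key_by_app)
```

Either key collapses the two reports into a single managed entry holding both routing addresses, which is exactly what cross-cluster load balancing needs downstream.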
5. The soft-load background supports the soft-load foreground in adding, deleting, modifying and querying service configurations. Specific service configurations include the configuration of service applications and the configuration of the service platform. The configuration of service applications includes, but is not limited to, the configuration of application labels, application back-end access information lists (e.g., in the form of Ingress address + port + path), application routing policies (e.g., the weight of each back-end access, the load balancing policy, etc.), independent traffic-steering policies (e.g., specific routing based on request headers, parameters, cookies, etc.), application attribution grouping information, and the like; the configuration of the service platform includes, but is not limited to, login authentication, role-based permission control, application batch switching, application grouping settings, attribution grouping of the route execution units, and the like. In addition, the soft-load background may also provide APIs for external systems (e.g., a CMDB) to call.
That is, the soft-load background is further configured to support at least one of adding, deleting, modifying or viewing service configurations by the soft-load foreground; the service configurations include the configuration of service applications and the configuration of the service platform, wherein the configuration of a service application includes at least one of the following: application label configuration, application back-end access information list configuration, application routing policy configuration, application traffic-steering policy configuration, and application attribution grouping configuration; the configuration of the service platform includes at least one of the following: login authentication configuration, role-based permission control configuration, application batch switching configuration, application grouping settings, and attribution grouping configuration of the route execution units.
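One concrete element of the application routing policy mentioned above is the weight assigned to each back-end access. A minimal sketch of weight-based selection, assuming a hypothetical `{backend_address: weight}` policy shape (the addresses and weights below are invented for illustration):

```python
import itertools

def weighted_backends(policy):
    """Expand a {backend_address: weight} routing policy into a
    repeating selection cycle proportional to the weights."""
    expanded = [addr for addr, w in sorted(policy.items())
                for _ in range(w)]
    return itertools.cycle(expanded)

# Example: the cluster-1 Ingress takes 3/4 of traffic, cluster-2 takes 1/4.
policy = {"ingress-c1.example.com:80/app": 3,
          "ingress-c2.example.com:80/app": 1}
picker = weighted_backends(policy)
first_four = [next(picker) for _ in range(4)]
```

A production route execution unit would typically use smooth weighted round-robin rather than a naive expansion, but the configuration surface (address plus weight) is the same.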
6. The soft-load foreground provides a Web access entrance, which can support user login authentication, role-based permission control, application label modification, application grouping settings, application batch switching, attribution grouping of the route execution units, general application routing policy configuration, independent application traffic-steering policy configuration, and the like. As shown in (5) of FIG. 2, the soft-load operator may adjust the soft routing policy via the soft-load foreground.
Application grouping refers to the grouping management of a plurality of service applications (e.g., 2, 3, or more service applications). For example, if there are 3 service applications, namely service application A, service application B and service application C, then: service application A and service application B may be grouped together (e.g., denoted as application group 1) and service application C placed in another group (e.g., denoted as application group 2); alternatively, service application A and service application C may be grouped together (e.g., denoted as application group 1) and service application B placed in another group (e.g., denoted as application group 2); of course, service application A, service application B and service application C may also be grouped in other possible ways according to requirements, which is not limited in the embodiments of the present application.
The attribution grouping of the route execution units refers to the grouping management or grouped deployment of the route execution units. For example, if there are 3 route execution units, namely route execution unit A, route execution unit B and route execution unit C, then: route execution unit A and route execution unit C may be grouped together (e.g., denoted as route group 1) and route execution unit B placed in another group (e.g., denoted as route group 2); alternatively, route execution unit B and route execution unit C may be grouped together (e.g., denoted as route group 1) and route execution unit A placed in another group (e.g., denoted as route group 2); of course, the route execution units may also be grouped in other possible ways according to requirements.
7. The specific function of the route execution unit is to execute a specific routing policy after user access, which can be realized through dynamic configuration: for example, each time a routing policy is executed, the specific routing policy is queried, and its execution is determined by the matching conditions (i.e., the query conditions or query results of the routing policy). This can be divided into two processes: one is the dynamic validation process of the configuration, and the other is the process of determining the routing information for a user access. The dynamic validation process of the configuration can be understood as the process of dynamically configuring a specific routing policy, and the process of determining the routing information for a user access can be understood as the process of determining the routing address to be accessed from the query result of the routing policy. In other words, the functions specifically performed by the route execution unit include: dynamically configuring a specific routing policy, so that each time a routing policy is executed, the specific routing policy is queried and the routing address to be accessed is determined according to the query result.
Considering that not only can the service applications (which may also be referred to as applications or services) be managed in groups, i.e., grouping management of service applications is supported, for example, dividing the plurality of service applications mentioned above (service application A, service application B, and service application C) into different application groups, but the plurality of route execution units mentioned above (route execution unit A, route execution unit B, and route execution unit C) can also be divided into different route groups (i.e., group 1 and group 2 in FIG. 2); thus, each time a specific route configuration policy is acquired or queried, the route execution unit may perform the following:
According to the correspondence (or mapping relationship) between the route groups and the application groups, the application group corresponding to the route group where the route execution unit is located is determined, and the route configuration policies of the service applications in that application group are acquired or queried; each application group includes one or more service applications, each route group includes one or more route execution units, different application groups include different service applications, and different route groups include different route execution units.
By grouping the route execution units into route groups and grouping the service applications into application groups, the route execution unit can acquire route configuration policies on a per-group basis: it queries the corresponding application group according to the correspondence between the route groups and the application groups, and only acquires the routing policy configurations of the service applications in that application group. As a result, route execution units in different route groups acquire different route configuration policies, thereby achieving isolation.
In one example, suppose there are route group 1 (including route execution unit A and route execution unit B) and route group 2 (including route execution unit C and route execution unit D), as well as application group 1 (including service application A) and application group 2 (including service application B and service application C), with a one-to-one correspondence between route group 1 and application group 1 and between route group 2 and application group 2:
Each time route execution unit A acquires or queries a route configuration policy, it is first associated with application group 1 according to the correspondence between route group 1 (where route execution unit A is located) and application group 1, and then acquires or queries the route configuration policy of service application A in application group 1. Similarly, each time route execution unit C acquires or queries a route configuration policy, it is first associated with application group 2 according to the correspondence between route group 2 (where route execution unit C is located) and application group 2, and then acquires or queries the route configuration policies of service application B and service application C in application group 2.
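The group-based isolation in this example reduces to two table lookups: unit to route group, route group to application group. A minimal sketch, using the exact grouping of the example above (the dictionary shape is a hypothetical representation, not the system's actual data model):

```python
# Which route group each route execution unit belongs to.
ROUTE_GROUP_OF_UNIT = {"A": "route-group-1", "B": "route-group-1",
                       "C": "route-group-2", "D": "route-group-2"}
# One-to-one correspondence between route groups and application groups.
APP_GROUP_FOR_ROUTE_GROUP = {"route-group-1": "app-group-1",
                             "route-group-2": "app-group-2"}
# Which service applications each application group contains.
APPS_IN_GROUP = {"app-group-1": ["service-app-A"],
                 "app-group-2": ["service-app-B", "service-app-C"]}

def policies_visible_to(unit):
    """Return the service applications whose route configuration policies
    this route execution unit is allowed to acquire."""
    route_group = ROUTE_GROUP_OF_UNIT[unit]
    app_group = APP_GROUP_FOR_ROUTE_GROUP[route_group]
    return APPS_IN_GROUP[app_group]
```

Unit A only ever sees service application A's policies, while unit C sees those of service applications B and C, which is the isolation property the text describes.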
In one possible implementation, the dynamic validation process of the configuration may include the following steps:
step 1, a route execution unit obtains configuration information of a route strategy; the process of obtaining the configuration information of the routing policy also determines an application packet corresponding to the routing packet where the routing execution unit is located according to the corresponding relationship (or mapping relationship) between the routing packet and the application packet, and obtains or queries the routing configuration policy of the service application in the application packet, where specific content is referred to in the foregoing, and details are not repeated herein.
Step 2, judging whether the new domain name needs to be monitored or the configuration information in the original domain name is modified according to the configuration information;
step 3, if the new domain name needs to be monitored, the monitoring of the new domain name is completed;
step 4, if the configuration information of the original domain name is confirmed to be modified, the original configuration information is updated to be in the latest state (such as the latest configuration information);
step 5, executing the steps 1 to 4 at regular time;
step 6, monitoring and checking the address of the rear end aiming at the acquired configuration information;
and 7, marking the abnormal address (such as abnormal marking) and marking the address information of recovering to be normal (such as normal marking).
And 8, executing the steps 6 to 7 at regular time.
It should be noted that the dynamic validation process of the configuration described in steps 1 to 8 above may also be expressed as the process of dynamically configuring a specific routing policy, which may be as follows: the route execution unit periodically acquires the configuration information of the routing policy at a first preset time interval and determines, according to the configuration information, whether a new domain name needs to be monitored or the configuration information of an existing domain name has been modified; if a new domain name needs to be monitored, the new domain name is monitored, and if the configuration information of an existing domain name has been modified, the original configuration information is updated to the latest configuration information. The route execution unit also periodically monitors and checks the back-end address information at a second preset time interval according to the acquired configuration information, marking abnormal addresses and marking addresses that have recovered to normal.
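The two periodic loops above can be sketched as two pure functions: one merges freshly fetched per-domain configuration into local state (steps 2 to 4), the other probes back-end addresses and flags them (steps 6 to 7). The domain names, addresses, and the stand-in probe below are hypothetical; a real unit would issue an actual health-check request.

```python
def apply_config_update(current, fetched):
    """Steps 2-4 of the validation loop: bring local per-domain
    configuration up to date with a freshly fetched copy."""
    for domain, cfg in fetched.items():
        if domain not in current:
            current[domain] = cfg   # new domain: start monitoring it
        elif current[domain] != cfg:
            current[domain] = cfg   # existing domain: refresh its config
    return current

def mark_backends(addresses, probe):
    """Steps 6-7: probe each back-end address and flag it."""
    return {addr: ("normal" if probe(addr) else "abnormal")
            for addr in addresses}

state = {"shop.example.com": {"backends": ["10.0.0.1:80"]}}
fetched = {"shop.example.com": {"backends": ["10.0.0.1:80", "10.0.0.2:80"]},
           "pay.example.com": {"backends": ["10.0.1.1:80"]}}
state = apply_config_update(state, fetched)

down = {"10.0.0.2:80"}   # stand-in for a failed health check
marks = mark_backends(state["shop.example.com"]["backends"],
                      lambda addr: addr not in down)
```

In the system these two functions would be driven by timers at the first and second preset time intervals respectively.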
The process of determining the routing information for a user access may include the following steps:
Step 1, a user sends a request to a specific route execution unit;
Step 2, the route execution unit listens on the corresponding address information or domain name information;
Step 3, the route execution unit acquires the context information of the user request, including the domain name + port + address, request headers, cookies, URL parameters, submitted data, etc.;
Step 4, matching a back-end address (i.e., a routing address) accessible to the user according to the configured load policy (scenario-specific traffic steering);
Step 5, if no corresponding routing address is matched, querying the default load balancing policy and calculating the back-end address (i.e., routing address) to be used;
Step 6, redirecting the user to the calculated back-end address.
The process of determining the routing information for a user access described in steps 1 to 6 above may also be expressed as the process of determining the routing address to be accessed according to the query result of the routing policy, which may be as follows: the route execution unit determines the corresponding address information or domain name information according to the query result; the route execution unit then acquires the request context information and matches an accessible routing address according to the queried routing policy; if no corresponding routing address is matched, the default load balancing policy is queried and the routing address to be used is calculated, so that the request is redirected to that routing address.
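Steps 4 and 5 of this flow (steering rules first, default load balancing as fallback) can be sketched as follows. The rule shape, cookie name, addresses, and the deterministic byte-sum fallback are all illustrative assumptions, not the system's actual policy format.

```python
def choose_backend(request, steering_rules, default_backends):
    """Step 4: match the request context (header / cookie / parameter)
    against configured traffic-steering rules. Step 5: otherwise fall
    back to a default policy and compute a backend deterministically."""
    for field, expected, backend in steering_rules:
        if request.get(field) == expected:
            return backend
    # Default policy sketched as a deterministic spread over the
    # backend list, keyed on the client IP.
    key = request.get("client_ip", "")
    idx = sum(key.encode()) % len(default_backends)
    return default_backends[idx]

# A gray-release steering rule based on a cookie value.
rules = [("cookie_user_tier", "gray", "10.0.2.1:80")]
backends = ["10.0.0.1:80", "10.0.0.2:80"]

gray = choose_backend({"cookie_user_tier": "gray"}, rules, backends)
plain = choose_backend({"client_ip": "10.1.2.3"}, rules, backends)
```

A request matching a steering rule is pinned to the rule's backend; any other request gets a stable backend from the default policy, after which step 6 redirects the user there.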
When an external system already manages the Ingress grouping information and application grouping information of the same application across different clusters, the cross-cluster load balancing system of the present application supports interfacing with that external system, such as a CMDB (Configuration Management Database), as shown in FIG. 4. The difference between the cross-cluster load balancing system interfaced with an external system (i.e., the system shown in FIG. 4) and the original cross-cluster load balancing system (i.e., the system shown in FIG. 2) is that part of the information (e.g., application labels and application grouping information) can be synchronized automatically, without requiring the soft-load operator to maintain it manually through the page. That is, the cross-cluster load balancing system of the present application has an interface to other external systems, through which it performs automatic synchronization of target information with those systems, where the target information includes, but is not limited to, application labels, application grouping information, and the like.
It should be noted that, in the cross-cluster load balancing system of the present application, the soft-load foreground, the soft-load background, the route execution units and the like all support containerized deployment and may be deployed by means of a Deployment.
Therefore, the embodiments of the present application provide a solution for cross-cluster load balancing in a cloud-native scenario: unified load balancing policy management and control can be achieved when an application is deployed across multiple clusters; the Ingress Reporter Controller simplifies the filling-in of load balancing configuration items by operation and maintenance personnel and automatically synchronizes the Ingress information of the underlying service clusters; the grouping capability enables diversified, group-based bearing of cross-cluster load balancing; external systems can be interfaced with, further simplifying the filling-in of load balancing configuration items; and the entire cross-cluster load balancing system supports cloud-native deployment.
The foregoing is merely an optional implementation of the application scenarios of the present application. It should be noted that, for those skilled in the art, other similar implementations based on the technical ideas of the present application, adopted without departing from those technical ideas, also fall within the protection scope of the embodiments of the present application.

Claims (11)

1. A cross-cluster load balancing system, characterized by comprising an ingress report controller (Ingress Reporter Controller), a soft-load background, a soft-load foreground and a route execution unit; wherein,
the Ingress Reporter Controller is configured to acquire, together with a custom resource definition (CRD), the Ingress configuration information on a service cluster, and report the Ingress configuration information to the soft-load background according to the configuration of the custom resource;
the soft-load background is configured to synchronize the Ingress configuration information of each service cluster, perform merging management on the Ingress configuration information of the same domain name, provide a default routing policy, support the soft-load foreground in modifying specific service routing policies, and support the route execution unit in acquiring the corresponding route configuration policy;
the soft-load foreground is configured to provide a configuration page for soft-load operation and maintenance personnel, so that the soft-load operation and maintenance personnel can perform specific soft-load policy adjustments;
the route execution unit is configured to acquire a specific load policy configuration and complete specific load balancing actions.
2. The system of claim 1, wherein the Ingress Reporter Controller queries and listens for target objects on the service cluster through an API provided by an API-Server of Kubernetes.
3. The system according to claim 1 or 2, wherein the Ingress Reporter Controller being configured to acquire, together with a custom resource definition CRD, the Ingress configuration information on the service cluster comprises:
registering the Ingress report into the service cluster through the CRD;
after the Ingress Reporter Controller is deployed to the service cluster and the configuration information of the Ingress report is sent to the service cluster, the Ingress Reporter Controller starts to monitor the new addition information and the change information of the Ingress configuration of the corresponding service in the service cluster.
4. The system according to any one of claims 1-3, wherein the soft-load background performing merging management on the Ingress configuration information of the same domain name comprises:
after receiving the Ingress configuration information reported by the Ingress Reporter Controller deployed on each service cluster, the soft-load background performs merging management on the Ingress configuration information according to the domain name, port and path, or according to the service identification information.
5. The system of any of claims 1-3, wherein the soft-load background is further configured to support at least one of an addition, a deletion, a modification, or a view of a service configuration by the soft-load foreground;
the service configuration comprises the configuration of a service application and the configuration of a service platform, wherein the configuration of the service application comprises at least one of the following: application label configuration, application back-end access information list configuration, application routing policy configuration, application traffic-steering policy configuration, and application attribution grouping configuration;
the configuration of the service platform comprises at least one of the following: login authentication configuration, role-based permission control configuration, application batch switching configuration, application grouping settings, and attribution grouping configuration of the route execution units.
6. The system according to claim 1 or 5, wherein the route execution unit acquiring a specific load policy configuration comprises:
according to the correspondence between route groups and application groups, determining the application group corresponding to the route group where the route execution unit is located, and acquiring the route configuration policies of the service applications in the application group;
wherein each application group includes one or more service applications, each route group includes one or more route execution units, different application groups include different service applications, and different route groups include different route execution units.
7. The system according to claim 1 or 6, wherein the route execution unit being configured to acquire a specific load policy configuration and complete specific load balancing actions comprises:
the specific routing strategy is dynamically configured, so that each time the routing strategy is executed, the specific routing strategy is queried, and the routing address to be accessed is determined according to the query result.
8. The system of claim 7, wherein the dynamically configuring a particular routing policy comprises:
the route execution unit periodically acquires configuration information of a routing policy at a first preset time interval and determines, according to the configuration information, whether a new domain name needs to be monitored or the configuration information of an existing domain name has been modified, wherein if a new domain name needs to be monitored, the new domain name is monitored, and if the configuration information of an existing domain name has been modified, the original configuration information is updated to the latest configuration information;
the route execution unit periodically monitors and checks the back-end address information according to the acquired configuration information at a second preset time interval, marking abnormal addresses and marking address information that has recovered to normal.
9. The system of claim 7, wherein the determining the routing address to be accessed based on the query result comprises:
the route execution unit monitors corresponding address information or domain name information according to the query result;
and the route execution unit acquires the request context information and matches an accessible routing address according to the queried routing policy; if no corresponding routing address is matched, the default load balancing policy is queried and the routing address to be used is calculated, so that the request is redirected to that routing address.
10. The system of any of claims 1-9, wherein the soft-load background, the soft-load foreground, and the route execution unit each support containerized deployment and are capable of being deployed by means of a Deployment.
11. The system according to any of claims 1-9, characterized in that the system has an interface to another external system, through which the system performs automatic synchronization of target information with the external system.
CN202210674605.3A 2022-06-14 2022-06-14 Cross-cluster load balancing system Active CN115086330B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210674605.3A CN115086330B (en) 2022-06-14 2022-06-14 Cross-cluster load balancing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210674605.3A CN115086330B (en) 2022-06-14 2022-06-14 Cross-cluster load balancing system

Publications (2)

Publication Number Publication Date
CN115086330A CN115086330A (en) 2022-09-20
CN115086330B true CN115086330B (en) 2024-03-01

Family

ID=83252028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210674605.3A Active CN115086330B (en) 2022-06-14 2022-06-14 Cross-cluster load balancing system

Country Status (1)

Country Link
CN (1) CN115086330B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11954525B1 (en) 2022-09-21 2024-04-09 Zhejiang Lab Method and apparatus of executing collaborative job for spark faced to multiple K8s clusters
CN115242877B (en) * 2022-09-21 2023-01-24 之江实验室 Spark collaborative computing and operating method and device for multiple K8s clusters
CN115883258B (en) * 2023-02-15 2023-08-01 北京微步在线科技有限公司 IP information processing method, device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111343037A (en) * 2019-08-19 2020-06-26 海通证券股份有限公司 Flow monitoring method and device for cloud platform load according to application, and computer equipment
CN112995273A (en) * 2021-01-28 2021-06-18 腾讯科技(深圳)有限公司 Network call-through scheme generation method and device, computer equipment and storage medium
CN113094182A (en) * 2021-05-18 2021-07-09 联想(北京)有限公司 Load balancing processing method and device for service and cloud server
WO2021205212A1 (en) * 2020-04-08 2021-10-14 Telefonaktiebolaget Lm Ericsson (Publ) Traffic controller for cloud native deployment
CN113572831A (en) * 2021-07-21 2021-10-29 重庆星环人工智能科技研究院有限公司 Communication method between Kubernetes clusters, computer equipment and medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11863352B2 (en) * 2020-07-30 2024-01-02 Vmware, Inc. Hierarchical networking for nested container clusters

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111343037A (en) * 2019-08-19 2020-06-26 海通证券股份有限公司 Flow monitoring method and device for cloud platform load according to application, and computer equipment
WO2021205212A1 (en) * 2020-04-08 2021-10-14 Telefonaktiebolaget Lm Ericsson (Publ) Traffic controller for cloud native deployment
CN112995273A (en) * 2021-01-28 2021-06-18 腾讯科技(深圳)有限公司 Network call-through scheme generation method and device, computer equipment and storage medium
CN113094182A (en) * 2021-05-18 2021-07-09 联想(北京)有限公司 Load balancing processing method and device for service and cloud server
CN113572831A (en) * 2021-07-21 2021-10-29 重庆星环人工智能科技研究院有限公司 Communication method between Kubernetes clusters, computer equipment and medium

Also Published As

Publication number Publication date
CN115086330A (en) 2022-09-20

Similar Documents

Publication Publication Date Title
CN115086330B (en) Cross-cluster load balancing system
KR102425996B1 (en) Multi-cluster Ingress
CN112087312B (en) Method, device and equipment for providing edge service
CN113826363B (en) Consistent route advertisement between redundant controllers in a global network access point
EP3046288B1 (en) Virtual network function network elements management method, device and system
US8200789B2 (en) Method, system and program product for automated topology formation in dynamic distributed environments
TWI724106B (en) Business flow control method, device and system between data centers
US9999030B2 (en) Resource provisioning method
US11928514B2 (en) Systems and methods providing serverless DNS integration
US10666508B2 (en) Unified software defined networking configuration management over multiple hosting environments
CN115380513A (en) Network management system for federated multi-site logical networks
WO2014024863A1 (en) Load distribution method taking into account each node in multi-level hierarchy
CN102412978A (en) Method for carrying out network configuration for VM and system thereof
CN110474802B (en) Equipment switching method and device and service system
CN113778623B (en) Resource processing method and device, electronic equipment and storage medium
US11627010B2 (en) Method to support redundancy switching of virtual MAC cores
CN103945000A (en) Load balance method and load balancer
Yang et al. Algorithms for fault-tolerant placement of stateful virtualized network functions
EP3534578B1 (en) Resource adjustment method, device and system
US20190089773A1 (en) Segmentation server cluster for managing a segmentation policy
CN107257295B (en) Scheduling method of distributed architecture software defined network controller
CN101534255A (en) A method and device for realizing oriented processing of certain request
CN112655185B (en) Apparatus, method and storage medium for service allocation in a software defined network
CN116319963A (en) Service management method, system, terminal equipment and storage medium
CN108965494A (en) Data transmission method and device in data system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant