CN115086330A - Cross-cluster load balancing system - Google Patents

Cross-cluster load balancing system

Info

Publication number
CN115086330A
Authority
CN
China
Prior art keywords
routing
configuration
ingress
service
application
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210674605.3A
Other languages
Chinese (zh)
Other versions
CN115086330B (en)
Inventor
黄德光
薛浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Asiainfo Technologies China Inc
Original Assignee
Asiainfo Technologies China Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Asiainfo Technologies China Inc filed Critical Asiainfo Technologies China Inc
Priority to CN202210674605.3A priority Critical patent/CN115086330B/en
Publication of CN115086330A publication Critical patent/CN115086330A/en
Application granted granted Critical
Publication of CN115086330B publication Critical patent/CN115086330B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/14 Routing performance; Theoretical aspects

Abstract

The embodiment of the application provides a cross-cluster load balancing system, and relates to the technical field of cloud native computing. The system comprises an Ingress Reporter Controller, a soft load background, a soft load foreground and a route execution unit. The Ingress Reporter Controller is used for acquiring Ingress configuration information on a service cluster together with a custom resource definition (CRD) and reporting the Ingress configuration information to the soft load background according to the configuration of the custom resource CustomResource; the soft load background is used for synchronizing Ingress configuration information of each service cluster, merging and managing Ingress configuration information with the same domain name, providing a default routing strategy, supporting the soft load foreground to modify a specific service routing strategy and supporting the route execution unit to acquire a corresponding routing configuration strategy; the soft load foreground is used for providing a configuration page for soft load operation and maintenance personnel so as to adjust a specific soft load strategy; the route execution unit is used for acquiring a specific load strategy configuration and completing specific load balancing actions.

Description

Cross-cluster load balancing system
Technical Field
The application relates to the technical field of cloud native computing, and in particular to a cross-cluster load balancing system.
Background
Single-cluster load balancing in a cloud native environment is typically implemented in one of the following ways: (1) using Ingress to provide the load balancing capability, for example by deploying an Ingress Controller to complete load balancing across the Pods corresponding to the same service in the cluster; (2) using an external LoadBalancer device together with the NodePort mode, where in the NodePort mode an externally provided service can be accessed through a node address and a specified port of the K8S cluster, and load balancing devices are usually deployed outside the cluster to complete load balancing among different nodes.
However, when an application is deployed across K8S clusters, K8S does not provide a native capability to control load balancing among different clusters. The load balancing strategy among clusters is therefore configured through hardware load balancing, which cannot perceive changes of the underlying endpoints and requires all configurations to be maintained manually.
Disclosure of Invention
The embodiment of the application provides a cross-cluster load balancing system, which can solve the problem of cross-cluster load balancing. The technical scheme is as follows:
according to an aspect of an embodiment of the present application, a cross-cluster load balancing system is provided, where the system includes an Ingress Reporter Controller, a soft load background, a soft load foreground, and a route execution unit; wherein:
the Ingress Reporter Controller is used for acquiring Ingress configuration information on the service cluster together with the custom resource definition (CRD) and reporting the Ingress configuration information to the soft load background according to the configuration of the custom resource CustomResource;
the soft load background is used for synchronizing Ingress configuration information of each service cluster, merging and managing the Ingress configuration information with the same domain name, providing a default routing strategy, supporting the soft load foreground to modify a specific service routing strategy and supporting the routing execution unit to acquire a corresponding routing configuration strategy;
the soft load foreground is used for providing a configuration page for the soft load operation and maintenance personnel so as to adjust a specific soft load strategy;
the route execution unit is used for acquiring specific load strategy configuration and completing specific load balancing actions.
In a possible implementation manner, the Ingress Reporter Controller queries and monitors the target object on the service cluster through an API provided by the API-Server of Kubernetes.
In a possible implementation manner, the Ingress Reporter Controller being configured to be responsible for acquiring Ingress configuration information on the service cluster together with the custom resource definition CRD includes:
registering an Ingress Reporter to the service cluster through the CRD;
after the Ingress Reporter Controller is deployed to the service cluster and the configuration information of the Ingress Reporter is sent to the service cluster, the Ingress Reporter Controller starts to monitor the new information and the change information of the Ingress configuration of the corresponding service in the service cluster.
In a possible implementation manner, the soft load background performs merge management on Ingress configuration information of the same domain name, including:
after receiving the Ingress configuration information reported by the Ingress Reporter Controller deployed on each service cluster, the soft load background merges and manages the Ingress configuration information according to the domain name, the port and the path, or merges and manages the Ingress configuration information according to the identification information of the service.
In a possible implementation manner, the soft load background is further configured to support the soft load foreground to perform at least one of adding, deleting, modifying, or checking the service configuration;
the service configuration comprises configuration of a service application and configuration of a service platform, wherein the configuration of the service application comprises at least one of the following items: configuring an application label, configuring an application back-end access information list, configuring an application routing strategy, configuring an application drainage strategy and configuring an application attribution grouping;
the configuration of the service platform comprises at least one of the following: login authentication configuration, role authority control configuration, application batch switching configuration, application grouping setting and attribution grouping configuration of the route execution unit.
In a possible implementation manner, the acquiring, by the route execution unit, a specific load policy configuration includes:
determining an application group corresponding to the routing group where the routing execution unit is located according to the corresponding relation between the routing group and the application group, and acquiring a routing configuration strategy of a service application in the application group;
each application group comprises one or more service applications, each routing group comprises one or more routing execution units, each application group respectively comprises different service applications, and each routing group respectively comprises different routing execution units.
In a possible implementation manner, the route execution unit is configured to obtain a specific load policy configuration and complete a specific load balancing action, and includes:
by dynamically configuring a specific routing strategy, the specific routing strategy is inquired each time when the routing strategy is executed, and a routing address to be accessed is determined according to an inquiry result.
In a possible implementation manner, dynamically configuring a specific routing policy includes:
the routing execution unit periodically acquires the configuration information of the routing strategy at a first preset time interval, and determines from the configuration information whether a new domain name needs to be monitored or the configuration information of an original domain name has been modified, wherein if a new domain name needs to be monitored, monitoring of the new domain name is completed, and if the configuration information of an original domain name is determined to have been modified, the original configuration information is updated to the latest configuration information;
and the routing execution unit periodically monitors and checks the back-end address information for the acquired configuration information at a second preset time interval, marks abnormal addresses as abnormal, and marks address information that has recovered as normal.
In one possible implementation manner, determining a routing address to be accessed according to a query result includes:
the route execution unit monitors corresponding address information or domain name information according to the query result;
and the route execution unit acquires the request context information and matches an accessible routing address according to the queried routing strategy; if no corresponding routing address is matched, it queries a default load balancing strategy and calculates the routing address to be used, so as to redirect to that routing address.
In a possible implementation manner, the soft load background, the soft load foreground and the route execution unit all support containerized deployment and can be deployed by means of a Deployment.
In one possible implementation, the system has an interface to other external systems, and the system performs automatic synchronization of the target information with the other external systems through the interface.
The technical scheme provided by the embodiment of the application has the following beneficial effects: it can realize unified load balancing strategy management and control in a scenario where an application is deployed across multiple clusters, thereby solving the problem of cross-cluster load balancing configuration; it supports acquiring the information of an application deployed across clusters in a cloud native manner and automatically generating configuration items, supports logical grouping of business services, and supports replacing the original hardware load balancing scheduling with a soft load approach. In addition, the system of the application simplifies the filling of load balancing configuration items by operation and maintenance personnel through the provided Ingress Reporter Controller, can automatically synchronize the Ingress information of the lower-level service clusters, and can realize grouped bearer capability for cross-cluster load balancing through its grouping capability.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments of the present application will be briefly described below.
Fig. 1 is a schematic structural diagram of a cross-cluster load balancing system according to an embodiment of the present application;
fig. 2 is a schematic diagram of an application cross-cluster load balancing system in a cloud native environment according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a custom Ingress Reporter Controller provided in an embodiment of the present application;
fig. 4 is a schematic diagram of a cross-cluster load balancing system interfacing with an external system according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below in conjunction with the drawings in the present application. It should be understood that the embodiments set forth below in connection with the drawings are exemplary descriptions for explaining technical solutions of the embodiments of the present application, and do not limit the technical solutions of the embodiments of the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the terms "comprises" and/or "comprising," when used in this specification in connection with embodiments of the present application, specify the presence of stated features, information, data, steps, operations, elements, and/or components, but do not preclude the presence or addition of other features, information, data, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein indicates at least one of the items defined by the term, e.g., "A and/or B" indicates an implementation as "A", an implementation as "B", or an implementation as "A and B".
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The terms referred to in this application will be introduced and explained as follows:
kubernets: abbreviation K8s is an abbreviation resulting from 8 replacing the 8 characters "ubernet" in the middle of the name. The Kubernetes aims to make the application of container deployment simple and efficient (powerfull), provides a mechanism for application deployment, planning, updating and maintenance, and supports automatic deployment, large-scale scalable and application container management. When an application is deployed in a production environment, multiple instances of the application are typically deployed to load balance application requests. In kubernets, a plurality of containers can be created, each container runs an application instance, and then management, discovery and access of the group of application instances are realized through a built-in load balancing strategy, and the details do not need operation and maintenance personnel to perform complicated manual configuration and processing.
Pod: an enhanced container. The Pod is the most basic deployment and scheduling unit of a Kubernetes cluster, that is, the minimum unit in which a micro-service application runs. Each Pod can include a plurality of containers and can be regarded as an extended or enhanced container. A Pod may include a main container and a plurality of auxiliary containers, which together perform a specific function. When multiple processes (a container is also an isolated process) are packed into one namespace, a Pod is formed. The application packaging of the different processes within the Pod remains separate (each container has its own image).
Ingress: a resource type of K8S. Ingress is used to access applications inside K8S by means of a domain name; in effect it is a portal for accessing the cluster from outside the Kubernetes cluster. That is, Ingress provides an entry point for services in the Kubernetes cluster and can provide load balancing, SSL termination and name-based virtual hosting. Ingress implementations commonly used in production environments include Traefik, Nginx, HAProxy, Istio and so on. Typically, Ingress is deployed on all Nodes.
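For illustration only (the names demo-ingress, demo.example.com and demo-svc are assumptions, not part of this application), a networking/v1 Ingress object of the kind described here, routing a domain name to a back-end Service, can be constructed in Go as follows:

```go
package main

import (
	netv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// demoIngress builds an Ingress that exposes the Service "demo-svc" under the host
// "demo.example.com"; this is the kind of object the Ingress Reporter Controller later reports.
func demoIngress() *netv1.Ingress {
	pathType := netv1.PathTypePrefix
	return &netv1.Ingress{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-ingress", Namespace: "default"},
		Spec: netv1.IngressSpec{
			Rules: []netv1.IngressRule{{
				Host: "demo.example.com", // domain name used to access the in-cluster application
				IngressRuleValue: netv1.IngressRuleValue{
					HTTP: &netv1.HTTPIngressRuleValue{
						Paths: []netv1.HTTPIngressPath{{
							Path:     "/",
							PathType: &pathType,
							Backend: netv1.IngressBackend{
								Service: &netv1.IngressServiceBackend{
									Name: "demo-svc",
									Port: netv1.ServiceBackendPort{Number: 80},
								},
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = demoIngress() }
```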
Ingress Controller: the Ingress controller can be understood as a listener that constantly interacts with the kube-apiserver to perceive changes of back-end services and pods in real time; after obtaining the change information, it updates the reverse proxy load balancer in combination with the Ingress configuration, thereby achieving service discovery. The Ingress Controller is not a built-in component of K8S; in fact, "Ingress Controller" is just a general term, and users can choose different Ingress Controller implementations. Currently, the Ingress Controllers maintained by the K8S project are only GCE of Google Cloud and ingress-nginx. Generally, an Ingress Controller runs in the form of a Pod, inside which run a daemon program and a reverse proxy (typically an Nginx load balancer). The daemon is responsible for continuously monitoring changes in the cluster, generating configuration according to the Ingress objects and applying the new configuration to the reverse proxy; for example, nginx-ingress dynamically generates the Nginx configuration, dynamically updates the upstream, and reloads to apply the new configuration when needed.
Currently, the native load balancing capability of K8S (Ingress) is directed at load balancing of Pods within a single cluster. Although an external hardware load balancer can handle load balancing configuration across clusters, it cannot automatically discover services within a cluster and requires manual configuration. When applications (also referred to as business applications) are deployed across K8S clusters, it is required, on the one hand, that traffic can be scheduled across clusters through load balancing to access Pods in different clusters and, on the other hand, that application services (also referred to as business application services or business services) in different clusters can be discovered automatically. Meanwhile, the deployed load balancer is required to support elasticity and logical grouping of business services.
Against this background, the application provides a cross-cluster load balancing system to solve the problem of cross-cluster load balancing configuration. The system not only supports acquiring the information of an application deployed across clusters in a cloud native manner and automatically generating configuration items, but also supports logical grouping of business services and supports replacing the original hardware load balancing scheduling with a soft load approach.
The technical solutions of the embodiments of the present application and the technical effects they produce are described below through several exemplary embodiments. It should be noted that the following embodiments may refer to, draw on, or be combined with each other, and descriptions of the same terms, similar features, similar implementation steps and the like are not repeated in different embodiments.
Fig. 1 is a schematic structural diagram of a cross-cluster load balancing system provided in an embodiment of the present application. As shown in fig. 1, the system includes: an Ingress Reporter Controller 101, a soft load background 102, a soft load foreground 103 and a route execution unit 104; wherein:
the Ingress Reporter Controller is used for acquiring Ingress configuration information on the service cluster together with the custom resource definition (CRD) and reporting the Ingress configuration information to the soft load background according to the configuration of the custom resource CustomResource;
the soft load background is used for synchronizing Ingress configuration information of each service cluster, merging and managing the Ingress configuration information with the same domain name, providing a default routing strategy, supporting the soft load foreground to modify a specific service routing strategy and supporting the routing execution unit to acquire a corresponding routing configuration strategy;
the soft load foreground is used for providing a configuration page for the soft load operation and maintenance personnel so as to adjust a specific soft load strategy;
the route execution unit is used for acquiring specific load strategy configuration and completing specific load balancing actions.
Specifically, the Ingress Reporter Controller is deployed on the service K8S cluster, while the soft load background and the soft load foreground are each deployed independently. The soft load foreground may also be called a configuration foreground; it supports setting an overall load balancing strategy as well as setting an independent load strategy for a specific case to complete a specific drainage (traffic-steering) operation. The route execution unit is also deployed independently, supports grouped deployment, and acquires different load strategy sets according to its group.
A CRD (Custom Resource Definition) is the resource extension mechanism recommended by Kubernetes. Through the CRD technique, the system provided by the application can register the custom resource (Ingress Reporter) with the Kubernetes system, and can create, view, modify and delete the custom resource in the same way as native resources (such as Pod and StatefulSet). In addition, through the CRD mechanism, acquisition and validation of the Ingress Reporter configuration information (i.e. the reporting target address, such as the message queue address of the soft load background and the corresponding topic) can be completed.
The application provides a cross-cluster load balancing system for cloud native scenarios, which can realize unified load balancing strategy management and control in a multi-cluster deployment scenario, thereby solving cross-cluster load balancing configuration: the information of an application deployed across clusters can be obtained in a cloud native manner, configuration items can be generated automatically, logical grouping of business services is supported, and the original hardware load balancing scheduling can be replaced with a soft load approach. In addition, the system of the application simplifies the filling of load balancing configuration items by operation and maintenance personnel through the provided Ingress Reporter Controller, can automatically synchronize the Ingress information of the lower-level service clusters, and can realize grouped bearer capability for cross-cluster load balancing through its grouping capability.
The following describes the technical solution of the embodiment of the present application in detail:
the cross-cluster load balancing system in the embodiment of the application is an application cross-cluster load balancing system in a cloud native environment, and mainly comprises four parts, as shown in fig. 2.
Ingress Reporter Controller: deployed on the service K8S cluster, it is responsible, together with the CRD (Custom Resource Definition), for acquiring the Ingress configuration information on the service cluster (such as the service K8S cluster in fig. 2); for example, the Ingress configuration information on the service cluster can be acquired through the API Server, and according to the configuration of the custom resource CustomResource, the acquired Ingress configuration information is reported to the corresponding soft load background (reporting Ingress information in fig. 2, where step 3 in fig. 2 indicates that the Ingress Reporter Controller reports the Ingress configuration information to the soft load background).
Soft load background: independently deployed, it is used for synchronizing the Ingress configuration information of each service cluster, merging and managing the Ingress information of the same domain name (i.e., summarizing the Ingress information in fig. 2), and providing a default routing strategy, namely providing the default routing strategy to the routing execution unit, which dynamically acquires the soft routing information as in fig. 2 (step 6 in fig. 2 indicates that the routing execution unit dynamically acquires the soft routing information from the soft load background). Meanwhile, the soft load background also supports the configuration foreground to modify the specific service routing strategy and the routing execution unit to acquire the corresponding routing configuration strategy.
Soft load foreground: independently deployed, it is used for providing a configuration page for soft load operation and maintenance personnel to adjust a specific soft load strategy; for example, step 5 in fig. 2 (adjusting the soft routing strategy) indicates that the soft load operation and maintenance personnel may adjust the soft routing strategy through the soft load foreground. The soft load foreground also supports setting an overall load balancing strategy and setting an independent load strategy for special cases so as to complete specific drainage operations.
The route execution unit: the route execution unit may dynamically acquire the soft routing information (or routing strategy) from the soft load background; for example, as shown in fig. 2, the soft routing information is dynamically acquired so as to obtain the specific load strategy configuration. In other words, the route execution unit obtains the specific load strategy configuration according to the acquired routing strategy, and then performs load balancing and completes the specific load balancing operation.
The route execution unit further supports grouped deployment and acquires different load strategy sets by group, such as group 1 and group 2 in fig. 2, where group 1 includes two route execution units (route execution unit A and route execution unit B), group 2 includes another two route execution units (route execution unit C and route execution unit D), and the route execution units in group 1 and group 2 acquire different load strategy sets by group. After a route execution unit obtains its load strategy, it may perform policy routing according to the obtained load strategy, that is, routing according to the strategy (the policy routing step in fig. 2).
The principle of the specific operation is described below:
1. the service personnel create a CRD and a CRD Controller (for example, "create a CRD and a Controller" in fig. 2). A CRD (Custom Resource Definition) is the resource extension mechanism recommended by Kubernetes. Through the CRD technique, the system can register the custom resource (such as Ingress Reporter) with the Kubernetes system, i.e., the Ingress Reporter is registered to the service cluster (such as Kubernetes) through the CRD, and the custom resource can be created, viewed, modified and deleted in the same way as native resources (such as Pod and StatefulSet). Through the CRD mechanism, acquisition and validation of the Ingress Reporter configuration information (namely the reporting target address, such as the message queue address of the soft load background and the corresponding topic) can be completed.
The CRD is a secondary development capability added after Kubernetes 1.7 to extend the Kubernetes API. Through a CRD, a new resource type can be added to the Kubernetes API without modifying the Kubernetes source code or creating a self-defined API server, which greatly improves the extension capability of Kubernetes. A CRD is a way to extend the native Kubernetes API without coding, and is suitable for extending Kubernetes with custom interfaces and functions.
When a new Custom Resource Definition (CRD) is created, the Kubernetes API Server responds by creating a new RESTful resource path, either namespaced or cluster-scoped as specified in the scope field of the CRD. As with existing built-in objects, deleting a namespace deletes all custom objects within that namespace. The CRD itself is not namespaced and is available to all namespaces.
In practical applications, custom resources may be added to a cluster in two ways: (1) Custom Resource Definitions (CRDs): easier to use and requiring no coding; (2) API Aggregation: requires coding, but allows more customized implementations by way of the aggregation layer.
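As a minimal sketch of step 1 (the group softload.example.com, the kind IngressReporter and the spec fields mqAddress/topic are assumptions for illustration, not the application's actual schema), the Ingress Reporter CRD could be declared with the apiextensions v1 types and then created through the apiextensions clientset:

```go
package main

import (
	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ingressReporterCRD describes a namespaced custom resource whose spec carries the
// reporting target (message queue address of the soft load background and its topic).
func ingressReporterCRD() *apiextv1.CustomResourceDefinition {
	return &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "ingressreporters.softload.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "softload.example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural:   "ingressreporters",
				Singular: "ingressreporter",
				Kind:     "IngressReporter",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
						Type: "object",
						Properties: map[string]apiextv1.JSONSchemaProps{
							"spec": {
								Type: "object",
								Properties: map[string]apiextv1.JSONSchemaProps{
									"mqAddress": {Type: "string"}, // message queue address of the soft load background
									"topic":     {Type: "string"}, // topic to which Ingress information is reported
								},
							},
						},
					},
				},
			}},
		},
	}
}

func main() {
	_ = ingressReporterCRD() // would be created via the apiextensions clientset in a real controller
}
```

An IngressReporter instance created on a service cluster would then carry these same two fields, which is how the controller learns where to report.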
2. Implementation of the custom Controller (e.g., the Ingress Reporter Controller). As shown in fig. 3, the custom Controller queries and listens for the Service and Ingress objects on the service cluster (e.g., K8S) through the API provided by the API Server of Kubernetes, where the implementation of the listening function can be simplified through k8s.io/client-go/informers. Equivalently, the Ingress Reporter Controller queries and listens for target objects on the service cluster through the API provided by the API-Server of Kubernetes, where the target objects include, but are not limited to, Service, Ingress, and the like.
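A minimal sketch of this listen-and-report behaviour with k8s.io/client-go shared informers; the kubeconfig path and the reportToBackground stub are placeholders for the real deployment environment and the real push to the soft load background:

```go
package main

import (
	"fmt"
	"time"

	netv1 "k8s.io/api/networking/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

// reportToBackground stands in for publishing the observed Ingress configuration to the
// soft load background (e.g. to its message queue topic); here it only prints.
func reportToBackground(action string, ing *netv1.Ingress) {
	hosts := []string{}
	for _, rule := range ing.Spec.Rules {
		hosts = append(hosts, rule.Host)
	}
	fmt.Printf("%s ingress %s/%s hosts=%v\n", action, ing.Namespace, ing.Name, hosts)
}

func main() {
	// Assumed kubeconfig path; in-cluster config would be used when running as a Pod.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Shared informer factory watching networking/v1 Ingress objects, resyncing every 30s.
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	ingInformer := factory.Networking().V1().Ingresses().Informer()
	ingInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { reportToBackground("add", obj.(*netv1.Ingress)) },
		UpdateFunc: func(_, newObj interface{}) { reportToBackground("update", newObj.(*netv1.Ingress)) },
		DeleteFunc: func(obj interface{}) {
			if ing, ok := obj.(*netv1.Ingress); ok {
				reportToBackground("delete", ing)
			}
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // keep watching until the process is stopped
}
```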
3. With steps 1 and 2 in place, after a service person deploys the Ingress Reporter Controller to the service cluster (for example, fig. 2 shows the service person configuring Ingress) and the configuration information of the Ingress Reporter is sent to the service cluster through the command line, the Ingress Reporter Controller starts to monitor the addition and change information of the Ingress configuration in the service cluster; after the service system completes the Ingress configuration, the Ingress Reporter Controller reports the configuration information (including the addition and change information of the Ingress configuration) to the soft load background.
In other words, as can be seen from steps 1 and 2 above, the Ingress Reporter Controller being configured to be responsible for acquiring Ingress configuration information on the service cluster together with the custom resource definition CRD includes: registering an Ingress Reporter to the service cluster through the CRD; then, after the Ingress Reporter Controller is deployed to the service cluster and the configuration information of the Ingress Reporter is sent to the service cluster, the Ingress Reporter Controller starts to monitor the Ingress configuration change information of the corresponding service in the service cluster.
4. After receiving the Ingress configuration information reported by the Ingress Reporter Controllers deployed on each service cluster, the soft load background collects the different routing addresses of the same service. There are two ways to merge the same service: one is by domain name, port and path; another is to associate or merge based on the service identifier (e.g., application code), which requires interfacing with another system (e.g., a CMDB) to obtain the association between the application code and the corresponding Ingress. Equivalently, the soft load background performing merge management on Ingress configuration information of the same domain name includes: after receiving the Ingress configuration information reported by the Ingress Reporter Controller deployed on each service cluster, the soft load background merges and manages the Ingress configuration information according to the domain name, the port and the path, or merges and manages it according to the identification information (such as the application code) of the service; for example, if two pieces of reported Ingress configuration information both contain the same service identifier (such as the same application code), the two pieces of Ingress configuration information are merged and managed together.
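A minimal sketch (not the application's actual implementation) of the two merge keys just described, where ReportedIngress is an assumed simplified shape of one reported entry:

```go
package main

import "fmt"

// ReportedIngress is an assumed, simplified shape of one reported Ingress entry.
type ReportedIngress struct {
	Cluster string
	Domain  string
	Port    int
	Path    string
	AppCode string // business identifier, e.g. synchronized from a CMDB
	Backend string // routable back-end address inside that cluster
}

// mergeByRoute groups entries that expose the same domain + port + path across clusters.
func mergeByRoute(entries []ReportedIngress) map[string][]ReportedIngress {
	merged := map[string][]ReportedIngress{}
	for _, e := range entries {
		key := fmt.Sprintf("%s:%d%s", e.Domain, e.Port, e.Path)
		merged[key] = append(merged[key], e)
	}
	return merged
}

// mergeByAppCode groups entries that carry the same application code.
func mergeByAppCode(entries []ReportedIngress) map[string][]ReportedIngress {
	merged := map[string][]ReportedIngress{}
	for _, e := range entries {
		merged[e.AppCode] = append(merged[e.AppCode], e)
	}
	return merged
}

func main() {
	entries := []ReportedIngress{
		{Cluster: "k8s-a", Domain: "shop.example.com", Port: 80, Path: "/", AppCode: "APP001", Backend: "10.0.1.10:30080"},
		{Cluster: "k8s-b", Domain: "shop.example.com", Port: 80, Path: "/", AppCode: "APP001", Backend: "10.0.2.10:30080"},
	}
	fmt.Println(mergeByRoute(entries))
	fmt.Println(mergeByAppCode(entries))
}
```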
5. The soft load background supports the soft load foreground in adding, deleting, modifying and viewing service configuration. The specific service configuration includes the configuration of a service application and the configuration of the service platform, where the configuration of a service application includes, but is not limited to, an application label, an application back-end access information list (for example, in the form of Ingress address + port + path), the routing strategy of the application (for example, the weights of back-end accesses, the load balancing strategy, and the like), an independent access routing strategy (for example, specific routing based on a request header, a parameter, a cookie, and the like), application home grouping information, and so on; the configuration of the service platform includes, but is not limited to, login authentication, role authority control, application batch switching, application grouping setting, home grouping of the route execution units, and the like. In addition, the soft load background may also provide APIs for external systems (e.g., a CMDB) to call.
That is, the soft load background is further configured to support the soft load foreground to perform at least one of adding, deleting, modifying, or viewing of the service configuration; the service configuration comprises configuration of service application and configuration of a service platform, wherein the configuration of the service application comprises at least one of the following items: configuring an application label, configuring an application back-end access information list, configuring an application routing strategy, configuring an application drainage strategy and configuring an application attribution grouping; the configuration of the service platform comprises at least one of the following: login authentication configuration, role authority control configuration, application batch switching configuration, application grouping setting and attribution grouping configuration of the route execution unit.
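As a rough illustration of the service application configuration items listed above (the Go type and field names are assumptions, not the application's schema):

```go
package main

// BackendEndpoint is one entry of the application back-end access information list.
type BackendEndpoint struct {
	IngressAddress string // e.g. "10.0.1.10"
	Port           int
	Path           string
	Healthy        bool
}

// DrainageRule pins matching traffic to a specific back end (the drainage strategy).
type DrainageRule struct {
	MatchHeader map[string]string // route to Target when these request headers match
	MatchCookie string
	Target      string // back-end address to steer matching traffic to
}

// ServiceAppConfig gathers the configuration the soft load background holds for one service application.
type ServiceAppConfig struct {
	AppCode     string
	Labels      map[string]string
	Backends    []BackendEndpoint
	RoutePolicy string         // e.g. "round-robin" or "weighted"
	Weights     map[string]int // back-end address -> weight
	Drainage    []DrainageRule
	AppGroup    string // home grouping of the application
}

func main() { _ = ServiceAppConfig{} }
```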
6. The soft load foreground provides a Web access entry and can support user login authentication, role authority control, application label modification, application grouping setting, application batch switching, home grouping of the route execution units, general application routing strategy configuration, independent application drainage strategy configuration, and the like. As in step 5 of fig. 2 (adjusting the soft routing strategy), the soft load operation and maintenance personnel may adjust the soft routing strategy through the soft load foreground.
The application grouping refers to grouping and managing a plurality of service applications (for example, 2, 3 or more service applications). If there are 3 service applications simultaneously, namely service application a, service application B and service application C, then: the service application a and the service application B may be divided into one group (for example, referred to as an application group 1) and the service application C may be divided into another group (for example, referred to as an application group 2) according to a requirement, or the service application a and the service application C may be divided into one group (for example, referred to as an application group 1) and the service application B may be divided into another group (for example, referred to as an application group 2) according to a requirement, and of course, the service application a, the service application B, and the service application C may be grouped in other possible situations according to a requirement, which is not limited in the embodiment of the present application.
The home grouping of the route execution unit refers to grouping management or grouping deployment of the route execution unit. If there are 3 route execution units, namely, the route execution unit a, the route execution unit B, and the route execution unit C, then: the route execution unit a and the route execution unit C may be divided into one group according to the requirement (for example, referred to as a route packet 1), and the route execution unit B may be divided into another group (for example, referred to as a route packet 2), or the route execution unit B and the route execution unit C may be divided into one group according to the requirement (for example, referred to as a route packet 1), and the route execution unit a may be divided into another group (for example, referred to as a route packet 2), and of course, the route execution unit a, the route execution unit B, and the route execution unit C may be grouped in other possible situations according to the requirement, which is not limited in the embodiment of the present application.
7. The function specifically completed by the route execution unit is executing the specific routing strategy after a user access arrives, and it can be implemented through dynamic configuration: each time the routing strategy is executed, the specific routing strategy is queried, and the routing strategy to be executed is determined through the matching condition (that is, the query condition or query result of the routing strategy). Specifically, this can be divided into two processes: a dynamic configuration validation process, and a process of determining the routing information for a user access. The dynamic configuration process may be understood as dynamically configuring a specific routing strategy, and the determination of the routing information for a user access may be understood as determining the routing address to be accessed according to the query result of the routing strategy. In other words, the functions specifically performed by the route execution unit include: through dynamically configuring a specific routing strategy, the specific routing strategy is queried each time the routing strategy is executed, and the routing address to be accessed is determined according to the query result.
Considering that not only can the service applications (which may also be referred to as applications or services) be managed in groups, for example dividing the plurality of service applications mentioned above (service application A, service application B and service application C) into different application groups (which may also be referred to as service groups), but also the route execution units can be managed in groups, for example dividing the plurality of route execution units mentioned above (route execution unit A, route execution unit B and route execution unit C) into different routing groups, as with the division of the 4 route execution units into different routing groups in fig. 2 (i.e., group 1 and group 2), then each time the route execution unit obtains or queries the specific routing configuration strategy, it may perform the following processing:
determining an application group corresponding to a routing group where a routing execution unit is located according to a corresponding relation (or mapping relation) between the routing group and the application group, and acquiring or querying a routing configuration policy of a service application in the application group, wherein each application group comprises one or more service applications, each routing group comprises one or more routing execution units, each application group respectively comprises different service applications, and each routing group respectively comprises different routing execution units.
By applying routing grouping to the route execution units and service grouping to the service applications, a route execution unit can acquire routing configuration strategies according to the service group: it queries the corresponding service group according to the correspondence between the routing group and the service group, and only acquires the routing strategy configuration of the service applications in that service group. As a result, route execution units in different routing groups acquire different routing configuration strategies, thereby achieving isolation.
In one example, if there are routing group 1 (containing route execution unit A and route execution unit B) and routing group 2 (containing route execution unit C and route execution unit D), and at the same time there are application group 1 (containing service application A) and application group 2 (containing service application B and service application C), with a one-to-one correspondence between routing group 1 and application group 1 and between routing group 2 and application group 2, then:
each time route execution unit A acquires or queries the routing configuration strategy, it first associates to application group 1 according to the correspondence between routing group 1, where it is located, and the application group, and then acquires or queries the routing configuration strategy of service application A in application group 1. Similarly, when route execution unit C acquires or queries the routing configuration strategy, it first associates to application group 2 according to the correspondence between routing group 2, where it is located, and the application group, and then acquires or queries the routing configuration strategies of service application B and service application C in application group 2.
In one possible implementation, the dynamic validation process of the configuration may include the following steps:
step 1, the route execution unit acquires the configuration information of the routing strategy; this process also determines, according to the correspondence (or mapping relationship) between the routing group and the application group, the application group corresponding to the routing group where the route execution unit is located, and acquires or queries the routing configuration strategy of the service applications in that application group, the specific details of which have been described above and are not repeated here;
Step 2, judging whether the new domain name or the configuration information in the original domain name needs to be monitored and modified according to the configuration information;
step 3, if a new domain name needs to be monitored, the monitoring of the new domain name is completed;
step 4, determining that the configuration information of the original domain name is modified, and updating the original configuration information to the latest state (such as the latest configuration information);
step 5, periodically executing steps 1 to 4;
step 6, monitoring and checking the back-end addresses for the acquired configuration information;
step 7, marking abnormal addresses as abnormal and marking address information that has recovered as normal;
step 8, periodically executing steps 6 to 7.
It should be noted that the dynamic configuration validation process introduced in steps 1 to 8 may also be expressed as the process of dynamically configuring a specific routing strategy, which may be as follows: the routing execution unit periodically acquires the configuration information of the routing strategy at a first preset time interval, and determines from the configuration information whether a new domain name needs to be monitored or the configuration information of an original domain name has been modified, where monitoring of a new domain name is completed if it is determined that the new domain name needs to be monitored, and the original configuration information is updated to the latest configuration information if it is determined that the configuration information of an original domain name has been modified; and the routing execution unit periodically monitors and checks the back-end address information for the acquired configuration information at a second preset time interval, marks abnormal addresses as abnormal, and marks address information that has recovered as normal.
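The two periodic loops of this dynamic configuration validation process can be sketched as follows (the intervals, fetchConfig and healthCheck are placeholders for the actual soft load background API and health probes, not the application's code):

```go
package main

import (
	"sync"
	"time"
)

type backend struct {
	addr    string
	healthy bool
}

type domainConfig struct {
	backends []backend
}

var (
	mu      sync.Mutex
	configs = map[string]*domainConfig{} // domain name -> current configuration
)

// fetchConfig stands in for pulling the routing strategies of the unit's application group
// from the soft load background.
func fetchConfig() map[string]*domainConfig {
	return map[string]*domainConfig{
		"shop.example.com": {backends: []backend{{addr: "10.0.1.10:30080"}, {addr: "10.0.2.10:30080"}}},
	}
}

// healthCheck stands in for a TCP/HTTP probe of one back-end address.
func healthCheck(addr string) bool { return true }

// refreshLoop implements steps 1 to 5: new domains start being watched, changed domains are updated.
func refreshLoop(interval time.Duration) {
	for range time.Tick(interval) {
		latest := fetchConfig()
		mu.Lock()
		for domain, cfg := range latest {
			configs[domain] = cfg
		}
		mu.Unlock()
	}
}

// probeLoop implements steps 6 to 8: probe each back end and flip its health mark.
func probeLoop(interval time.Duration) {
	for range time.Tick(interval) {
		mu.Lock()
		for _, cfg := range configs {
			for i := range cfg.backends {
				cfg.backends[i].healthy = healthCheck(cfg.backends[i].addr)
			}
		}
		mu.Unlock()
	}
}

func main() {
	go refreshLoop(30 * time.Second) // first preset time interval
	go probeLoop(5 * time.Second)    // second preset time interval
	select {}
}
```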
The process for determining the routing information accessed by the user can comprise the following steps:
step 1, a user request arrives at a specific route execution unit;
step 2, the route execution unit monitors the corresponding address information or domain name information;
step 3, the route execution unit acquires the context information of the user request, including domain name + port + address, request header, cookie, parameters in the URL, submitted data, and the like;
step 4, according to the configured load strategy (specific-scenario drainage), matching the back-end address (namely the routing address) accessible to the user;
step 5, if no corresponding routing address is matched, querying the default load balancing strategy and calculating the back-end address (namely the routing address) to be used;
step 6, redirecting the user to the calculated back-end address to be used.
It should be noted that the process for determining the routing information for a user access, introduced in steps 1 to 6 above, may also be expressed as the process of determining the routing address to be accessed according to the query result of the routing strategy, which may be as follows: the route execution unit determines the corresponding address information or domain name information according to the query result; then, the route execution unit acquires the request context information and matches an accessible routing address according to the queried routing strategy; if no corresponding routing address is matched, it queries the default load balancing strategy and calculates the routing address to be used, so as to redirect to that routing address.
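The per-request decision, first matching the specific drainage strategy and then falling back to the default load balancing strategy, can be sketched as follows (matchSpecificPolicy and the cookie-hash default strategy are illustrative assumptions, not the application's rules):

```go
package main

import (
	"fmt"
	"hash/fnv"
	"net/http"
)

// requestContext holds the request information gathered in step 3 above.
type requestContext struct {
	host, path string
	header     http.Header
	cookie     string
}

// matchSpecificPolicy returns a back-end address when the request matches a configured
// drainage rule (here, a particular header value); an empty string means no match.
func matchSpecificPolicy(ctx requestContext) string {
	if ctx.header.Get("X-Canary") == "true" {
		return "10.0.2.10:30080" // illustrative pinned back end
	}
	return ""
}

// defaultPolicy picks a back end from the healthy list, here by a simple hash of the cookie.
func defaultPolicy(ctx requestContext, backends []string) string {
	h := fnv.New32a()
	h.Write([]byte(ctx.cookie))
	return backends[int(h.Sum32())%len(backends)]
}

// route returns the back-end address the user should be redirected to.
func route(ctx requestContext, backends []string) string {
	if addr := matchSpecificPolicy(ctx); addr != "" {
		return addr
	}
	return defaultPolicy(ctx, backends)
}

func main() {
	ctx := requestContext{host: "shop.example.com", path: "/", header: http.Header{}, cookie: "user-42"}
	fmt.Println(route(ctx, []string{"10.0.1.10:30080", "10.0.2.10:30080"}))
}
```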
When an external system holds the management information of the business systems and the application grouping information for applications deployed across different clusters, the cross-cluster load balancing system of the present application supports interfacing with that external system, such as a CMDB (Configuration Management Database), as shown in fig. 4. The difference between the cross-cluster load balancing system interfaced with the external system (i.e., the system shown in fig. 4) and the original cross-cluster load balancing system (i.e., the system shown in fig. 2) is that part of the information (such as application labels and application grouping information) can be synchronized automatically, without requiring the soft load operation and maintenance personnel to maintain it manually through the pages. That is, the cross-cluster load balancing system of the present application has an interface for interfacing with other external systems, and performs automatic synchronization of target information with the other external systems through the interface, where the target information includes, but is not limited to, application labels, application grouping information, and the like.
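A minimal sketch of what such a synchronization interface toward an external system (e.g. a CMDB) could look like; the interface and field names are assumptions rather than the application's actual API:

```go
package main

// AppGroupInfo is an assumed shape for one record synchronized from the external system.
type AppGroupInfo struct {
	AppCode  string // business identifier of the application
	AppLabel string // application label shown in the soft load foreground
	Group    string // application group the service belongs to
}

// ExternalSource is what the soft load background could poll so that application labels
// and grouping information are kept in sync automatically instead of being maintained
// manually through the foreground pages.
type ExternalSource interface {
	ListAppGroups() ([]AppGroupInfo, error)
}

func main() {}
```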
It should be noted that the soft load foreground, the soft load background, the route execution unit, and the like in the cross-cluster load balancing system of the present application support containerized deployment and can be deployed by means of a Deployment; namely, the soft load background, the soft load foreground and the route execution unit all support containerized deployment and can be deployed in the Deployment mode.
Therefore, the embodiment of the application provides a solution for cross-cluster load balancing in cloud native scenarios, which can realize unified load balancing strategy control in a scenario where an application is deployed across multiple clusters; the provided Ingress Reporter Controller simplifies the filling of load balancing configuration items by operation and maintenance personnel and can automatically synchronize the Ingress information of the lower-level service clusters; through the grouping capability, the grouped bearer capability of cross-cluster load balancing can be realized; external systems can be interfaced, further simplifying the filling of load balancing configuration items; moreover, the whole cross-cluster load balancing system supports cloud native deployment.
The foregoing is only an optional implementation manner of a part of implementation scenarios in this application, and it should be noted that, for those skilled in the art, other similar implementation means based on the technical idea of this application are also within the protection scope of the embodiments of this application without departing from the technical idea of this application.

Claims (11)

1. A cross-cluster load balancing system, characterized by comprising an Ingress Reporter Controller, a soft load background, a soft load foreground and a route execution unit; wherein:
the Ingress Reporter Controller is used for acquiring Ingress configuration information on the service cluster together with the custom resource definition (CRD) and reporting the Ingress configuration information to the soft load background according to the configuration of the custom resource CustomResource;
the soft load background is used for synchronizing Ingress configuration information of each service cluster, merging and managing the Ingress configuration information with the same domain name, providing a default routing strategy, supporting the soft load foreground to modify a specific service routing strategy and supporting the routing execution unit to acquire a corresponding routing configuration strategy;
the soft load foreground is used for providing a configuration page for soft load operation and maintenance personnel so as to adjust a specific soft load strategy;
the route execution unit is used for acquiring specific load strategy configuration and completing specific load balancing actions.
2. The system of claim 1, wherein the Ingress Reporter Controller queries and listens for target objects on the service cluster through an API provided by an API-Server of Kubernetes.
3. The system according to claim 1 or 2, wherein the Ingress Reporter Controller is configured to be responsible for acquiring Ingress configuration information on the service cluster together with a custom resource definition CRD, and includes:
registering an Ingress Reporter to the service cluster through the CRD;
after the Ingress Reporter Controller is deployed to the service cluster and the configuration information of the Ingress Reporter is sent to the service cluster, the Ingress Reporter Controller starts to monitor the new information and the change information of the Ingress configuration of the corresponding service in the service cluster.
4. The system according to any one of claims 1 to 3, wherein the soft load background performs merge management on Ingress configuration information of the same domain name, including:
after receiving the Ingress configuration information reported by the Ingress Reporter Controller deployed on each service cluster, the soft load background merges and manages the Ingress configuration information according to a domain name, a port and a path, or merges and manages the Ingress configuration information according to the identification information of the service.
5. The system according to any of claims 1-3, wherein the soft load background is further configured to support the soft load foreground for at least one of adding, deleting, modifying, or viewing of a service configuration;
the service configuration comprises configuration of a service application and configuration of a service platform, wherein the configuration of the service application comprises at least one of the following items: configuring an application label, configuring an application back-end access information list, configuring an application routing strategy, configuring an application drainage strategy and configuring an application attribution grouping;
the configuration of the service platform comprises at least one of the following: login authentication configuration, role authority control configuration, application batch switching configuration, application grouping setting and attribution grouping configuration of the route execution unit.
6. The system according to claim 1 or 5, wherein the route executing unit obtains a specific load policy configuration, including:
determining an application group corresponding to the routing group where the routing execution unit is located according to the corresponding relation between the routing group and the application group, and acquiring a routing configuration strategy of a service application in the application group;
each application group comprises one or more service applications, each routing group comprises one or more routing execution units, each application group respectively comprises different service applications, and each routing group respectively comprises different routing execution units.
7. The system according to claim 1 or 6, wherein the route performing unit is configured to obtain a specific load policy configuration and complete a specific load balancing action, and includes:
by dynamically configuring a specific routing strategy, the specific routing strategy is queried each time the routing strategy is executed, and the routing address to be accessed is determined according to the query result.
8. The system according to claim 7, wherein said dynamically configuring a specific routing policy comprises:
the routing execution unit periodically acquires configuration information of a routing strategy at a first preset time interval, and determines whether a new domain name needs to be monitored or the configuration information of an original domain name is modified according to the configuration information, wherein if the new domain name needs to be monitored, monitoring of the new domain name is completed, and if the configuration information of the original domain name is modified, the original configuration information is updated to the latest configuration information;
and the routing execution unit periodically monitors and checks the address information at the rear end aiming at the acquired configuration information at a second preset time interval, and performs abnormal marking on an abnormal address and performs normal marking on address information recovered to be normal.
9. The system according to claim 7, wherein determining the routing address to be accessed according to the query result of the routing strategy comprises:
the route execution unit monitors the corresponding address information or domain name information according to the query result;
and the route execution unit acquires the request context information and matches an accessible routing address according to the queried routing strategy; if no corresponding routing address is matched, the route execution unit queries the default load balancing strategy and calculates the routing address to be used, so as to redirect the request to that routing address (an illustrative sketch of this matching is given after the claims).
10. The system according to any one of claims 1-9, wherein said soft load background, said soft load foreground and said route execution unit support containerized deployment and can be deployed by means of a Kubernetes Deployment.
11. The system according to any one of claims 1-9, wherein the system has an interface to other external systems, through which the system automatically synchronizes target information with those external systems.
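
For claim 4, a minimal Go sketch of merging Ingress configuration information by domain name, port and path; the IngressEntry and RouteKey types and the function name are illustrative assumptions, not part of the claimed system:

```go
package softload

// IngressEntry is a simplified, hypothetical view of the Ingress
// configuration information reported by one service cluster.
type IngressEntry struct {
	Cluster  string   // reporting service cluster
	Domain   string   // domain name (host)
	Port     int      // service port
	Path     string   // HTTP path prefix
	Backends []string // back-end addresses reachable in that cluster
}

// RouteKey identifies entries that share the same domain name, port and path.
type RouteKey struct {
	Domain string
	Port   int
	Path   string
}

// MergeByDomainPortPath merges the entries reported by all service clusters:
// entries with the same (domain, port, path) are managed as one route whose
// back-end list is the union of the per-cluster back-end lists.
func MergeByDomainPortPath(entries []IngressEntry) map[RouteKey][]string {
	merged := make(map[RouteKey][]string)
	for _, e := range entries {
		k := RouteKey{Domain: e.Domain, Port: e.Port, Path: e.Path}
		merged[k] = append(merged[k], e.Backends...)
	}
	return merged
}
```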
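
For claim 5, a hypothetical sketch of the service configuration data that the soft load foreground could add, delete, modify or view; every type and field name here is an assumption used only for illustration:

```go
package softload

// ServiceAppConfig illustrates the per-application part of the service
// configuration: label, back-end access information list, routing strategy,
// traffic-diversion (drainage) strategy and home (attribution) group.
type ServiceAppConfig struct {
	Labels           map[string]string // application labels
	Backends         []string          // back-end access information list
	RoutingStrategy  string            // e.g. "round-robin" or "weighted"
	DrainageStrategy string            // traffic-diversion rule, e.g. "by-header"
	HomeGroup        string            // application group the application belongs to
}

// PlatformConfig illustrates the platform-level part of the service
// configuration: authentication, permissions, batch switching and grouping
// of route execution units.
type PlatformConfig struct {
	LoginAuthEnabled  bool
	RolePermissions   map[string][]string // role -> permitted operations
	BatchSwitchGroups []string            // application groups switched as a batch
	UnitHomeGroups    map[string]string   // route execution unit -> routing group
}
```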
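
For claim 6, a minimal sketch of how a route execution unit could resolve its routing group to an application group and fetch the routing configuration strategies of that group's service applications; the map layout and all names are assumptions:

```go
package softload

// AppRoutingStrategy is a placeholder for the routing configuration
// strategy of one service application.
type AppRoutingStrategy struct {
	Application string
	Backends    []string
	Algorithm   string
}

// StrategiesForRoutingGroup resolves the application group mapped to the
// given routing group, then collects the routing configuration strategies
// of the service applications in that application group.
//   groupMap:  routing group -> application group
//   appGroups: application group -> service applications
//   byApp:     service application -> its routing configuration strategy
func StrategiesForRoutingGroup(
	routingGroup string,
	groupMap map[string]string,
	appGroups map[string][]string,
	byApp map[string]AppRoutingStrategy,
) []AppRoutingStrategy {
	appGroup, ok := groupMap[routingGroup]
	if !ok {
		return nil // routing group not mapped to any application group
	}
	var out []AppRoutingStrategy
	for _, app := range appGroups[appGroup] {
		if s, ok := byApp[app]; ok {
			out = append(out, s)
		}
	}
	return out
}
```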
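
For claim 8, a minimal sketch of the two periodic loops: one refreshes the routing strategy configuration at the first preset interval, the other probes back-end addresses at the second preset interval and marks them normal or abnormal. The fetch/apply callbacks and the plain TCP probe are assumptions standing in for the real soft load background API and health check:

```go
package softload

import (
	"net"
	"sync"
	"time"
)

// BackendMarks records whether each back-end address is currently
// marked normal or abnormal.
type BackendMarks struct {
	mu    sync.Mutex
	marks map[string]string // address -> "normal" | "abnormal"
}

// RefreshLoop re-reads the routing strategy configuration at the first
// preset interval and hands it to apply, which is expected to start
// monitoring newly added domain names and update modified ones.
func RefreshLoop(interval time.Duration, fetch func() map[string][]string, apply func(map[string][]string)) {
	for range time.Tick(interval) {
		apply(fetch())
	}
}

// HealthLoop probes every back-end address in the current configuration
// at the second preset interval, marking unreachable addresses abnormal
// and recovered addresses normal.
func HealthLoop(interval time.Duration, current func() []string, m *BackendMarks) {
	for range time.Tick(interval) {
		for _, addr := range current() {
			mark := "normal"
			if conn, err := net.DialTimeout("tcp", addr, 2*time.Second); err != nil {
				mark = "abnormal"
			} else {
				conn.Close()
			}
			m.mu.Lock()
			if m.marks == nil {
				m.marks = make(map[string]string)
			}
			m.marks[addr] = mark
			m.mu.Unlock()
		}
	}
}
```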
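
For claim 9, a minimal sketch of matching the request context against the queried routing strategy and falling back to a default load balancing strategy when no route matches; reducing the request context to a path and a client key, and hashing the client key for the default choice, are both assumptions:

```go
package softload

import (
	"hash/fnv"
	"strings"
)

// MatchedRoute is a hypothetical matched routing rule: a path prefix and
// the currently usable back-end routing addresses behind it.
type MatchedRoute struct {
	PathPrefix string
	Backends   []string
}

// PickAddress matches the request (path, clientKey) against the queried
// routing strategy; if no rule matches, it falls back to the default
// strategy, here modelled as hashing the client key over defaultBackends,
// and returns the routing address the request should be redirected to.
func PickAddress(path, clientKey string, routes []MatchedRoute, defaultBackends []string) string {
	for _, r := range routes {
		if len(r.Backends) > 0 && strings.HasPrefix(path, r.PathPrefix) {
			return r.Backends[hashKey(clientKey)%uint32(len(r.Backends))]
		}
	}
	if len(defaultBackends) == 0 {
		return "" // nothing to route to
	}
	return defaultBackends[hashKey(clientKey)%uint32(len(defaultBackends))]
}

func hashKey(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}
```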
CN202210674605.3A 2022-06-14 2022-06-14 Cross-cluster load balancing system Active CN115086330B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210674605.3A CN115086330B (en) 2022-06-14 2022-06-14 Cross-cluster load balancing system

Publications (2)

Publication Number Publication Date
CN115086330A true CN115086330A (en) 2022-09-20
CN115086330B CN115086330B (en) 2024-03-01

Family

ID=83252028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210674605.3A Active CN115086330B (en) 2022-06-14 2022-06-14 Cross-cluster load balancing system

Country Status (1)

Country Link
CN (1) CN115086330B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111343037A (en) * 2019-08-19 2020-06-26 海通证券股份有限公司 Flow monitoring method and device for cloud platform load according to application, and computer equipment
CN112995273A (en) * 2021-01-28 2021-06-18 腾讯科技(深圳)有限公司 Network call-through scheme generation method and device, computer equipment and storage medium
CN113094182A (en) * 2021-05-18 2021-07-09 联想(北京)有限公司 Load balancing processing method and device for service and cloud server
WO2021205212A1 (en) * 2020-04-08 2021-10-14 Telefonaktiebolaget Lm Ericsson (Publ) Traffic controller for cloud native deployment
CN113572831A (en) * 2021-07-21 2021-10-29 重庆星环人工智能科技研究院有限公司 Communication method between Kubernetes clusters, computer equipment and medium
US20220038311A1 (en) * 2020-07-30 2022-02-03 Vmware, Inc. Hierarchical networking for nested container clusters

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115242877A (en) * 2022-09-21 2022-10-25 之江实验室 Spark collaborative calculation and operation method and device for multiple K8s clusters
CN115242877B (en) * 2022-09-21 2023-01-24 之江实验室 Spark collaborative computing and operating method and device for multiple K8s clusters
US11954525B1 (en) 2022-09-21 2024-04-09 Zhejiang Lab Method and apparatus of executing collaborative job for spark faced to multiple K8s clusters
CN115883258A (en) * 2023-02-15 2023-03-31 北京微步在线科技有限公司 IP information processing method, device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
KR102604082B1 (en) Multi-cluster ingress
CN109032755B (en) Container service hosting system and method for providing container service
CN109618005B (en) Method for calling server and proxy server
CN107070972B (en) Distributed file processing method and device
CN115086330B (en) Cross-cluster load balancing system
US8200789B2 (en) Method, system and program product for automated topology formation in dynamic distributed environments
CN111464592A (en) Load balancing method, device, equipment and storage medium based on microservice
CN107493191B (en) Cluster node and self-scheduling container cluster system
CN113778623B (en) Resource processing method and device, electronic equipment and storage medium
CN114070822B (en) Kubernetes Overlay IP address management method
CN110391940B (en) Service address response method, device, system, equipment and storage medium
CN109525590B (en) Data packet transmission method and device
CN112698992B (en) Disaster recovery management method and related device for cloud cluster
CN113079098B (en) Method, device, equipment and computer readable medium for updating route
CN101534255A (en) A method and device for realizing oriented processing of certain request
CN112655185B (en) Apparatus, method and storage medium for service allocation in a software defined network
CN109005071B (en) Decision deployment method and scheduling equipment
KR20140097717A (en) Resource Dependency Service Method for M2M Resource Management
CN113709054A (en) Keepallved-based LVS (Low Voltage differential Signaling) system deployment adjusting method, device and system
CN112910796A (en) Traffic management method, apparatus, device, storage medium, and program product
US20200293386A1 (en) Messaging abstraction layer for integration with message oriented middleware platforms
CN115378993B (en) Method and system for supporting namespace-aware service registration and discovery
CN111404980B (en) Data storage method and object storage system
CN108965494A (en) Data transmission method and device in data system
CN114615268B (en) Service network, monitoring node, container node and equipment based on Kubernetes cluster

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant