CN113326100B - Cluster management method, device, equipment and computer storage medium - Google Patents

Cluster management method, device, equipment and computer storage medium

Info

Publication number
CN113326100B
CN113326100B (application number CN202110722748.2A)
Authority
CN
China
Prior art keywords
node
nodes
cluster
pool
control node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110722748.2A
Other languages
Chinese (zh)
Other versions
CN113326100A (en)
Inventor
洪亚苹
杨旭荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sangfor Technologies Co Ltd
Original Assignee
Sangfor Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sangfor Technologies Co Ltd
Priority to CN202110722748.2A
Publication of CN113326100A
Application granted
Publication of CN113326100B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/4557 Distribution of virtual machine instances; Migration and load balancing
    • G06F2009/45595 Network integration; Enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Hardware Redundancy (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

An embodiment of the present application discloses a cluster management method, apparatus, device, and computer storage medium. The method comprises the following steps: a balancing scheduler evenly distributes a plurality of acquired service processing requests to target nodes in a stateless node pool of the cluster; each target node processes the service processing requests it receives; and when a target node determines, while processing a received service processing request, that the common data of the cluster needs to be accessed, it accesses a control node in a stateful node pool of the cluster to complete the processing of the request.

Description

Cluster management method, device, equipment and computer storage medium
Technical Field
Embodiments of the present application relate to the field of Internet technology, and in particular to a cluster management method, apparatus, device, and computer storage medium.
Background
In existing cluster management methods, when a node goes offline, a new available cluster is formed by voting; when the number of nodes is large, the election mechanism takes a long time to converge, so service failover is slow and service is affected. When processing service requests, all client requests are first sent to a master node, which load-balances and distributes them to the other nodes in the cluster for processing. The master node therefore bears heavy pressure, and when the volume of concurrent requests is high, a single node cannot carry the load.
Disclosure of Invention
In view of this, embodiments of the present application provide a cluster management method, apparatus, device, and computer storage medium.
The technical scheme of the embodiment of the application is realized as follows:
In a first aspect, an embodiment of the present application provides a cluster management method including: a balancing scheduler evenly distributes a plurality of acquired service processing requests to target nodes in a stateless node pool of the cluster; each target node processes the service processing requests it receives; and when a target node determines, while processing a received service processing request, that the common data of the cluster needs to be accessed, it accesses a control node in a stateful node pool of the cluster to complete the processing of the request.
In a second aspect, an embodiment of the present application provides a cluster apparatus, the apparatus including: a balancing scheduler, configured to distribute a plurality of acquired service processing requests evenly to target nodes in a stateless node pool of the cluster; each target node, configured to process the service processing requests it receives; and each target node, further configured to, upon determining while processing a received service processing request that the common data of the cluster needs to be accessed, access a control node in a stateful node pool of the cluster to complete the processing of the request.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory stores a computer program executable on the processor, and where the processor implements the method described above when executing the program.
In a fourth aspect, embodiments of the present application provide a computer storage medium storing executable instructions for causing a processor to perform the above-described method.
In the embodiments of the present application, balanced scheduling is implemented by the balancing scheduler, which effectively avoids excessive pressure on any single node in the cluster. By dividing the cluster nodes into a stateless node pool and a stateful node pool, with the target nodes in the stateless node pool processing the service processing requests and the control nodes in the stateful node pool storing the common data of the cluster, the processing efficiency of service processing requests is effectively improved.
Drawings
Fig. 1A is a schematic diagram of a system architecture for cluster management according to an embodiment of the present application;
fig. 1B is a flowchart of a cluster management method according to an embodiment of the present application;
fig. 1C is a schematic diagram of cluster nodes according to an embodiment of the present application;
fig. 1D is a flowchart of the balancing scheduler scheduling processing nodes in the stateless node pool according to an embodiment of the present application;
fig. 1E is a flowchart of the balancing scheduler's handling when a processing node in the stateless node pool fails according to an embodiment of the present application;
fig. 2A is a schematic diagram of a configuration interface according to an embodiment of the present application;
fig. 2B is a schematic diagram of a cluster service distribution according to an embodiment of the present application;
fig. 2C is a schematic diagram of stateless node pool nodes and stateful node pool nodes according to an embodiment of the present application;
fig. 2D is a flowchart of switching the data storage service when a control node in the stateful node pool fails according to an embodiment of the present application;
fig. 2E is a schematic diagram of the processing flow when a control node in the stateful node pool fails according to an embodiment of the present application;
fig. 2F is a schematic diagram of extending the nodes of the stateful node pool according to an embodiment of the present application;
fig. 3A is a flowchart of a scenario in which a user accesses a cloud platform according to an embodiment of the present application;
fig. 3B is a flowchart of a method for processing an access request according to an embodiment of the present application;
fig. 3C is a flowchart of a method for the proxy client to forward service requests when the control node changes according to an embodiment of the present application;
fig. 3D is a flowchart of a method for restoring the stateful node pool when the master control node is detected to be offline according to an embodiment of the present application;
fig. 4 is a schematic diagram of the composition of a cluster management apparatus according to an embodiment of the present application;
fig. 5 is a schematic diagram of a hardware entity of an electronic device according to an embodiment of the present application.
Detailed Description
To make the purposes, technical solutions, and advantages of the embodiments of the present application clearer, the specific technical solutions of the present application are described in further detail below with reference to the accompanying drawings. The following examples serve to illustrate the present application but are not intended to limit its scope.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the terms "first", "second", "third", and the like are merely used to distinguish similar objects and do not imply a specific ordering of the objects. It should be understood that, where permitted, "first", "second", and "third" may be interchanged in a specific order or sequence, so that the embodiments of the application described herein can be practiced in orders other than those illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
Before further describing embodiments of the present application in detail, the terms and expressions that are referred to in the embodiments of the present application are described, and are suitable for the following explanation.
Virtual Internet Protocol (IP) address: a virtual IP is generally used in high-availability service scenarios. When the primary server fails and can no longer provide service, the virtual IP is dynamically switched to a standby server, so that users do not perceive the failure.
HAProxy: provides high availability, load balancing, and application proxying based on the Transmission Control Protocol (TCP) and the Hypertext Transfer Protocol (HTTP).
Pacemaker: a cluster resource manager. It uses the messaging and membership-management capabilities provided by the cluster infrastructure (Heartbeat or Corosync) to detect and recover from node- and resource-level failures, so as to maximize the availability of cluster services.
Corosync: part of the cluster management suite; it collects information such as heartbeats between nodes and provides node availability to the upper layers.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
It should be understood that some embodiments described herein are merely used to explain the technical solutions of the present application, and are not used to limit the technical scope of the present application.
Fig. 1A is a schematic diagram of a system architecture for cluster management provided in an embodiment of the present application. As shown in fig. 1A, the architecture includes a balancing scheduler 11 and a cluster node pool 12, where the cluster node pool 12 includes a stateless node pool 121 and a stateful node pool 122.
The balancing scheduler 11 provides the access entry, load-balances access requests onto the nodes in the stateless node pool, and removes abnormal nodes from scheduling according to the load information and service state reported from within the stateless node pool.
The stateless node pool 121 provides access services for the upper-layer business and stores no data while doing so; it may include processing node 1, processing node 2, processing node 3, and processing node 4. Processing nodes can be destroyed or created at will, and destroying a processing node loses no user data; while serving access requests, different processing nodes can be switched freely without affecting the user's access service. In practice, the nodes of the stateless node pool can be chosen by operation speed, for example, nodes that meet the required operation speed are placed in the stateless node pool as actually needed.
The stateful node pool 122 runs the data storage services and may include control node 1, control node 2, and control node 3. The control nodes store the common data of the cluster and cannot be destroyed at will. In practice, the nodes of the stateful node pool can be chosen by storage performance and operation speed, for example, nodes that meet both requirements are placed in the stateful node pool as actually needed.
As shown in fig. 1B, the method for cluster management provided in the embodiment of the present application includes:
step S110, the balancing scheduler distributes the acquired multiple service processing requests to target nodes in a stateless node pool of the cluster in a balanced manner;
Load balancing by the balancing scheduler means that when one node cannot support the existing access volume, multiple nodes can be deployed to form a cluster, and service processing requests are then distributed to each node of the cluster through load balancing, so that the nodes share the request pressure together.
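As an illustrative sketch of this idea (not the patent's implementation; a plain round-robin policy stands in for the load-aware balancing described later, and all names are hypothetical):

```python
from itertools import cycle

class BalancingScheduler:
    """Distributes incoming requests evenly across a stateless node pool."""

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self._rr = cycle(self.nodes)  # simple round-robin policy

    def dispatch(self, requests):
        """Return a mapping of node -> list of requests assigned to it."""
        assignment = {n: [] for n in self.nodes}
        for req in requests:
            assignment[next(self._rr)].append(req)
        return assignment

scheduler = BalancingScheduler(["node1", "node2", "node3"])
plan = scheduler.dispatch([f"req{i}" for i in range(6)])
# each of the three nodes receives two of the six requests
```

In a real deployment the scheduler would weight its choice by the reported load and service state of each node rather than rotating blindly.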
In some embodiments, as shown in fig. 1A, the balancing scheduler 11 provides the access entry through which users access the cluster and load-balances the users' access requests onto target nodes (processing nodes) in the stateless node pool 121.
Fig. 1C is a schematic diagram of cluster nodes according to an embodiment of the present application. As shown in fig. 1C, the diagram includes the nodes in the stateful node pool (control node 1, control node 2, and control node 3) and the nodes in the stateless node pool (processing node 1, processing node 2, processing node 3, ..., processing node n).
The nodes in the stateful node pool (control node 1, control node 2, and control node 3) are a specified number of nodes selected from the whole cluster to run the storage service for the common data. Compared with the prior art, the number of nodes in the stateful node pool is therefore small, and re-election converges quickly when a node fails.
The nodes in the stateless node pool (processing node 1, processing node 2, processing node 3, ..., processing node n) are all the cluster nodes other than those in the stateful node pool.
Step S120, each target node processes the received service processing request;
In some embodiments, the target node does not store data while providing access services. When processing a received service processing request does not require access to the common data of the cluster, the target node can complete the processing on its own.
Step S130, when a target node determines, while processing a received service processing request, that the common data of the cluster needs to be accessed, it accesses a control node in the stateful node pool of the cluster to complete the processing of the service processing request.
In some embodiments, as shown in fig. 1A, the control nodes in the stateful node pool are configured to store the common data of the cluster; when a target node in the stateless node pool determines that the common data needs to be accessed, it can access a control node to complete the processing of the service processing request.
In the embodiments of the present application, balanced scheduling is implemented by the balancing scheduler, which effectively avoids excessive pressure on any single node in the cluster. By dividing the cluster nodes into a stateless node pool and a stateful node pool, with the target nodes in the stateless node pool processing the service processing requests and the control nodes in the stateful node pool storing the common data of the cluster, the processing efficiency of service processing requests is effectively improved.
The step S110 "the balancing scheduler distributes the acquired plurality of service processing requests to the target nodes in the stateless node pool of the cluster in a balanced manner" may be implemented by:
Step S1101, the management component of the cluster configures a virtual Internet protocol address on the balancing scheduler to provide an access entry for service processing requests;
in some embodiments, a cluster may provide a fixed external access entry, so that node changes or Internet protocol address modifications inside the cluster do not change the external request entry. The virtual Internet protocol address may be configured on the balancing scheduler.
Step S1102, the balancing scheduler acquires load information and service state information of each node in the stateless node pool;
in some embodiments, the load information of a node may reflect its resource consumption; metrics include the processing power of the central processing unit (CPU), CPU utilization, the length of the CPU's ready queue, available disk and memory space, process response time, and so on.
The service state information may indicate whether a node is available: it reports a failure when the node has failed, and availability otherwise.
The balancing scheduler may obtain the load information and service state information of each node in the stateless node pool.
In step S1103, the balancing scheduler distributes the service processing requests to the target nodes in the stateless node pool in a balanced manner according to the load information and the service state information of each node in the stateless node pool.
In some embodiments, fig. 1D is a flowchart of the balancing scheduler scheduling the processing nodes in the stateless node pool according to an embodiment of the present application. The scheduling flow includes the following steps:
step 1, the balancing scheduler configures the virtual IP;
step 2, the balancing scheduler configures the processing nodes in the stateless node pool into its cluster node list;
step 3, the processing nodes in the cluster node list periodically report their own load information and service state information to the balancing scheduler;
step 4, according to the load and service state periodically reported by the processing nodes, the balancing scheduler distributes the user's access requests to target nodes in the stateless node pool in a balanced manner.
In some embodiments, fig. 1E is a flowchart of the balancing scheduler's handling when a processing node in the stateless node pool fails. The flow includes:
step 1, when processing node 1 in the stateless node pool fails or its load is too high, processing node 1 stops reporting data to the balancing scheduler;
step 2, the balancing scheduler stops distributing access requests to processing node 1;
step 3, the balancing scheduler distributes the access requests to the other available processing nodes (target nodes) in the stateless node pool, an available processing node being one that reports a normal state to the balancing scheduler.
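The failure handling above, where a node that stops reporting is silently dropped from scheduling, can be sketched as a report registry with a timeout; the timeout value and all names are illustrative assumptions, not the patent's implementation:

```python
class NodeRegistry:
    """Tracks the last report time of each processing node and
    excludes nodes that have stopped reporting (assumed failed)."""

    def __init__(self, timeout=3.0):
        self.timeout = timeout      # seconds without a report => excluded
        self.last_report = {}       # node -> timestamp of its last report

    def report(self, node, now):
        """Called when a node reports its load and service state."""
        self.last_report[node] = now

    def available_nodes(self, now):
        """Nodes still eligible to receive access requests."""
        return [n for n, t in self.last_report.items()
                if now - t <= self.timeout]

reg = NodeRegistry(timeout=3.0)
reg.report("processing_node_1", now=0.0)
reg.report("processing_node_2", now=0.0)
reg.report("processing_node_2", now=5.0)   # node 1 has stopped reporting
alive = reg.available_nodes(now=6.0)       # only node 2 remains schedulable
```

Note that a node never needs to announce its failure: simply ceasing to report is enough for the scheduler to stop routing requests to it.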
In the embodiments of the present application, because the virtual Internet protocol address is configured on the balancing scheduler, a node failure inside the cluster does not cause the virtual Internet protocol address to drift, so the service recovery time is short. The balancing scheduler can evenly distribute multiple service processing requests to the target nodes in the stateless node pool according to the load information and service state information of each node, effectively avoiding excessive request-processing pressure on any single node.
The step S1103 "the balancing scheduler distributes the plurality of service processing requests to the target node in the stateless node pool according to the load information and the service state information of each node in the stateless node pool" may be implemented by:
Step S1121, the balancing scheduler determines a node in the stateless node pool, where the service state information is non-failure, as a node to be allocated;
In some embodiments, a processing node whose service state information is non-failed is an available processing node, so the balancing scheduler determines these available processing nodes as the nodes to be allocated, which are capable of fulfilling service processing requests.
Step S1122, the balancing scheduler determines a node to be allocated, where the load information meets the load requirement, as the target node;
In some embodiments, the load requirement may be set according to the actual situation; a node with excessive load does not meet the load requirement and is excluded, while a to-be-allocated node whose load meets the requirement may be determined as a target node.
Step S1123, the balancing scheduler distributes the service processing requests to the target node in a balanced manner.
In the embodiments of the present application, the balancing scheduler determines the nodes in the stateless node pool whose service state information is non-failed as the nodes to be allocated, and determines the to-be-allocated nodes whose load information meets the load requirement as the target nodes, so that the resulting target nodes can effectively complete the service processing requests.
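Steps S1121 to S1123 amount to a two-stage filter followed by balanced distribution; a minimal sketch, with the field names and load threshold assumed for illustration rather than taken from the patent:

```python
def select_target_nodes(nodes, max_load=0.8):
    """Step S1121: keep non-failed nodes (nodes to be allocated);
    step S1122: keep those whose load meets the requirement;
    the survivors are the target nodes of step S1123."""
    to_be_allocated = [n for n in nodes if not n["failed"]]
    return [n["name"] for n in to_be_allocated if n["load"] <= max_load]

pool = [
    {"name": "node1", "failed": False, "load": 0.35},
    {"name": "node2", "failed": True,  "load": 0.10},  # failed: excluded
    {"name": "node3", "failed": False, "load": 0.95},  # overloaded: excluded
    {"name": "node4", "failed": False, "load": 0.60},
]
targets = select_target_nodes(pool)  # only node1 and node4 survive
```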
An embodiment of the present application provides a method for determining the master control node, the standby control node, and the slave control nodes among the control nodes, including the following steps:
step 201, a management component of the cluster acquires a preset total number of control nodes;
step 202, the management component determines the number of the slave control nodes according to the total number of the control nodes;
In some embodiments, one master control node, one standby control node, and a plurality of slave control nodes may be provided; once the preset total number of control nodes is determined, the number of slave control nodes is obtained by subtracting the master control node and the standby control node from the total.
Step 203, the management component obtains a performance index of each node of the cluster, wherein the performance index comprises a storage performance of the node and an operation speed of the node;
Step 204, among the nodes whose storage performance meets the storage condition, the management component determines a node whose operation speed meets a first operation condition as the master control node, a node whose operation speed meets a second operation condition as the standby control node, and nodes whose operation speed meets a third operation condition and whose number meets a number threshold as the slave control nodes, the number threshold being determined according to the number of slave control nodes.
In the embodiments of the present application, the master control node, the standby control node, and the slave control nodes may be determined according to the storage performance and operation speed of the nodes: the master control node provides the common data of the cluster, the standby control node backs up the common data of the cluster, and a slave control node replaces the standby control node when the standby control node fails. In this way, it is ensured that valid common data of the cluster is still available when a service access that requires the common data is processed.
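Steps 203 and 204 can be sketched as ranking the storage-qualified nodes by operation speed; the metric names and thresholds below are assumptions for illustration, not the patent's actual conditions:

```python
def assign_control_roles(nodes, num_slaves, min_storage=100):
    """Among nodes whose storage performance meets the storage condition,
    the fastest node becomes the master control node, the next fastest
    the standby, and the following `num_slaves` nodes become slaves."""
    eligible = [n for n in nodes if n["storage"] >= min_storage]
    ranked = sorted(eligible, key=lambda n: n["speed"], reverse=True)
    return {
        "master": ranked[0]["name"],
        "standby": ranked[1]["name"],
        "slaves": [n["name"] for n in ranked[2:2 + num_slaves]],
    }

nodes = [
    {"name": "a", "storage": 200, "speed": 9},
    {"name": "b", "storage": 50,  "speed": 10},  # storage too low: excluded
    {"name": "c", "storage": 150, "speed": 7},
    {"name": "d", "storage": 120, "speed": 8},
]
roles = assign_control_roles(nodes, num_slaves=1)
# fastest eligible node "a" is master, "d" standby, "c" slave
```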
An embodiment of the present application provides another method for determining the master control node, the standby control node, and the slave control nodes among the control nodes, including the following steps:
step 210, the management component presents a configuration interface, wherein the configuration interface is used for configuring the master control node, the standby control node and the slave control node;
In some embodiments, as in the configuration interface shown in fig. 2A, a user may configure the master control node, the standby control node, and the slave control nodes by clicking the add-node control 21.
Step 211, the management component receives configuration operations on the master control node, the standby control node and the slave control node respectively based on the configuration interface;
Step 212, the management component determines one of the master control nodes, one of the backup control nodes, and at least one of the slave control nodes based on the configuration operation.
In this embodiment, a user may complete configuration of a master control node, a standby control node, and a slave control node in a configuration interface, and the management component determines, according to the configuration of the user, a master control node, a standby control node, and at least one slave control node in a cluster node pool.
The embodiment of the application provides a method for replacing a fault control node, which comprises the following steps:
Step 220, when a control node in the stateful node pool fails, the management client obtains the address of the new control node from the management component of the cluster;
in some embodiments, each node in the stateless node pool includes a management client and a proxy client. Fig. 2B is a schematic diagram of the cluster service distribution: on each processing node in the stateless node pool 121 are deployed a service IP 1211, a stateless business service 1212, a proxy client 1213, and a management client 1214; on each control node in the stateful node pool 122 are deployed a stateful business service 1221, a data storage service 1222, and a management component 1223, where:
the service IP 1211 provides the real service entry IP of a stateless node and is configured into the available node pool of the front-end balancing scheduler;
the stateless business service 1212 handles access requests but stores no data;
the proxy client 1213 forwards requests that access the common data to the master control node in the stateful node pool, so that whenever the common data needs to be accessed, the request can be completed;
the management client 1214 resets the destination IP of the proxy client when the master control node is switched, so that the proxy client can connect to the new active node;
the stateful business service 1221 provides services that operate on the common data, such as cleaning up resources and generating operation and maintenance reports;
the data storage service 1222 stores the common data of the cluster, for example in databases such as MySQL, Redis, and MongoDB;
the management component 1223 maintains the nodes in the stateful node pool; when the master control node fails, it notifies the other control nodes in the stateful node pool and reorganizes the pool.
Fig. 2C is a schematic diagram of stateless node pool nodes and stateful node pool nodes according to an embodiment of the present application; as shown in fig. 2C, each stateless node pool node includes a proxy client 1213 for accessing the data storage service of the master control node of the stateful node pool.
Step 221, the management client sends the address of the new control node to the proxy client;
step 222, the proxy client changes the address through which the common data of the cluster is accessed to the address of the new control node.
In some embodiments, the management client sends the address of the new control node to the proxy client. Fig. 2D is a flowchart of switching the data storage service when a control node in the stateful node pool fails; the flow includes:
step 1, the management client receives a cluster event notification, which informs it of the failed control node and the newly determined control node;
step 2, the management client notifies the proxy clients on the nodes in the stateless node pool and changes the proxy configuration according to the failed control node and the newly determined control node;
step 3, the proxy client disconnects from the failed control node;
step 4, the proxy client establishes a connection with the new control node.
In this embodiment of the present application, when a control node in the stateful node pool fails, the management client obtains the address of the new control node from the management component of the cluster and sends it to the proxy client, and the proxy client changes the address used to access the cluster's public data to the address of the new control node. The proxy client thus shields the stateless services from any awareness of where the public data lives, allowing application services to be programmed against the local host. When a control node fails and the data storage service migrates, the original data store becomes unavailable, but fast transfer and recovery of the failed service is achieved simply by modifying the proxy client's destination (the address of the new control node) and reconnecting.
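The handover described in steps 1 to 4 can be sketched as follows. This is a minimal illustrative model, not the patent's actual implementation; all class and method names are assumptions.

```python
# Hypothetical sketch of the management-client / proxy-client handover:
# on a cluster event naming the failed and the new control node, the
# management client repoints the proxy client and reconnects it.

class ProxyClient:
    """Forwards public-data requests to the current master control node."""

    def __init__(self, master_addr: str):
        self.master_addr = master_addr
        self.connected = True

    def disconnect(self):
        self.connected = False          # step 3: drop the failed control node

    def reconnect(self, new_addr: str):
        self.master_addr = new_addr     # step 222: point at the new control node
        self.connected = True           # step 4: establish the new connection


class ManagementClient:
    """Receives cluster event notifications and reconfigures the proxy."""

    def __init__(self, proxy: ProxyClient):
        self.proxy = proxy

    def on_cluster_event(self, failed_addr: str, new_addr: str):
        # step 1: the notification names the failed and the new control node
        if self.proxy.master_addr == failed_addr:
            self.proxy.disconnect()         # step 3
            self.proxy.reconnect(new_addr)  # steps 221/222 and step 4


proxy = ProxyClient("10.0.0.1:3306")
mgmt = ManagementClient(proxy)
mgmt.on_cluster_event(failed_addr="10.0.0.1:3306", new_addr="10.0.0.2:3306")
print(proxy.master_addr)  # -> 10.0.0.2:3306
```

The stateless business services never see the address change; only the proxy client's destination moves.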
Step 220, "in the case of a failure of a control node in the stateful node pool, the management client obtains the address of the new control node from the management component of the cluster", may be implemented as follows:
step 2201, when the nodes in the stateful node pool determine by voting that the master control node has failed, the management component of the cluster switches the standby control node to become the new master control node;
in a distributed system, a voting mechanism can be relied upon to determine whether the cluster as a whole can continue to function. When a node is isolated or fails, each remaining node sends a vote to the other nodes of the cluster according to whether it still detects heartbeat information from that node, and the failure of the node is determined from the number of votes, which also determines which nodes may continue to represent the cluster. The nodes that can continue to work on behalf of the cluster are called the majority, i.e., the party holding more than half of the total votes; the party holding half or fewer of the total votes is called the minority.
Assume the cluster consists of three nodes A, B, and C. After node A fails or becomes network-isolated from B and C, which nodes should represent the cluster? If A were to represent the cluster, the entire cluster would be unavailable; if B and C represent the cluster, cluster services remain available. When A fails or is isolated, B sends its heartbeat-detection result for A to C, and C likewise sends its heartbeat-detection result for A to B. A is thus voted as failed with 2 votes out of the 3 total votes in the cluster; since this exceeds half the votes, A is considered failed and no longer represents the cluster. The remaining nodes B and C continue to work on behalf of the cluster, and services can be transferred to these two nodes.
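The majority rule in this A/B/C example can be sketched as a one-line predicate; a minimal illustrative sketch, with names that are assumptions rather than the patent's:

```python
# A node is declared failed once strictly more than half of the cluster's
# total votes report that its heartbeat is gone.

def has_quorum(failure_votes: int, total_votes: int) -> bool:
    """True when strictly more than half of all votes report the failure."""
    return failure_votes > total_votes // 2

# Three-node cluster A/B/C: B and C both miss A's heartbeat -> 2 of 3 votes.
total = 3
print(has_quorum(2, total))  # -> True: A is considered failed

# A single reporter (e.g. a transient blip seen only by B) is 1 of 3 votes,
# which is not a majority, so A stays in the cluster.
print(has_quorum(1, total))  # -> False
```

Note the strict inequality: with 2 of 4 votes there is no majority, which is why an even split cannot evict a node.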
In some embodiments, when the nodes in the stateful node pool determine by voting that the master control node has failed, the standby control node may be switched to become the new master control node.
Step 2202, the management component determines one slave control node from the plurality of slave control nodes to be the new standby control node;
step 2203, the management component selects a new slave control node from the nodes in the stateless node pool and adds it to the stateful node pool;
step 2204, the management component sends the address of the new master control node to the management client.
Fig. 2E is a schematic flow diagram of the processing performed when a control node in the stateful node pool fails according to an embodiment of the present application; as shown in Fig. 2E, the process includes:
step 1, the cluster management component removes the failed control node from the stateful node pool;
step 2, the cluster management component notifies the management client that the control node has failed;
step 3, the cluster management component moves one node determined in the stateless node pool out of the stateless node pool;
step 4, the cluster management component adds the selected stateless node to the stateful node pool to form a new stateful node pool.
In this embodiment, when the management component of the cluster determines from the votes of the cluster's available nodes that the master control node has failed, it switches the standby control node to become the new master control node and sends the address of the new master control node to the management client. Because the number of nodes in the stateful node pool is fixed and the pool is mainly responsible for running the public-data storage service, there are fewer control nodes than in the prior art, so re-election converges quickly when the master control node fails.
The embodiment of the application provides an implementation of adding or deleting nodes in the stateless node pool, which comprises the following steps:
step 230, the management component of the cluster presents a configuration interface, wherein the configuration interface is used for adding or deleting nodes in the stateless node pool;
in some embodiments, in the configuration interface shown in Fig. 2A, the user may add or delete nodes in the stateless node pool by clicking the add-node control 21.
Step 231, the management component receives an add-node or delete-node operation via the configuration interface;
step 232, based on the add-node or delete-node operation, the management component adds or removes nodes in the stateless node pool, thereby expanding or reducing the management scale.
Fig. 2F is a schematic diagram of expanding the nodes in a stateless node pool according to an embodiment of the present application. As shown in Fig. 2F, the diagram includes the stateless node pool 22 before expansion and the stateless node pool 23 after expansion, where
the stateless node pool 22 before expansion includes processing nodes 1, 2 and 3, each managing 1000 resources; these nodes handle all external service requests.
The expanded stateless node pool 23 includes processing nodes 1, 2, 3, 4 and 5, each managing 1000 resources, where processing nodes 4 and 5 are the newly added processing nodes.
As can be seen from Fig. 2F, the processing nodes in the stateless node pool can be added or removed at will for horizontal expansion, and they do not participate in the voting of the cluster's available nodes; therefore, even with many processing nodes, the convergence time of the election mechanism is unaffected. Once the number of processing nodes is scaled up, large-scale resource management can be achieved.
In the embodiment of the application, the processing nodes in the stateless node pool can be added or removed at will, thereby expanding or reducing the management scale.
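The scaling in Fig. 2F can be sketched as follows. This is an illustrative model only; the pool class, node names, and the 1000-resources-per-node figure (taken from Fig. 2F) are assumptions about the example, not the patent's implementation.

```python
# Stateless processing nodes hold no public data and cast no votes, so the
# pool can grow or shrink without triggering any re-election.

RESOURCES_PER_NODE = 1000   # per Fig. 2F, each processing node manages 1000 resources

class StatelessPool:
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def add_node(self, name: str):
        self.nodes.append(name)        # no re-election: stateless nodes don't vote

    def remove_node(self, name: str):
        self.nodes.remove(name)

    def capacity(self) -> int:
        return len(self.nodes) * RESOURCES_PER_NODE

pool = StatelessPool(["node1", "node2", "node3"])   # Fig. 2F, pool 22
print(pool.capacity())                              # -> 3000

pool.add_node("node4")                              # Fig. 2F, pool 23
pool.add_node("node5")
print(pool.capacity())                              # -> 5000
```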
The problems with cluster management in the prior art are as follows:
(1) Each node is deployed with the Pacemaker and Corosync cluster components. When a node goes offline, the nodes must vote to form a new available cluster; when the number of nodes is large, the election mechanism takes a long time to converge, so service transfer is slower and services are affected. Consequently, arbitrary expansion of the nodes cannot be supported, and the management scale cannot grow.
(2) When the master control node goes offline, a virtual IP drift is triggered: the virtual IP must be reset on another node, and the adjacent gateway must be notified to update its address resolution protocol information because the physical address has changed. Recovery takes a period of time, so services are affected.
(3) When client requests pass through a proxy, all client requests are first sent to the master node, which performs load balancing and distributes them to the other nodes in the cluster for processing. The master node is therefore under high pressure, and when the volume of concurrent requests is high, a single node may be unable to carry the load, so the system cannot scale.
Fig. 3A is a schematic flow chart of a scenario in which a user accesses a cloud platform according to an embodiment of the present application; as shown in Fig. 3A, the flow includes the following steps:
step S301, when a user accesses the cluster from the public network, the balanced scheduler distributes the access request evenly across the stateless nodes in the cluster;
step S302, all business services on the stateless nodes access the stateful node hosting the public data storage service through the proxy client.
In the embodiment of the application, the balanced scheduler provides the access entrance and achieves balanced scheduling. This solves the prior-art problem that, when client requests pass through a haproxy proxy, all client requests are first sent to the master node, which then load-balances and distributes them to the other nodes in the cluster, placing the master node under high pressure so that, at a high volume of concurrent requests, a single node may be unable to carry the load and the system cannot scale.
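The access path of steps S301 and S302 can be sketched as follows. Round-robin is used here as one simple balancing policy for illustration; the class and node names are assumptions, not the patent's implementation.

```python
# A balanced scheduler spreads public-network requests across the stateless
# nodes directly, instead of funnelling everything through a master node.

import itertools

class BalancedScheduler:
    """Round-robin dispatch across the stateless node pool (step S301)."""

    def __init__(self, stateless_nodes):
        self._cycle = itertools.cycle(stateless_nodes)

    def dispatch(self, request):
        node = next(self._cycle)       # each node receives an equal share
        return node, request

scheduler = BalancedScheduler(["node1", "node2", "node3"])
targets = [scheduler.dispatch(f"req{i}")[0] for i in range(6)]
print(targets)  # -> ['node1', 'node2', 'node3', 'node1', 'node2', 'node3']
```

Because no single node fronts all traffic, adding stateless nodes directly raises the concurrent request volume the cluster can absorb.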
The cluster nodes are divided into stateless nodes and stateful nodes, so the stateless nodes can be added or removed at will to expand the management scale horizontally, while a small number of nodes are selected to form the stateful nodes, which reduces the election convergence time for the available nodes of a new cluster when a node fails and enables fast transfer and recovery of the failed service. This solves the prior-art problem that each node is deployed with the Pacemaker and Corosync cluster components, so that when a node goes offline the nodes must vote to form a new available cluster, and with many nodes the election mechanism converges slowly, making service transfer slower, affecting services, and preventing arbitrary node expansion and growth of the management scale.
Fig. 3B is a flowchart of a method for processing an access request according to an embodiment of the present application; as shown in Fig. 3B, the method includes:
step S311, the stateless service of a target node in the stateless node pool receives an external access request;
step S312, the target node in the stateless node pool determines whether the access request needs to access the public data;
if it is determined that the public data does not need to be accessed, the process goes to step S314; if it is determined that the public data needs to be accessed, the process goes to step S313.
Step S313, when it is determined that the public data needs to be accessed, the proxy client of the target node in the stateless node pool forwards the access request;
step S314, the proxy client of the target node in the stateless node pool accesses the data storage service of the stateful node pool;
step S315, the access request is processed.
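The branch at step S312 can be sketched as follows: a request that touches the cluster's public data goes through the proxy client to the stateful pool, while a purely local request is handled on the stateless node itself. The field and function names are illustrative assumptions.

```python
# Hypothetical sketch of the Fig. 3B decision: route by whether the request
# needs the cluster's public data.

def handle_request(request: dict, proxy_forward, local_handler):
    # step S312: does this request need the cluster's public data?
    if request.get("needs_public_data"):
        # steps S313/S314: forward via the proxy client to the stateful pool's
        # data storage service
        return proxy_forward(request)
    # otherwise process locally on the stateless node (step S315)
    return local_handler(request)

forwarded = handle_request({"needs_public_data": True, "op": "report"},
                           proxy_forward=lambda r: ("stateful-pool", r["op"]),
                           local_handler=lambda r: ("local", r["op"]))
print(forwarded)  # -> ('stateful-pool', 'report')
```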
In the embodiment of the application, the proxy client shields the stateless services from any awareness of the public data, allowing application services to be programmed against the local host, and the processing logic for an external request dispatched by the scheduler resides entirely within the target node, which simplifies the overall service logic architecture.
Fig. 3C shows a method by which the proxy client forwards service requests when the control node changes, i.e., a refinement of step S313 in the above embodiment ("when it is determined that the public data needs to be accessed, the proxy client of the target node in the stateless node pool forwards the access request") for the case in which a control node change message sent by the management component is received. The method includes the following steps:
Step S3131, the proxy client receives a control node change message, where the control node change message identifies the failed control node and the determined new control node;
step S3132, the proxy client blocks external service requests so that they are held rather than forwarded;
step S3133, the proxy client changes the destination address for forwarding service requests to the address of the new control node;
step S3134, the proxy client resumes and retries forwarding the held requests.
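Steps S3131 to S3134 can be sketched as a hold-repoint-retry cycle; a minimal illustrative model with assumed names, not the patent's implementation:

```python
# On a control-node change the proxy holds incoming requests, repoints its
# destination, then retries the held requests against the new control node.

from collections import deque

class FailoverProxy:
    def __init__(self, dest: str):
        self.dest = dest
        self.blocked = False
        self.held = deque()

    def forward(self, request):
        if self.blocked:
            self.held.append(request)          # S3132: hold, don't forward
            return None
        return (self.dest, request)

    def on_control_node_change(self, failed: str, new: str):
        self.blocked = True                    # S3131/S3132: start holding
        self.dest = new                        # S3133: repoint the destination
        self.blocked = False                   # S3134: resume...
        # ...and retry every request held during the switch
        return [self.forward(self.held.popleft()) for _ in range(len(self.held))]

proxy = FailoverProxy("10.0.0.1")
proxy.blocked = True                           # a change message is in flight
proxy.forward("read-report")                   # held during the switch
results = proxy.on_control_node_change(failed="10.0.0.1", new="10.0.0.2")
print(results)  # -> [('10.0.0.2', 'read-report')]
```

The caller that issued "read-report" never sees the failover; its request simply completes against the new control node.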
In the embodiment of the present application, when the master control node in the stateful node pool fails, the data storage service on the failed master control node may migrate and the original data store becomes unavailable; only the destination of the proxy client on each processing node in the stateless node pool needs to be modified and the connection re-established. The services inside the processing nodes of the stateless node pool are unaware of the master control node failure in the stateful node pool, and client services are unaffected.
Fig. 3D shows a method for recovering the stateful node pool when the master control node is detected to be offline; as shown in Fig. 3D, the method includes:
Step S321, the management component detects that the main control node is offline;
step S322, the management component moves the failed master control node out of the stateful node pool;
in some embodiments, the management component determines an offline master control node as a failed master control node, and moves the failed master control node out of the pool of stateful nodes.
Step S323, the management component switches the standby control node to the main control node;
step S324, the management component informs the cluster client that the main control node changes;
step S325, the management component selects a node meeting the conditions from the stateless node pool and moves it out of the stateless node pool;
step S326, the management component adds the selected stateless node to the stateful node pool to reorganize the stateful node pool;
step S327, recovering the stateful node pool.
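The recovery flow of steps S321 to S327 can be sketched as follows. The node labels and the promotion rule shown are illustrative assumptions about the example, not the patent's implementation.

```python
# The failed master is evicted, the standby is promoted, and a stateless node
# is drafted into the stateful pool so the pool regains its fixed size.

def recover_stateful_pool(stateful, stateless, failed_master, standby):
    stateful.remove(failed_master)       # S322: move the failed master out
    new_master = standby                 # S323: promote the standby node
    recruit = stateless.pop(0)           # S325: pick a qualifying stateless node
    stateful.append(recruit)             # S326: reorganize the stateful pool
    return new_master, stateful, stateless  # S327: pool recovered

stateful = ["A", "B", "C"]               # A = master, B = standby, C = slave
stateless = ["D", "E", "F"]
master, stateful, stateless = recover_stateful_pool(stateful, stateless,
                                                    failed_master="A",
                                                    standby="B")
print(master, stateful, stateless)  # -> B ['B', 'C', 'D'] ['E', 'F']
```

The stateful pool stays at three nodes after recovery, which is what keeps the re-election vote small and fast.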
In the embodiment of the application, the management component can efficiently reorganize the stateful node pool once the master control node is determined to have failed, without affecting the cluster's service processing.
Based on the foregoing embodiments, the embodiments of the present application provide a cluster management device, where the cluster management device includes each module, each module includes each sub-module, and each sub-module includes a unit, which may be implemented by a processor in an electronic device; of course, the method can also be realized by a specific logic circuit; in an implementation, the processor may be a Central Processing Unit (CPU), a Microprocessor (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
Fig. 4 is a schematic structural diagram of a cluster management device provided in an embodiment of the present application; as shown in Fig. 4, the device 400 includes:
a balancing scheduler 401, configured to distribute the acquired plurality of service processing requests evenly across target nodes in a stateless node pool of the cluster;
each of the target nodes 402 is configured to process the received service processing request;
each of the target nodes 402 is further configured to, when determining that access to the common data of the cluster is required during processing of the received service processing request, access a control node 403 in a stateful node pool of the cluster to complete processing of the service processing request.
In some embodiments, the balancing scheduler is further configured to obtain load information and service state information of each of the stateless nodes; the balancing scheduler is further configured to balance and distribute the plurality of service processing requests to the target nodes in the stateless node pool according to load information and service state information of each node in the stateless node pool.
In some embodiments, the balancing scheduler is further configured to determine a node in the stateless node pool for which the service state information is non-faulty as a node to be allocated; the balancing scheduler is further configured to determine a node to be allocated, where the load information meets a load requirement, as the target node; the balancing scheduler is further configured to distribute the plurality of service processing requests to the target node in a balanced manner.
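The two-stage filter just described (non-faulty state first, then load requirement) can be sketched as follows. The field names and the 0.8 load threshold are illustrative assumptions.

```python
# Keep only non-faulty nodes, then keep those whose load meets the
# requirement; the survivors are the target nodes for balanced dispatch.

def pick_targets(nodes, max_load=0.8):
    # nodes: list of dicts with assumed fields 'name', 'state', 'load'
    candidates = [n for n in nodes if n["state"] != "fault"]       # to-be-allocated
    return [n["name"] for n in candidates if n["load"] <= max_load]  # load requirement

nodes = [
    {"name": "node1", "state": "ok",    "load": 0.30},
    {"name": "node2", "state": "fault", "load": 0.10},   # excluded: faulty
    {"name": "node3", "state": "ok",    "load": 0.95},   # excluded: overloaded
    {"name": "node4", "state": "ok",    "load": 0.55},
]
print(pick_targets(nodes))  # -> ['node1', 'node4']
```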
In some embodiments, the control nodes in the stateful node pool include a master control node, a standby control node, and slave control nodes, and the apparatus further includes a management component of the cluster, where the management component of the cluster is configured to obtain a preset total number of control nodes; the management component is further configured to determine the number of slave control nodes according to the total number of control nodes; the management component is further configured to obtain a performance index of each node of the cluster, where the performance index includes the storage performance of the node and the operation speed of the node; the management component is further configured to, among the nodes whose storage performance satisfies the storage condition, determine a node whose operation speed satisfies a first operation condition as the master control node, determine a node whose operation speed satisfies a second operation condition as the standby control node, and determine nodes whose operation speed satisfies a third operation condition, up to a number threshold, as the slave control nodes, where the number threshold is determined according to the number of slave control nodes.
In some embodiments, the control nodes in the stateful node pool include a master control node, a standby control node, and slave control nodes, and the apparatus further includes a management component of the cluster, where the management component of the cluster is configured to present a configuration interface, where the configuration interface is configured to configure the master control node, the standby control node, and the slave control nodes; the management component is further configured to receive configuration operations on the master control node, the standby control node, and the slave control nodes based on the configuration interface; the management component is further configured to determine one master control node, one standby control node, and at least one slave control node based on the configuration operations.
In some embodiments, each node in the stateless node pool comprises a management client and a proxy client, wherein, in the event of a failure of a control node in the stateful node pool, the management client is configured to obtain the address of the new control node from a management component of the cluster; the management client is further configured to send the address of the new control node to the proxy client; the proxy client is configured to modify the address used to access the cluster's public data to the address of the new control node.
In some embodiments, the control nodes in the stateful node pool include a master control node, a standby control node and a slave control node, and the management component of the cluster is further configured to switch the standby control node to a new master control node when the master control node is determined to be faulty by voting among the nodes in the stateful node pool; the management component is further configured to determine one slave control node from a plurality of slave control nodes as a new backup control node; the management component is further configured to select a new slave control node from nodes in the stateless node pool, and add the new slave control node to the stateful node pool; the management component is further configured to send an address of the new master control node to the management client.
In some embodiments, the management component is further configured to present a configuration interface for adding or deleting nodes in the stateless node pool; the management component is further used for receiving the operation of adding or deleting the nodes based on the configuration interface; the management component is further configured to add or reduce nodes in the stateless node pool based on the operation of adding or deleting nodes, so as to implement expansion or reduction of management scale.
In some embodiments, the management component is further configured to configure a virtual internet protocol address at the balancing scheduler to provide access to the traffic handling request.
The description of the apparatus embodiments above is similar to that of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the device embodiments of the present application, please refer to the description of the method embodiments of the present application for understanding.
In the embodiment of the present application, if the cluster management method is implemented in the form of a software functional module and sold or used as a separate product, it may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied essentially, or in the part contributing to the related art, in the form of a software product stored in a storage medium, including several instructions for causing an electronic device (which may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, etc.) to perform all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a U-disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, an optical disk, or other various media capable of storing program codes. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Accordingly, embodiments of the present application provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the cluster management method provided in the above embodiments.
Correspondingly, an electronic device is provided in the embodiment of the present application. Fig. 5 is a schematic diagram of a hardware entity of the electronic device provided in an embodiment of the present application; as shown in Fig. 5, the hardware entity of the device 500 includes a memory 501 and a processor 502, the memory 501 storing a computer program executable on the processor 502, and the processor 502 implementing the steps of the cluster management method provided in the above embodiments when the program is executed.
The memory 501 is configured to store instructions and applications executable by the processor 502, and may also cache data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or processed by various modules in the processor 502 and the electronic device 500, and may be implemented by a FLASH memory (FLASH) or a random access memory (Random Access Memory, RAM).
It should be noted here that: the description of the storage medium and apparatus embodiments above is similar to that of the method embodiments described above, with similar benefits as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and the apparatus of the present application, please refer to the description of the method embodiments of the present application for understanding.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application. The foregoing embodiment numbers of the present application are merely for describing, and do not represent advantages or disadvantages of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are only illustrative; e.g., the division of the units is only one logical function division, and there may be other divisions in practice, such as: multiple units or components may be combined or may be integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; can be located in one place or distributed to a plurality of network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated in one unit; the integrated units may be implemented in hardware or in hardware plus software functional units.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the above method embodiments may be implemented by hardware related to program instructions, and the foregoing program may be stored in a computer readable storage medium, where the program, when executed, performs steps including the above method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read Only Memory (ROM), a magnetic disk or an optical disk, or the like, which can store program codes.
Alternatively, the integrated units described above may be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied essentially or in a part contributing to the related art in the form of a software product stored in a storage medium, including several instructions for causing an electronic device (which may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, etc.) to perform all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a removable storage device, a ROM, a magnetic disk, or an optical disk.
The methods disclosed in the several method embodiments provided in the present application may be arbitrarily combined without collision to obtain a new method embodiment.
The features disclosed in the several method or apparatus embodiments provided in the present application may be arbitrarily combined without conflict to obtain new method embodiments or apparatus embodiments.
The foregoing is merely an embodiment of the present application, but the protection scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered in the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A method of cluster management, the method comprising:
the balancing scheduler distributes the acquired multiple service processing requests to target nodes in a stateless node pool of the cluster in a balancing manner; the cluster nodes are divided into a stateless node pool and a stateful node pool, wherein the number of nodes in the stateless node pool is larger than the number of the designated nodes in the stateful node pool;
each target node processes the received service processing request;
And each target node accesses a control node in a stateful node pool of the cluster under the condition that the public data of the cluster needs to be accessed when processing the received service processing request, so as to complete the processing of the service processing request.
2. The method of claim 1, wherein the balancing scheduler distributes the plurality of service processing requests obtained to target nodes in a stateless node pool of the cluster in a balanced manner, comprising:
the balance scheduler obtains load information and service state information of each node in the stateless nodes;
and the balanced scheduler distributes a plurality of service processing requests to target nodes in the stateless node pool in a balanced manner according to the load information and the service state information of each node in the stateless node pool.
3. The method of claim 2, wherein the balancing scheduler distributes the plurality of service processing requests to target nodes in the stateless node pool in a balanced manner according to load information and service state information reported by nodes in the stateless node pool, comprising:
the balanced scheduler determines the nodes with the service state information being non-fault in the stateless node pool as nodes to be distributed;
The balancing scheduler determines a node to be distributed, the load information of which meets the load requirement, as the target node;
the balancing scheduler distributes a plurality of service processing requests to the target node in a balancing manner.
4. A method according to any one of claims 1 to 3, wherein the control nodes in the pool of stateful nodes comprise a master control node, a standby control node and a slave control node, the method further comprising:
the management component of the cluster obtains the preset total number of control nodes;
the management component determines the number of the slave control nodes according to the total number of the control nodes;
the management component obtains performance indexes of each node of the cluster, wherein the performance indexes comprise storage performance of the node and operation speed of the node;
among the nodes whose storage performance meets a storage condition, the management component determines a node whose operation speed meets a first operation condition as the master control node, determines a node whose operation speed meets a second operation condition as the standby control node, and determines nodes whose operation speed meets a third operation condition, up to a number threshold, as the slave control nodes, wherein the number threshold is determined according to the number of the slave control nodes.
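One plausible reading of claim 4's role election is: filter by the storage condition, rank the survivors by operation speed, and hand out roles from the top. The sketch below assumes that reading; the field names, the boolean storage check, and the ranking rule are all illustrative assumptions:

```python
def assign_roles(nodes, total_controls):
    """Among nodes meeting the storage condition, rank by operation speed:
    fastest -> master, second -> standby, next (total_controls - 2) -> slaves."""
    eligible = [n for n in nodes if n["storage_ok"]]
    ranked = sorted(eligible, key=lambda n: n["speed"], reverse=True)
    master, standby = ranked[0]["name"], ranked[1]["name"]
    slaves = [n["name"] for n in ranked[2:total_controls]]
    return master, standby, slaves

nodes = [
    {"name": "a", "storage_ok": True,  "speed": 90},
    {"name": "b", "storage_ok": True,  "speed": 70},
    {"name": "c", "storage_ok": False, "speed": 99},  # fails storage condition
    {"name": "d", "storage_ok": True,  "speed": 50},
]
roles = assign_roles(nodes, total_controls=3)
```

Note that the fastest node overall ("c") is skipped because the storage condition is a gate applied before the speed ranking.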
5. A method according to any one of claims 1 to 3, wherein the control nodes in the pool of stateful nodes comprise a master control node, a standby control node and a slave control node, the method further comprising:
the management component of the cluster presents a configuration interface, and the configuration interface is used for configuring the master control node, the standby control node and the slave control node;
the management component receives, based on the configuration interface, configuration operations for the master control node, the standby control node and the slave control node, respectively;
the management component determines one master control node, one standby control node and at least one slave control node based on the configuration operations.
6. A method according to any of claims 1 to 3, wherein each node in the stateless node pool comprises a management client and a proxy client, the method further comprising:
in case of failure of a control node in the stateful node pool, the management client obtains an address of a new control node from a management component of the cluster;
the management client sends the address of the new control node to the proxy client;
the proxy client modifies the address used for accessing the public data of the cluster to the address of the new control node.
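The failover address refresh in claim 6 amounts to one callback chain: management component → management client → proxy client. A minimal sketch, in which the class names, the callback, and the IP addresses are assumptions:

```python
class ProxyClient:
    """Holds the address a node uses to reach the cluster's public data."""
    def __init__(self, control_addr):
        self.control_addr = control_addr

class ManagementClient:
    """On control-node failure, pulls the new control-node address from the
    management component and pushes it into the proxy client."""
    def __init__(self, get_control_addr, proxy):
        self._get_control_addr = get_control_addr  # query to the management component
        self.proxy = proxy

    def on_control_node_failure(self):
        self.proxy.control_addr = self._get_control_addr()

proxy = ProxyClient("10.0.0.1")
client = ManagementClient(lambda: "10.0.0.2", proxy)
client.on_control_node_failure()  # proxy now targets the new control node
```

Because only the proxy client holds the address, business code on the node never needs to know a failover happened.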
7. The method of claim 6, wherein the control nodes in the stateful node pool include a master control node, a standby control node and a slave control node, and wherein the management client obtaining the address of the new control node from a management component of the cluster in the event of a failure of a control node in the stateful node pool comprises:
the management component of the cluster switches the standby control node to be a new master control node when the master control node is determined to have failed by voting among the nodes in the stateful node pool;
the management component determines one slave control node from a plurality of slave control nodes as a new standby control node;
the management component selects a new slave control node from nodes in the stateless node pool and adds the new slave control node to the stateful node pool;
the management component sends the address of the new master control node to the management client.
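The role rotation in claim 7 (standby promoted to master, a slave promoted to standby, a stateless node recruited as a replacement slave) can be sketched as a single function. The dictionary layout and node names are assumptions:

```python
def fail_over(roles, stateless_pool):
    """Rotate control roles after a master failure:
    standby -> new master, first slave -> new standby,
    one stateless node recruited to keep the slave count constant."""
    roles["master"] = roles["standby"]
    roles["standby"] = roles["slaves"].pop(0)
    roles["slaves"].append(stateless_pool.pop(0))
    return roles["master"]

roles = {"master": "c1", "standby": "c2", "slaves": ["c3", "c4"]}
stateless = ["n1", "n2"]
new_master = fail_over(roles, stateless)
```

The rotation keeps the stateful pool at full strength after every failure, at the cost of shrinking the stateless pool by one node.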
8. The method of claim 1, wherein the method further comprises:
the management component of the cluster presents a configuration interface for adding or deleting nodes in the stateless node pool;
the management component receives node addition or deletion operations based on the configuration interface;
the management component adds or removes nodes in the stateless node pool based on the node addition or deletion operations, so as to expand or reduce the management scale.
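Because only the stateless pool grows or shrinks, scaling in claim 8 reduces to list maintenance on that pool. A sketch, with argument names assumed:

```python
def scale_pool(stateless_pool, add=None, remove=None):
    """Apply node addition/deletion operations to the stateless pool
    without touching the stateful control pool."""
    pool = list(stateless_pool)  # leave the caller's list untouched
    for node in add or []:
        pool.append(node)
    for node in remove or []:
        pool.remove(node)
    return pool

grown = scale_pool(["n1", "n2"], add=["n3"])    # scale out
shrunk = scale_pool(grown, remove=["n1"])       # scale in
```

The stateful pool's size is fixed by configuration (claim 4), which is why elasticity here never requires re-electing control nodes.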
9. The method of claim 1, wherein the method further comprises:
the management component of the cluster configures a virtual internet protocol address at the balancing scheduler to provide access to the service processing request.
10. A cluster management apparatus comprising a balancing scheduler, target nodes in a stateless node pool of the cluster, and control nodes in a stateful node pool of the cluster, characterized in that:
the balancing scheduler is configured to distribute the acquired plurality of service processing requests, in a balanced manner, to target nodes in the stateless node pool of the cluster; wherein the nodes of the cluster are divided into the stateless node pool and the stateful node pool, and the number of nodes in the stateless node pool is greater than the number of designated nodes in the stateful node pool;
Each target node is used for processing the received service processing request;
and each target node is further configured to, when determining that the public data of the cluster needs to be accessed during processing the received service processing request, access a control node in a stateful node pool of the cluster to complete processing of the service processing request.
11. An electronic device comprising a memory and a processor, the memory storing a computer program executable on the processor, characterized in that the processor implements the steps of the method of any of claims 1 to 9 when the program is executed.
12. A computer storage medium storing executable instructions for causing a processor to perform the steps of the method of any one of claims 1 to 9.
CN202110722748.2A 2021-06-29 2021-06-29 Cluster management method, device, equipment and computer storage medium Active CN113326100B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110722748.2A CN113326100B (en) 2021-06-29 2021-06-29 Cluster management method, device, equipment and computer storage medium

Publications (2)

Publication Number Publication Date
CN113326100A CN113326100A (en) 2021-08-31
CN113326100B true CN113326100B (en) 2024-04-09

Family

ID=77425097

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110722748.2A Active CN113326100B (en) 2021-06-29 2021-06-29 Cluster management method, device, equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN113326100B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114124903A (en) * 2021-11-15 2022-03-01 新华三大数据技术有限公司 Virtual IP address management method and device
CN115904822A (en) * 2022-12-21 2023-04-04 长春吉大正元信息技术股份有限公司 Cluster repairing method and device

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011140951A1 * 2010-08-25 2011-11-17 Huawei Technologies Co., Ltd. Method, device and system for load balancing
CN103092697A * 2010-12-17 2013-05-08 Microsoft Corp. Multi-tenant, high-density container service for hosting stateful and stateless middleware components
CN103227754A * 2013-04-16 2013-07-31 Inspur (Beijing) Electronic Information Industry Co., Ltd. Dynamic load balancing method for high-availability cluster system, and node equipment
WO2014054075A1 * 2012-10-04 2014-04-10 Hitachi, Ltd. System management method, and computer system
CN106603592A * 2015-10-15 2017-04-26 China Telecom Corp., Ltd. Application cluster migration method and migration device based on service model
CN106790692A * 2017-02-20 2017-05-31 Zhengzhou Yunhai Information Technology Co., Ltd. Load balancing method and device for multiple clusters
CN107925876A * 2015-08-14 2018-04-17 Telefonaktiebolaget LM Ericsson (publ) Node and method for handling a mobility procedure for a wireless device
CN109343963A * 2018-10-30 2019-02-15 Hangzhou Dtdream Technology Co., Ltd. Application access method, apparatus and related device for a container cluster
US10216770B1 * 2014-10-31 2019-02-26 Amazon Technologies, Inc. Scaling stateful clusters while maintaining access
CN110727709A * 2019-10-10 2020-01-24 Beijing UXsino Software Co., Ltd. Cluster database system
CN110798517A * 2019-10-22 2020-02-14 Yamaha Motor (Xiamen) Information System Co., Ltd. Decentralized cluster load balancing method and system, mobile terminal and storage medium
KR102112047B1 * 2019-01-29 2020-05-18 RealTimeTech Co., Ltd. Method for adding a node in a hybrid P2P-type cluster system
EP3702918A1 * 2007-04-25 2020-09-02 Alibaba Group Holding Limited Method and apparatus for cluster data processing
CN112015544A * 2020-06-30 2020-12-01 Suzhou Inspur Intelligent Technology Co., Ltd. Load balancing method, device and equipment for a k8s cluster, and storage medium
CN112445623A * 2020-12-14 2021-03-05 China Merchants Financial Technology Co., Ltd. Multi-cluster management method and device, electronic equipment and storage medium
CN112492022A * 2020-11-25 2021-03-12 Shanghai Zhongtongji Network Technology Co., Ltd. Cluster, method, system and storage medium for improving database availability
CN112671928A * 2020-12-31 2021-04-16 Beijing Topsec Network Security Technology Co., Ltd. Centralized device management architecture, load balancing method, electronic device and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7155515B1 (en) * 2001-02-06 2006-12-26 Microsoft Corporation Distributed load balancing for single entry-point systems


Similar Documents

Publication Publication Date Title
US11765110B2 (en) Method and system for providing resiliency in interaction servicing across data centers
EP2347563B1 (en) Distributed master election
US9426116B1 (en) Multiple-master DNS system
US9262229B2 (en) System and method for supporting service level quorum in a data grid cluster
US8375001B2 (en) Master monitoring mechanism for a geographical distributed database
EP2616966B1 (en) System and method for connecting an application server with a clustered database
CN100549960C Method and system for fast application notification of changes in clustered computing systems
CN113326100B (en) Cluster management method, device, equipment and computer storage medium
CN110140119A (en) System and method for managing cache server cluster
CN110224871A High availability method and device for a Redis cluster
CN103888277B Gateway disaster recovery backup method, device and system
US20190075084A1 (en) Distributed Lock Management Method, Apparatus, and System
CN102868754A (en) High-availability method, node device and system for achieving cluster storage
CN109802986B (en) Equipment management method, system, device and server
CN107666493B (en) Database configuration method and equipment thereof
JP6615761B2 (en) System and method for supporting asynchronous calls in a distributed data grid
CN107682411A Large-scale SDN controller cluster and network system
EP3648405B1 (en) System and method to create a highly available quorum for clustered solutions
CN114900526B (en) Load balancing method and system, computer storage medium and electronic equipment
KR101883671B1 (en) Method and management server for dtitributing node
CN113055461B (en) ZooKeeper-based unmanned cluster distributed cooperative command control method
EP3435615B1 (en) Network service implementation method, service controller, and communication system
US20240028611A1 (en) Granular Replica Healing for Distributed Databases
CN113301086A (en) DNS data management system and management method
WO2023273483A1 (en) Data processing system and method, and switch

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant