CN112433842B - Method and equipment for distributing master node and slave node in service cluster - Google Patents
Publication number: CN112433842B · Application number: CN202010296855.9A
Authority: CN (China)
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F9/5083 — Techniques for rebalancing the load in a distributed system
- G06F9/5027 — Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
- G06F9/5072 — Grid computing
Abstract
The application provides a method and a device for allocating master and slave nodes in a service cluster. A master-slave node pair, formed from a master node and a slave node among the nodes to be allocated, is assigned to a first available server; the master-slave node pair corresponding to it is then assigned to the second available server that has the fewest node-pair connections to the first available server; and this allocation process is repeated until all nodes to be allocated have been placed. This avoids the unbalanced server load caused by random deployment of the service cluster: the master and slave nodes of the cluster are distributed evenly across multiple servers, so multiple master nodes cannot pile up on a small number of servers (the single-point concentration problem). In addition, when a master node crashes and its slave node takes over the load, the even distribution of the slave nodes likewise prevents single-point concentration on any server, thereby avoiding the pressure avalanche in which servers are crushed by load one after another.
Description
Technical Field
The present application relates to the field of cluster computing, and in particular, to a method and an apparatus for allocating master and slave nodes in a service cluster.
Background
In existing service-cluster deployments, master-slave mutual backup between nodes is a very common technique, widely applied in service clusters such as cache services, database services, and application services. When the nodes of a service cluster are deployed onto physical servers, a completely random scheme is usually adopted: each master node or slave node is independently and randomly assigned to an available physical server. This makes full use of server resources, requires only that no master node share a physical machine with its own slave node, and is simple to implement. However, the inventor has found that this random distribution easily causes the problems of single-point concentration and pressure avalanche.
Single-point concentration means that the master nodes of a service cluster are all placed on a few physical servers; since the master nodes must serve external requests, those physical servers become overloaded. Pressure avalanche means that the physical server hosting a master node collapses under excessive load, the corresponding slave node takes over serving external requests, and, because the slave nodes suffer the same single-point concentration, the physical server hosting them also collapses under the load; the servers are crushed one after another until every physical server has collapsed.
Disclosure of Invention
An object of the present application is to provide a method and an apparatus for allocating master and slave nodes in a service cluster, so as to solve the problem in the prior art that single-point concentration and pressure avalanche are easily caused by service cluster deployment.
In order to achieve the above object, the present application provides a method for allocating master and slave nodes in a service cluster, where the method includes:
forming a first master-slave node pair by a first master node and a second slave node in nodes to be distributed of a service cluster, and distributing the first master-slave node pair to a first available server capable of accommodating the most master-slave node pairs at present;
forming a second master-slave node pair by a second master node corresponding to the second slave node and a first slave node corresponding to the first master node in the nodes to be allocated, and allocating the second master-slave node pair to a second available server with the minimum node pair connection number between the second master-slave node pair and the first available server;
and continuing to form other master-slave node pairs by other master nodes and slave nodes in the nodes to be distributed and distributing the master nodes and the slave nodes to the available servers until the master nodes and the slave nodes in the nodes to be distributed are completely distributed.
Further, before a first master node and a second slave node among the nodes to be allocated of the service cluster form a first master-slave node pair and the first master-slave node pair is allocated to the first available server that can currently accommodate the most master-slave node pairs, the method further includes:
determining computing resources required by a master node and slave nodes in the nodes to be distributed;
acquiring currently available computing resources of the candidate server;
and if the currently available computing resources of the candidate server can meet the computing resources required by the master node and the slave nodes in the nodes to be distributed, determining the candidate server as an available server.
Further, determining the computing resources required by the master node and the slave nodes in the nodes to be allocated comprises:
and determining the computing resources required by the master node and the slave nodes in the nodes to be distributed according to the type of the service cluster.
Further, the computing resources include a combination of one or more of: CPU, memory size, disk space, or port number.
Further, assigning the first master-slave node pair to a first available server that currently can accommodate the most master-slave node pairs, comprising:
determining the number of the available servers capable of accommodating the master-slave node pairs according to the current available computing resources of the available servers and the computing resources required by the master-slave node pairs;
determining the available server that can accommodate the largest number of master-slave node pairs as a first available server;
assigning the first master-slave pair of nodes to the first available server.
Further, assigning the second master-slave node pair to a second available server having a minimum number of node-pair connections with the first available server, comprising:
acquiring the number of node pair connections between the first available server and the alternative available server;
determining the candidate available server corresponding to the minimum value of the node pair connection number as a second available server;
assigning the second master-slave pair of nodes to the second available server.
Further, obtaining the node-to-node connection number between the first available server and the alternative available server includes:
and determining the number of node pair connections between the first available server and the alternative available server according to the corresponding number of master-slave node pairs in the first available server and master-slave node pairs in the alternative available server.
Further, after obtaining the number of node-to-node connections between the first available server and the alternative available server, the method further includes:
and sorting the alternative available servers according to the number of the node-to-node connections between the first available server and the alternative available servers.
Based on another aspect of the present application, there is also provided an allocation apparatus for master and slave nodes in a service cluster, where the apparatus includes:
the master-slave node pair forming device is used for forming a first master-slave node pair by a first master node and a second slave node in nodes to be distributed of the service cluster, forming a second master-slave node pair by a second master node corresponding to the second slave node in the nodes to be distributed and a first slave node corresponding to the first master node, and continuously forming other master-slave node pairs by other master nodes and slave nodes in the nodes to be distributed;
and the master-slave node pair distribution device is used for distributing the first master-slave node pair to a first available server which can currently hold the most master-slave node pairs, distributing the second master-slave node pair to a second available server with the minimum node pair connection number between the second available server and the first available server, and distributing other master-slave node pairs to the available servers until the master node and the slave node in the nodes to be distributed are completely distributed.
The present application also provides a computer readable medium, on which computer readable instructions are stored, the computer readable instructions being executable by a processor to implement the foregoing allocation method of master and slave nodes in a service cluster.
Compared with the prior art, the scheme provided by the application forms a first master-slave node pair from a first master node and a second slave node among the nodes to be allocated of a service cluster and assigns that pair to the first available server that can currently accommodate the most master-slave node pairs; it then forms a second master-slave node pair from the second master node corresponding to the second slave node and the first slave node corresponding to the first master node, and assigns that pair to the second available server with the fewest node-pair connections to the first available server; it continues forming and assigning pairs from the remaining master and slave nodes until all nodes to be allocated have been placed. This avoids the unbalanced server load caused by random deployment of the service cluster. In addition, when a master node crashes and its slave node bears the load, the even distribution of the slave nodes prevents single-point concentration on any server, thereby avoiding the pressure avalanche caused by servers being crushed by load one after another.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
fig. 1 is a flowchart of a method for allocating master and slave nodes in a service cluster according to some embodiments of the present application;
fig. 2 is a schematic structural diagram of a distribution device of a master node and a slave node in a service cluster according to some embodiments of the present application;
fig. 3 is a schematic node distribution diagram after nodes to be allocated are pre-allocated to available servers according to some preferred embodiments of the present application;
FIG. 4 is a schematic diagram of a complete Bundle master-slave relationship and available server distribution provided by some preferred embodiments of the present application;
fig. 5 is a schematic distribution diagram of nodes to be allocated on available servers after the nodes are allocated according to some preferred embodiments of the present application;
FIG. 6 is a schematic diagram illustrating a distribution of nodes when an available server is down according to some preferred embodiments of the present application;
fig. 7 is a schematic distribution diagram of nodes after an available server is down and a master node is automatically selected according to some preferred embodiments of the present application;
FIG. 8 is a flow diagram of a method for assigning a first master-slave node pair to a first available server that currently can accommodate the most master-slave node pairs provided by some embodiments of the present application;
fig. 9 is a flowchart of a method for assigning a second master-slave pair to a second available server with a minimum number of node-pair connections to the first available server according to some embodiments of the present application.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal and the network device each include one or more processors (CPUs), input/output interfaces, network interfaces, and memories.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer readable media does not include transitory media, such as modulated data signals and carrier waves.
Fig. 1 illustrates a method for allocating master and slave nodes in a service cluster according to some embodiments of the present application, where the method may specifically include the following steps:
step S101, a first master node and a second slave node in nodes to be distributed of a service cluster form a first master-slave node pair, and the first master-slave node pair is distributed to a first available server capable of accommodating the most master-slave node pairs at present;
step S102, a second master-slave node pair is formed by a second master node corresponding to the second slave node and a first slave node corresponding to the first master node in the nodes to be distributed, and the second master-slave node pair is distributed to a second available server with the minimum node pair connection number between the second master-slave node pair and the first available server;
and step S103, continuing to form other master-slave node pairs by other master nodes and slave nodes in the nodes to be distributed and distributing the master-slave node pairs to available servers until the master nodes and the slave nodes in the nodes to be distributed are completely distributed.
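The three steps above can be sketched in code; the following is a hedged illustration rather than the patented implementation — the function name `pair_nodes` and the list-based representation are assumptions. It shows only the cross-pairing invariant of steps S101 and S102: a master node is never paired (and hence never co-located) with its own slave node.

```python
# Illustrative sketch of steps S101-S103 (assumed names, not the patent's
# code): masters and slaves are cross-paired two pairs at a time, so a
# master is never placed together with its own slave.
def pair_nodes(masters, slaves):
    """Return cross-paired (master, slave) tuples; assumes an even count."""
    assert len(masters) == len(slaves) and len(masters) % 2 == 0
    pairs = []
    for i in range(0, len(masters), 2):
        # first master-slave node pair: master i with slave i+1
        pairs.append((masters[i], slaves[i + 1]))
        # corresponding second pair: master i+1 with slave i
        pairs.append((masters[i + 1], slaves[i]))
    return pairs
```

Each two consecutive entries form one complete Bundle in the patent's terms; the pair of servers receiving them is chosen by the capacity and connection-count rules described below.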
The scheme is particularly suitable for distributing the master and slave nodes of a service cluster across multiple physical servers. A master node and a slave node among the nodes to be allocated form a first master-slave node pair, which is assigned to the first available server that can currently accommodate the most master-slave node pairs; the second master-slave node pair corresponding to the first is then assigned to the second available server with the fewest node-pair connections to the first available server. In this way the master and slave nodes of the service cluster are distributed evenly across the servers, avoiding the single-point concentration and pressure avalanche problems.
Here, a service cluster is a virtual server cluster that provides a service externally, generally a complete set of service nodes formed by service programs. A service cluster comprises several master nodes and their corresponding slave nodes; one master node may correspond to one slave node or to several. The master nodes provide services such as caching, database access, and application services to devices outside the cluster, while the slave nodes act as backups of the master nodes: once a master node can no longer provide its service, its slave node takes over.
In step S101, a first master node and a second slave node among the nodes to be allocated of the service cluster form a first master-slave node pair, and the first master-slave node pair is allocated to the first available server that can currently accommodate the most master-slave node pairs. When a service cluster is first constructed, all of its nodes are in the to-be-allocated state; the nodes to be allocated comprise all master and slave nodes awaiting assignment, and only after every one of them has been allocated to an actual physical server can the master and slave nodes begin to provide services.
Here, the first master node is one of the master nodes to be allocated, the second slave node is one of the slave nodes to be allocated, and the second slave node and the first master node are not corresponding master-slave nodes. In some embodiments of the present application, a first master-slave node pair composed of a first master node and a second slave node is allocated as a unified unit, so that the first master node and the second slave node can be guaranteed to be allocated to a server at the same time.
In some embodiments of the present application, before a first master-slave node pair is formed by a first master node and a second slave node in nodes to be allocated of a service cluster and the first master-slave node pair is allocated to a first available server that can currently accommodate the largest number of master-slave node pairs, a process of determining a server of a data center as an available server may further be included, which specifically includes the following steps:
1) determining computing resources required by a master node and slave nodes in the nodes to be distributed;
2) acquiring currently available computing resources of the candidate server;
3) and if the currently available computing resources of the candidate server can meet the computing resources required by the master node and the slave nodes in the nodes to be distributed, determining the candidate server as an available server.
Specifically, the computing resources required by the master and slave nodes among the nodes to be allocated may be determined according to the type of the service cluster. Within one service cluster, the master and slave nodes typically require the same computing resources. The types of service cluster may include, but are not limited to, cache services, database services, proxy services, and application services, and the computing resources required by a node vary with the type. For example, if the service cluster is a cache, a master or slave node may require 2 CPUs and 4 GB of memory; if it is a database, 4 CPUs, 2 GB of memory, and 200 GB of disk space; if it is a proxy service, 20 CPUs, 1 GB of memory, 2 Gbps of network bandwidth, and so on.
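The type-to-resources mapping can be illustrated as follows; the figures are the examples given in the text, while the dictionary layout and names are assumptions made for illustration.

```python
# Illustrative per-node resource requirements by service-cluster type.
# The numbers come from the examples in the text; the structure is assumed.
RESOURCES_BY_TYPE = {
    "cache":    {"cpu": 2,  "memory_gb": 4},
    "database": {"cpu": 4,  "memory_gb": 2, "disk_gb": 200},
    "proxy":    {"cpu": 20, "memory_gb": 1, "bandwidth_gbps": 2},
}

def required_resources(cluster_type):
    # Master and slave nodes of the same cluster need the same resources.
    return RESOURCES_BY_TYPE[cluster_type]
```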
In some embodiments of the present application, the computing resources may include one or more of any combination of the following: CPU, memory size, disk space or port number, etc.
Preferably, when obtaining the currently available computing resources of the candidate servers, all servers of the data center may be taken as candidate servers; for each candidate, the initial computing resources and the already-allocated computing resources are obtained, and the remainder of subtracting the allocated resources from the initial resources is taken as the currently available computing resources.
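The availability check described above can be sketched as follows; the dictionary layout and field names are assumptions, not part of the patent. A candidate server counts as available when its remaining resources (initial minus allocated) cover the resources one node of the cluster needs.

```python
# Sketch of the candidate-server availability check (assumed field names):
# remaining = initial - allocated; a server qualifies if the remainder
# covers the per-node requirement in every resource dimension.
def available_servers(candidates, need):
    usable = []
    for srv in candidates:
        remaining = {k: srv["initial"][k] - srv["allocated"].get(k, 0)
                     for k in srv["initial"]}
        if all(remaining.get(k, 0) >= v for k, v in need.items()):
            usable.append(srv["name"])
    return usable
```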
As shown in fig. 8, in some embodiments of the present application, assigning a first master-slave node pair to a first available server that can currently accommodate the most master-slave node pairs may specifically include the following steps:
step S201, determining the number of the available servers capable of accommodating the master-slave node pairs according to the current available computing resources of the available servers and the computing resources required by the master-slave node pairs; here, the current available computing resources may be divided by the computing resources required by the master-slave node pairs to obtain the maximum number of the available servers that accommodate the master-slave node pairs;
step S202, determining the available server capable of accommodating the master-slave node pair with the largest number as a first available server;
step S203, assign the first master-slave node pair to the first available server.
In some preferred embodiments of the present application, the nodes to be allocated of the service cluster may first be pre-allocated to the available servers as follows. Compute the maximum number of nodes each available server can hold and sort the available servers by that number in descending order, obtaining a list Counts. Then mark the available servers in Counts using a breadth-first sweep: each mark reserves 2 nodes on a server (one master node plus one slave node, called a half bundle), and after every mark the half-bundle counts on all eligible available servers are re-ranked in descending order, until all nodes to be allocated have been pre-allocated. Preferably, the breadth-first sweep proceeds as follows: set up an array HalfBundles[Host1..Hostn] with every element initialized to 0; traverse Host1 to Hostn, and for each Hostn check whether 2*HalfBundles[Hostn] + 2 exceeds Counts[Hostn] (i.e. whether another half bundle still fits within that server's maximum node capacity); if it fits, increment HalfBundles[Hostn] by 1 (i.e. one more master-slave node pair on that server); then compute the total number of pre-allocated nodes, and if the total is not less than the number of nodes to be allocated, pre-allocation is complete. In a preferred embodiment of the present application, 12 nodes to be allocated are pre-allocated to 4 available servers, with the resulting distribution shown in fig. 3. The nodes to be allocated comprise 6 master nodes, master node A through master node F, and 6 slave nodes, slave node A through slave node F; the 4 available servers are physical machine 1, physical machine 2, physical machine 3 and physical machine 4.
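The pre-allocation sweep can be sketched as follows; this is an illustration under the assumption that `counts` maps each server to its maximum node capacity, with the breadth-first ordering reduced to a round-robin sweep for simplicity.

```python
# Hedged sketch of the half-bundle pre-allocation pass described above.
# counts: server -> maximum number of nodes it can hold.
# Returns a HalfBundles map: server -> number of half bundles (2 nodes each).
def preallocate(counts, num_nodes):
    half_bundles = {host: 0 for host in counts}
    while 2 * sum(half_bundles.values()) < num_nodes:
        progressed = False
        for host in counts:                       # breadth-first sweep
            if 2 * half_bundles[host] + 2 <= counts[host]:
                half_bundles[host] += 1           # mark one half bundle
                progressed = True
                if 2 * sum(half_bundles.values()) >= num_nodes:
                    break
        if not progressed:                        # capacity exhausted
            raise ValueError("insufficient server capacity")
    return half_bundles
```

For the embodiment's figures (12 nodes onto 4 servers), a uniform capacity of 4 nodes per server yields half-bundle counts that sum to 6, i.e. all 12 nodes reserved.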
In step S102, a second master-slave node pair is formed by a second master node corresponding to the second slave node and a first slave node corresponding to the first master node in the nodes to be allocated, and the second master-slave node pair is allocated to a second available server with the smallest number of node pair connections with the first available server. Here, the second master node in the second master-slave node pair is the master node to which the second slave node corresponds, and the first slave node is the slave node to which the first master node corresponds, so that the second master-slave node pair corresponds to the first master-slave node pair.
As shown in fig. 9, in some embodiments of the present application, the allocating the second master-slave node pair to the second available server with the smallest number of node pair connections to the first available server may specifically include the following steps:
step S301, acquiring the node pair connection number between the first available server and the alternative available server;
step S302, determining the alternative available server corresponding to the minimum value of the node pair connection number as a second available server;
step S303, assign the second master-slave node pair to the second available server. The above steps provide a preferred method of assigning a second master-slave pair of nodes to a second available server.
Preferably, when obtaining the node-pair connection number between the first available server and an alternative available server, the number may be determined from the count of corresponding master-slave node pairs held by the first available server and by that alternative available server.
Here, the node pair connection number between the available servers refers to the number of corresponding master-slave node pairs, for example, if one master-slave node pair 1 is deployed on the available server a, and a master-slave node pair 2 corresponding to the master-slave node pair 1 is deployed on the available server B, the node pair connection number between the available server a and the available server B is 1.
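The definition above can be expressed as a small helper; `links` is an assumed list of server pairs, one entry per pair of corresponding master-slave node pairs split across two servers.

```python
# Node-pair connection number between servers a and b, as defined above:
# the count of corresponding master-slave node pairs split across them.
def connection_count(links, a, b):
    return sum(1 for pair in links if set(pair) == {a, b})
```

With one split pair recorded as ("A", "B"), the connection number between server A and server B is 1, matching the example in the text.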
In some embodiments of the present application, after the node-pair connection numbers between the first available server and the alternative available servers are obtained, the alternative available servers may additionally be sorted by that number, in either ascending or descending order. Sorting the alternative available servers makes it easy to quickly locate the one with the minimum node-pair connection number.
In some preferred embodiments of the present application, the second master-slave node pair is allocated to the second available server with the fewest node-pair connections to the first available server as follows. First, find the available server corresponding to the current maximum value in the HalfBundles array, denoted i. In row i of the connection matrix S, find the available server other than i with the smallest value, denoted j, where S[i][j] = sum(Link{Hosts[i], Hosts[j]}), with 0 <= i < len(HalfBundles) and 0 <= j < len(HalfBundles), len(HalfBundles) being the length of the HalfBundles array; each element of S represents the number of connections between Hosts[i] and Hosts[j]. Then add 1 to both S[i][j] and S[j][i], append a record Link{Hosts[i], Hosts[j]} to the connection table Links, and subtract 1 from both HalfBundles[i] and HalfBundles[j]; Hosts[i] and Hosts[j] thus form a complete Bundle (comprising two half bundles, i.e. two corresponding master-slave node pairs — two master nodes and two slave nodes — placed master-and-slave crosswise on the two available servers). Repeat these steps until sum(HalfBundles) is 0, i.e. all master-slave node pairs have been allocated to the available servers.
Here, the complete Bundles can be read off the Links table; one Bundle contains 4 nodes and corresponds to one record Link in the Links table. For example, Link{Hosts[i], Hosts[j]} can be converted into Bundle{Hosts[i]: [master, slave], Hosts[j]: [master, slave]}, and all converted Bundles form a Bundle list. A complete Bundle is shown in fig. 4, which depicts the master-slave relationship and the distribution over the available servers; traversing the Bundle list completes the assignment of the master-slave node pairs. Bundle{Hosts[i]: [master, slave], Hosts[j]: [master, slave]} is represented concretely in FIG. 4 as: master node A and slave node B are assigned to physical machine 1, while slave node A (corresponding to master node A) and master node B (corresponding to slave node B) are assigned to physical machine 2.
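The matrix-driven pairing loop described above can be sketched as follows; variable names follow the text (HalfBundles, S, Links), while the concrete data layout is an assumption. The sketch assumes the half-bundle counts admit a complete pairing (an even total spread over at least two servers).

```python
# Sketch of the connection-matrix pairing loop: repeatedly link the server
# with the most unpaired half bundles to the eligible partner it has the
# fewest existing connections with, until every half bundle is paired.
def build_links(half_bundles):
    hosts = list(half_bundles)
    hb = dict(half_bundles)
    n = len(hosts)
    S = [[0] * n for _ in range(n)]       # connection matrix
    links = []                            # connection table Links
    while sum(hb.values()) > 0:
        # server with the most unpaired half bundles
        i = max(range(n), key=lambda k: hb[hosts[k]])
        # partner with spare half bundles and fewest connections to i
        candidates = [j for j in range(n) if j != i and hb[hosts[j]] > 0]
        j = min(candidates, key=lambda k: S[i][k])
        S[i][j] += 1
        S[j][i] += 1
        links.append((hosts[i], hosts[j]))    # one complete Bundle
        hb[hosts[i]] -= 1
        hb[hosts[j]] -= 1
    return links
```

Each returned link is one complete Bundle: two corresponding master-slave node pairs placed crosswise on the two servers.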
In step S103, the remaining master nodes and slave nodes among the nodes to be allocated continue to form master-slave node pairs, which are allocated to the available servers until all master nodes and slave nodes in the nodes to be allocated have been allocated. The preceding two steps are repeated, allocating two master-slave node pairs (i.e., two master nodes and two slave nodes) at a time, until all master nodes and slave nodes in the nodes to be allocated are allocated. In some preferred embodiments of the present application, the final node distribution after all master nodes and slave nodes in the nodes to be allocated in the service cluster have been allocated is shown in fig. 5, where the physical machine 1, the physical machine 2, the physical machine 3 and the physical machine 4 are the 4 available servers: two master nodes, master node A and master node D, are allocated on the physical machine 1; two slave nodes, slave node B and slave node C, are allocated on the physical machine 2; two master nodes, master node C and master node E, are allocated on the physical machine 3; and slave node D, slave node F, master node F and slave node E are allocated on the physical machine 4.
By the scheme in this embodiment, the pressure of any service cluster can be fully dispersed across all physical servers. All master nodes are evenly distributed over the currently available servers, which solves the problem of single-point concentration at the initial allocation of the service cluster. Meanwhile, all slave nodes are also evenly distributed over the currently available servers, which prevents the master nodes from gradually concentrating on a few physical servers as physical servers fail, are retired, upgraded or restarted over long-term operation, thereby avoiding a pressure avalanche. Finally, the numbers of master nodes and slave nodes on each available server are approximately the same, which avoids the waste of server resources that would result from placing the master nodes on half of the available servers and the slave nodes on the other half.
Fig. 6 is a schematic diagram illustrating the distribution of nodes when an available server is down according to some preferred embodiments of the present application. Due to an unexpected situation, the physical machine 1 in the diagram cannot continue to provide services, so the slave nodes corresponding to the master nodes running on the physical machine 1, master node A and master node D, are automatically promoted to provide services as master nodes; in effect, master node A and master node D are migrated from the failed physical machine 1 to other available servers that can still provide services normally. Fig. 7 shows the distribution of nodes after the physical machine 1 goes down and the automatic master election completes. As can be seen from the figure, slave node A on the physical machine 2 is automatically promoted to master node A, and slave node D on the physical machine 3 is automatically promoted to master node D; the two new master nodes continue to provide the original services, and the pressure of the physical machine 1 is evenly distributed over the physical machine 2 and the physical machine 3, avoiding single-point concentration and pressure avalanche.
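The failover behaviour described above can be sketched as follows. The placement map and the (role, cluster) tuples are assumptions for illustration, not the application's actual data structures:

```python
def promote_on_failure(placement, failed_host):
    """placement maps host -> list of (role, cluster) tuples.
    Returns a new placement with the failed host removed and every
    slave whose master ran on the failed host promoted to master."""
    failed_masters = {c for role, c in placement.get(failed_host, [])
                      if role == "master"}
    new_placement = {}
    for host, nodes in placement.items():
        if host == failed_host:
            continue  # the failed server no longer hosts any nodes
        new_placement[host] = [
            ("master", c) if role == "slave" and c in failed_masters
            else (role, c)
            for role, c in nodes
        ]
    return new_placement
```

Applied to the fig. 6 scenario (masters A and D on physical machine 1), the slaves of A and D on the surviving machines take over as masters, exactly as fig. 7 depicts.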
Based on the same inventive concept, some embodiments of the present application further provide an apparatus for allocating master and slave nodes in a service cluster. Since the method corresponding to the apparatus is the method in the foregoing embodiments and the principle by which it solves the problem is similar, the implementation of the apparatus may refer to the implementation of the corresponding method, and repeated details are not described again.
Fig. 2 shows an apparatus for allocating master and slave nodes in a service cluster, where the apparatus 1 includes a master-slave node pair composition device 11 and a master-slave node pair allocation device 12. Specifically, the master-slave node pair composition device 11 is configured to form a first master-slave node pair from a first master node and a second slave node in the nodes to be allocated of the service cluster, form a second master-slave node pair from a second master node corresponding to the second slave node and a first slave node corresponding to the first master node in the nodes to be allocated, and continue to form other master-slave node pairs from the other master nodes and slave nodes in the nodes to be allocated; the master-slave node pair allocation device 12 is configured to allocate the first master-slave node pair to a first available server that can currently accommodate the most master-slave node pairs, allocate the second master-slave node pair to a second available server with the smallest number of node pair connections to the first available server, and allocate the other master-slave node pairs to the available servers until the master nodes and slave nodes in the nodes to be allocated are completely allocated.
Further, the apparatus 1 further includes an available server determining device (not shown) configured to determine the computing resources required by the master node and the slave node in the nodes to be allocated, obtain the currently available computing resources of the candidate server, and determine the candidate server as an available server if the currently available computing resources of the candidate server can meet the computing resources required by the master node and the slave node in the nodes to be allocated.
Further, the available server determining device is configured to determine, according to the type of the service cluster, the computing resources required by the master node and the slave node in the nodes to be allocated.
Further, the computing resources include a combination of one or more of: CPU, memory size, disk space, or port number.
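A minimal sketch of the available-server check performed by the available server determining device, under the assumption that both the candidate's free resources and the nodes' requirements are expressed as simple name-to-amount maps (the resource names are illustrative):

```python
def is_available(candidate_free, required):
    """candidate_free and required map a resource name (e.g. 'cpu',
    'memory', 'disk', 'ports') to an amount; the candidate server is
    determined as available only if every required resource is met."""
    return all(candidate_free.get(name, 0) >= amount
               for name, amount in required.items())
```

A candidate offering 4 CPUs and 16 GB of memory satisfies a requirement of 2 CPUs and 8 GB, while a candidate missing any one resource is rejected.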
Further, the master-slave node pair allocation device 12 is configured to determine, according to the currently available computing resources of each available server and the computing resources required by a master-slave node pair, the number of master-slave node pairs that each available server can accommodate, determine the available server that can accommodate the most master-slave node pairs as the first available server, and allocate the first master-slave node pair to the first available server.
Further, the master-slave node pair allocation device 12 is configured to obtain the number of node pair connections between the first available server and the other available servers, determine the available server corresponding to the minimum number of node pair connections as the second available server, and allocate the second master-slave node pair to the second available server.
Further, the master-slave node pair allocation device 12 is configured to determine the number of node pair connections between the first available server and another available server according to the number of master-slave node pairs in the first available server that correspond to master-slave node pairs in that other available server.
Some embodiments of the present application also provide a computer readable medium, on which computer readable instructions are stored, the computer readable instructions being executable by a processor to implement the foregoing allocation method of master and slave nodes in a service cluster.
In summary, according to the solution provided by the present application, a first master node and a second slave node in the nodes to be allocated of a service cluster form a first master-slave node pair, which is allocated to a first available server that can currently accommodate the most master-slave node pairs; a second master node corresponding to the second slave node and a first slave node corresponding to the first master node form a second master-slave node pair, which is allocated to a second available server with the smallest number of node pair connections to the first available server; the remaining master nodes and slave nodes in the nodes to be allocated then continue to form master-slave node pairs, which are allocated to the available servers until all master nodes and slave nodes in the nodes to be allocated are allocated. This avoids the unbalanced server pressure caused by random deployment of the service cluster: the master nodes and slave nodes of the service cluster are evenly distributed over a plurality of servers, avoiding the single-point concentration of many master nodes on a small number of servers. In addition, when a master node crashes and its slave node takes over the pressure, the even distribution of the slave nodes likewise avoids single-point concentration on any server, thereby avoiding the pressure avalanche caused by servers crashing one after another under cascading pressure.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, some of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application through the operation of the computer. Program instructions which invoke the methods of the present application may be stored on a fixed or removable recording medium and/or transmitted via a data stream on a broadcast or other signal-bearing medium and/or stored within a working memory of a computer device operating in accordance with the program instructions. An embodiment according to the present application comprises a device comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the device to perform a method and/or a solution according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Claims (10)
1. A method for distributing master and slave nodes in a service cluster, wherein the method comprises the following steps:
forming a first master-slave node pair by a first master node and a second slave node in nodes to be distributed of a service cluster, and distributing the first master-slave node pair to a first available server capable of accommodating the most master-slave node pairs at present;
forming a second master-slave node pair by a second master node corresponding to the second slave node and a first slave node corresponding to the first master node in the nodes to be allocated, and allocating the second master-slave node pair to a second available server with the minimum node pair connection number between the second master-slave node pair and the first available server;
and continuing to form other master-slave node pairs by other master nodes and slave nodes in the nodes to be distributed and distributing the master nodes and the slave nodes to the available servers until the master nodes and the slave nodes in the nodes to be distributed are completely distributed.
2. The method of claim 1, wherein before forming a first master-slave node pair from a first master node and a second slave node in the nodes to be allocated of the service cluster and allocating the first master-slave node pair to a first available server that currently can accommodate the most master-slave node pairs, further comprising:
determining computing resources required by a master node and slave nodes in the nodes to be distributed;
acquiring currently available computing resources of the candidate server;
and if the currently available computing resources of the candidate server can meet the computing resources required by the master node and the slave nodes in the nodes to be distributed, determining the candidate server as an available server.
3. The method of claim 2, wherein determining the computing resources required by the master node and the slave nodes in the nodes to be allocated comprises:
and determining the computing resources required by the master node and the slave nodes in the nodes to be distributed according to the type of the service cluster.
4. The method of claim 2 or 3, wherein the computing resources comprise a combination of one or more of: CPU, memory size, disk space, or port number.
5. The method of claim 2, wherein assigning the first master-slave node pair to a first available server that currently can accommodate the most master-slave node pairs comprises:
determining the number of the available servers capable of accommodating the master-slave node pairs according to the current available computing resources of the available servers and the computing resources required by the master-slave node pairs;
determining the available server that can accommodate the largest number of master-slave node pairs as a first available server;
assigning the first master-slave pair of nodes to the first available server.
6. The method of claim 1, wherein assigning the second master-slave pair to a second available server with a minimum number of node-pair connections to the first available server comprises:
acquiring the number of node pair connections between the first available server and the alternative available server;
determining the candidate available server corresponding to the minimum value of the node pair connection number as a second available server;
assigning the second master-slave pair of nodes to the second available server.
7. The method of claim 6, wherein obtaining a node-to-node connection number between the first available server and an alternative available server comprises:
and determining the number of node pair connections between the first available server and the alternative available server according to the corresponding number of master-slave node pairs in the first available server and master-slave node pairs in the alternative available server.
8. The method of claim 6, wherein after obtaining the number of node-to-node connections between the first available server and the alternative available server, further comprising:
and sorting the alternative available servers according to the number of the node-to-node connections between the first available server and the alternative available servers.
9. An apparatus for distributing master and slave nodes in a service cluster, wherein the apparatus comprises:
the master-slave node pair forming device is used for forming a first master-slave node pair by a first master node and a second slave node in nodes to be distributed of the service cluster, forming a second master-slave node pair by a second master node corresponding to the second slave node in the nodes to be distributed and a first slave node corresponding to the first master node, and continuously forming other master-slave node pairs by other master nodes and slave nodes in the nodes to be distributed;
and the master-slave node pair distribution device is used for distributing the first master-slave node pair to a first available server which can currently hold the most master-slave node pairs, distributing the second master-slave node pair to a second available server with the minimum node pair connection number between the second available server and the first available server, and distributing other master-slave node pairs to the available servers until the master node and the slave node in the nodes to be distributed are completely distributed.
10. A computer readable medium having computer readable instructions stored thereon which are executable by a processor to implement the method of any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010296855.9A CN112433842B (en) | 2020-04-15 | 2020-04-15 | Method and equipment for distributing master node and slave node in service cluster |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010296855.9A CN112433842B (en) | 2020-04-15 | 2020-04-15 | Method and equipment for distributing master node and slave node in service cluster |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112433842A CN112433842A (en) | 2021-03-02 |
CN112433842B true CN112433842B (en) | 2022-04-19 |
Family
ID=74690263
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010296855.9A Active CN112433842B (en) | 2020-04-15 | 2020-04-15 | Method and equipment for distributing master node and slave node in service cluster |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112433842B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104615500A (en) * | 2015-02-25 | 2015-05-13 | 浪潮电子信息产业股份有限公司 | Method for dynamically distributing computing resources of server |
CN109040184A (en) * | 2018-06-28 | 2018-12-18 | 武汉船舶通信研究所(中国船舶重工集团公司第七二二研究所) | A kind of electoral machinery and server of host node |
CN110471947A (en) * | 2019-07-09 | 2019-11-19 | 广州视源电子科技股份有限公司 | Querying method, server and storage medium based on distributed search engine |
CN110519348A (en) * | 2019-08-15 | 2019-11-29 | 苏州浪潮智能科技有限公司 | A kind of mostly service distributed type assemblies deployment system and method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10223147B2 (en) * | 2016-08-19 | 2019-03-05 | International Business Machines Corporation | Resource allocation in high availability (HA) systems |
Non-Patent Citations (1)
Title |
---|
An improved master-slave node election algorithm for cluster load balancing; Ren Lele et al.; Journal of China Jiliang University; 20150915 (No. 03); full text * |
Also Published As
Publication number | Publication date |
---|---|
CN112433842A (en) | 2021-03-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5932043B2 (en) | Volatile memory representation of non-volatile storage set | |
WO2019119311A1 (en) | Data storage method, device, and system | |
US8874811B2 (en) | System and method for providing a flexible buffer management interface in a distributed data grid | |
US6928459B1 (en) | Plurality of file systems using weighted allocation to allocate space on one or more storage devices | |
KR102589155B1 (en) | Method and apparatus for memory management | |
US20180167461A1 (en) | Method and apparatus for load balancing | |
JP6388339B2 (en) | Distributed caching and cache analysis | |
US9104501B2 (en) | Preparing parallel tasks to use a synchronization register | |
US8010648B2 (en) | Replica placement in a distributed storage system | |
US11474919B2 (en) | Method for managing multiple disks, electronic device and computer program product | |
US10664392B2 (en) | Method and device for managing storage system | |
US11023141B2 (en) | Resiliency schemes for distributed storage systems | |
WO2017050064A1 (en) | Memory management method and device for shared memory database | |
CN107920101B (en) | File access method, device and system and electronic equipment | |
CN111708738A (en) | Method and system for realizing data inter-access between hdfs of hadoop file system and s3 of object storage | |
CN109271376A (en) | Database upgrade method, apparatus, equipment and storage medium | |
CN112948279A (en) | Method, apparatus and program product for managing access requests in a storage system | |
CN111124264A (en) | Method, apparatus and computer program product for reconstructing data | |
KR101765725B1 (en) | System and Method for connecting dynamic device on mass broadcasting Big Data Parallel Distributed Processing | |
US20190347165A1 (en) | Apparatus and method for recovering distributed file system | |
CN115756955A (en) | Data backup and data recovery method and device and computer equipment | |
CN111399761B (en) | Storage resource allocation method, device and equipment, and storage medium | |
CN106571935B (en) | Resource scheduling method and equipment | |
CN112433842B (en) | Method and equipment for distributing master node and slave node in service cluster | |
CN111506254B (en) | Distributed storage system and management method and device thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||