CN115827745A - Memory database cluster and implementation method and device thereof - Google Patents

Memory database cluster and implementation method and device thereof

Info

Publication number
CN115827745A
Authority
CN
China
Prior art keywords
memory database
database cluster
cluster
strategy
routing strategy
Prior art date
Legal status
Pending
Application number
CN202111088670.XA
Other languages
Chinese (zh)
Inventor
杜伟
赵彤
董俊峰
强群力
刘超千
Current Assignee
NetsUnion Clearing Corp
Original Assignee
NetsUnion Clearing Corp
Priority date
Filing date
Publication date
Application filed by NetsUnion Clearing Corp filed Critical NetsUnion Clearing Corp
Priority to CN202111088670.XA
Publication of CN115827745A

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a memory database cluster and a method and device for implementing it, wherein the method comprises: generating a first routing strategy according to the data keys and the attributes of the memory database clusters; generating a second routing strategy according to the data keys and the attributes of the memory database nodes in each memory database cluster; and deploying the first routing strategy in a proxy server and a corresponding second routing strategy in each memory database cluster, so that the proxy server routes a request from an application to a target memory database cluster according to the first routing strategy, and the target memory database cluster routes the request to the corresponding memory database node according to the second routing strategy for processing. By arranging the proxy server, a plurality of memory database clusters form one super-large cluster, which can meet ever-increasing service demands and improves the processing performance of the memory database.

Description

Memory database cluster and implementation method and device thereof
Technical Field
The application relates to the technical field of databases, in particular to a method and a device for realizing a memory database cluster and the memory database cluster.
Background
A memory database, such as Redis, is an open-source, Key-Value-based in-memory database. A memory database Cluster, such as Redis Cluster, is the cluster implementation provided officially by Redis. Generally, a memory database Cluster has a size limit; for example, one Redis Cluster can contain at most about 1000 Redis instances (the Alibaba Cloud Redis Cluster currently supports a maximum of 256 Redis shards, each shard containing one Master and two Slaves, for 768 Redis instances in total). Therefore, as service volume keeps growing, a single Redis Cluster cannot meet service requirements.
Disclosure of Invention
The embodiment of the application provides a method and a device for realizing a memory database cluster and the memory database cluster.
The embodiment of the application adopts the following technical scheme:
in a first aspect, an embodiment of the present application provides a method for implementing a memory database cluster, where the method is executed by a control server, and includes:
generating a first routing strategy according to the data key and the attribute of the memory database cluster;
generating a second routing strategy of each memory database cluster according to the data key and the attribute of each memory database node in each memory database cluster;
the method comprises the steps of deploying a first routing strategy in a proxy server, deploying a corresponding second routing strategy in each memory database cluster, enabling the proxy server to route a request from an application to a target memory database cluster according to the first routing strategy, enabling the target memory database cluster to route the request to a corresponding memory database node according to the second routing strategy, and accordingly processing the request.
Optionally, the method further includes: when a capacity expansion or reduction condition is met, adjusting the number of memory database clusters enabled by the memory database cluster, and/or adjusting the number of memory database nodes enabled in each memory database cluster.
Optionally, the method further includes: acquiring first capacity expansion and reduction index data, wherein the first capacity expansion and reduction index data include the number of requests received by the proxy server and/or the number of requests received by each memory database cluster within a preset historical time;
and if the first capacity expansion and reduction index data trigger a preset first capacity expansion and reduction strategy, determining that a first capacity expansion and reduction condition is met, and adjusting the number of the memory database clusters started by the memory database clusters.
Optionally, in the above method, if the first capacity expansion and reduction index data is greater than the preset upper limit of the first capacity expansion and reduction strategy, the first capacity expansion and reduction index data triggers a preset first capacity expansion strategy;
adjusting the number of memory database clusters enabled by the memory database cluster comprises:
creating a new memory database cluster;
copying the slot to be migrated in each enabled memory database cluster and the corresponding data thereof to a new memory database cluster according to a first preset partition rule;
deleting the slot to be migrated and the corresponding data in each enabled memory database cluster;
and updating the first routing strategy and the second routing strategy according to the first preset partition rule.
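The four expansion steps above (create a new cluster, copy slots to be migrated, delete the migrated slots, update the routing strategies) can be sketched roughly as follows. This is a minimal illustration only: the cluster objects are plain dicts mapping slot number to data, and the even-distribution partition rule is a hypothetical stand-in, not the patent's actual partition rule.

```python
# Hypothetical sketch of the cluster-level expansion steps. Each "cluster"
# here is just a dict of slot -> data; all names are illustrative.

def expand_clusters(clusters, total_slots=16384):
    """Add one cluster, migrate a share of slots into it, rebuild routing."""
    clusters.append({})                          # step 1: create a new cluster
    n = len(clusters)
    for cluster in clusters[:-1]:
        # step 2: pick slots to migrate under a simple even-distribution rule
        to_move = [s for s in list(cluster) if s % n == n - 1]
        for slot in to_move:
            clusters[-1][slot] = cluster[slot]   # copy slot and its data
            del cluster[slot]                    # step 3: delete migrated slot
    # step 4: rebuild the first routing strategy (slot -> cluster index)
    return {slot: i for i, c in enumerate(clusters) for slot in c}
```

The reduction case described next is the mirror image: slots of a cluster to be deactivated are copied to the remaining clusters before the cluster is deleted.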
Optionally, in the above method, if the first capacity expansion and reduction index data is smaller than a preset lower limit value of the first capacity expansion and reduction strategy, the first capacity expansion and reduction index data triggers a preset first capacity reduction strategy;
adjusting the number of memory database clusters enabled by the memory database cluster comprises:
copying the slot of the memory database cluster to be deactivated and the corresponding data thereof to other enabled memory database clusters according to a first preset partition rule;
deleting the slots in each memory database cluster to be deactivated and the corresponding data;
deleting the memory database cluster to be deactivated;
and updating the first routing strategy and the second routing strategy according to the first preset partition rule.
Optionally, the method further includes:
acquiring second capacity expansion and reduction index data, wherein the second capacity expansion and reduction index data comprise the number of requests received by each memory database node of each memory database cluster in preset time;
and if the second capacity expansion and reduction index data trigger a preset second capacity expansion and reduction strategy, determining that a second capacity expansion and reduction condition is met, and adjusting the number of memory database nodes of the memory database cluster started by the memory database cluster.
Optionally, in the above method, if the second capacity expansion and reduction index data is greater than the preset upper limit of the second capacity expansion and reduction strategy, the second capacity expansion and reduction index data triggers a preset second capacity expansion strategy;
adjusting the number of memory database nodes of the memory database cluster enabled by the memory database cluster comprises:
creating a new memory database node;
copying the slot to be migrated and the corresponding data in the memory database node of each memory database cluster to a new memory database node according to a second preset partition rule;
deleting the slot to be migrated and the corresponding data in the memory database node of each memory database cluster;
and updating the first routing strategy and the second routing strategy according to a second preset partition rule.
Optionally, in the above method, if the second capacity expansion and reduction index data is smaller than the preset lower limit value of the second capacity expansion and reduction strategy, the second capacity expansion and reduction index data triggers a preset second capacity reduction strategy;
adjusting the number of memory database nodes of the memory database cluster enabled by the memory database cluster comprises:
copying the slots of the memory database nodes to be deactivated and the corresponding data thereof to other enabled memory database nodes according to a second preset partition rule;
deleting the slot in each memory database node to be deactivated and the corresponding data;
deleting the memory database nodes to be deactivated;
and updating the first routing strategy and the second routing strategy according to a second preset partition rule.
In a second aspect, an embodiment of the present application provides an apparatus for implementing a memory database cluster, where the apparatus is disposed in a control server, and the apparatus includes:
the first generating device is used for generating a first routing strategy according to the data key and the attribute of the memory database cluster;
the second generating device is used for generating a second routing strategy of each memory database cluster according to the data key and the attribute of each memory database node in each memory database cluster;
the deployment device is used for deploying a first routing strategy in the proxy server and deploying a corresponding second routing strategy in each memory database cluster, so that the proxy server can route the request from the application to the target memory database cluster according to the first routing strategy, and the target memory database cluster can route the request to the corresponding memory database node according to the second routing strategy, thereby processing the request.
In a third aspect, a memory database cluster is provided, where the memory database cluster includes a proxy server and a plurality of memory database clusters, the proxy server and the plurality of memory database clusters are respectively connected in communication, and the proxy server can be connected in communication with one or more applications;
a first routing strategy is deployed in the proxy server, and the first routing strategy is generated according to the data key and the attribute of the memory database cluster; the proxy server is used for routing the request from the application to the target memory database cluster according to the first routing strategy;
a second routing strategy is respectively deployed in each memory database cluster, and the second routing strategy is generated according to the data key and the attribute of each memory database node in each memory database cluster;
the memory database cluster is used for routing the request to the corresponding memory database node according to the second routing strategy, so as to process the request.
Optionally, in the memory database cluster, the proxy server includes one or more proxy nodes, and each memory database cluster includes one or more memory database nodes;
each agent node is in communication connection with any one memory database node in each memory database cluster;
a first routing strategy is deployed in each agent node; the agent node is used for routing a request from an application to a memory database node which is in communication connection with the agent node in a target memory database cluster according to a first routing strategy deployed in the agent node;
a second routing strategy is respectively deployed in the memory database nodes which are in communication connection with the agent nodes;
and the memory database node is used for routing the request to the corresponding memory database node according to the second routing strategy deployed in the memory database node so as to process the request.
In a fourth aspect, an embodiment of the present application further provides an electronic device, including: a processor; and a memory arranged to store computer executable instructions that, when executed, cause the processor to perform any of the methods described above.
In a fifth aspect, embodiments of the present application further provide a computer-readable storage medium storing one or more programs that, when executed by an electronic device that includes a plurality of application programs, cause the electronic device to perform any of the methods described above.
The embodiment of the application adopts at least one technical scheme which can achieve the following beneficial effects:
the method comprises the steps that a proxy server is arranged, the proxy server is connected with a plurality of memory database clusters, and logical partitioning is realized by deploying a first routing strategy in the proxy server, so that the memory database clusters form a super-large cluster, and the ever-increasing service requirements can be met; the connection between the proxy server and the memory database nodes can be reused, so that the proxy server does not need to be connected with each memory database node in the target memory database cluster, and only needs to be connected with any one memory database node arranged in the memory database cluster, the external connection number of the memory database nodes is greatly reduced, and the processing performance of the memory database is remarkably improved; in addition, the application can be directly connected to the proxy server, so that the memory database cluster does not need to be accessed through the client, and the difficulty of accessing the database by the application and the development difficulty of the application are reduced.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a diagram illustrating the structure of an in-memory database according to the prior art;
FIG. 2 is a flow chart illustrating a method for implementing a memory database cluster according to an embodiment of the present application;
FIG. 3 illustrates a schematic structural diagram of an in-memory database according to an embodiment of the present application;
FIG. 4 is a flow chart illustrating a method for implementing a memory database cluster according to another embodiment of the present application;
FIG. 5 is a block diagram illustrating an apparatus for implementing a memory database cluster according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings. Redis is one of the best-known in-memory databases, and in the embodiments of the present application, redis is mainly used for illustration, but it should be understood that the embodiments of the present application are not limited to Redis, and are also applicable to other in-memory databases.
Fig. 1 shows a schematic structural diagram of a memory database according to the prior art. As can be seen from Fig. 1, a memory database Cluster (Redis Cluster) includes a plurality of memory database nodes, also called memory database shards (Redis shards). Sharding disperses data into a plurality of Redis instance groups according to the Key dimension, so as to relieve performance bottlenecks and improve availability; each group of Redis instances is one Redis shard. Taking the one-Master-two-Slave mode as an example, each Redis shard includes one Master server and two Slave servers, that is, each group of Redis instances includes three Redis instances.
In the prior art, when accessing the Redis Cluster, an application accesses it directly through a Redis smart client and needs a communication connection with every memory database node (i.e., every Redis shard) of the Redis Cluster. On the one hand, the number of instances one Redis Cluster can accommodate has an upper limit, usually 1000 instances, so as the business scale grows a single Redis Cluster cannot meet future expansion needs; on the other hand, the application holds too many connections to the Redis instances, which noticeably reduces the performance of the Redis instances.
Different from the prior art, the idea of the present application is to arrange a proxy layer, namely a proxy server, between the application and the Redis Clusters. When an application needs to access a Redis Cluster, it does not need a direct communication connection with the Redis Cluster; instead, it accesses the Redis Cluster through the proxy server, which distributes the requests from the application to the Redis Clusters for processing. The proxy server can be connected with a plurality of Redis Clusters, so that multiple Redis Clusters form one super-large memory database cluster that meets ever-increasing business requirements.
Fig. 2 is a schematic flowchart illustrating a method for implementing a memory database cluster according to an embodiment of the present application, and as can be seen from fig. 2, the present application at least includes steps S210 to S230:
step S210: and generating a first routing strategy according to the data key and the attribute of the memory database cluster.
The first routing strategy is used for being deployed in the proxy server, and the applied request can be distributed to a target memory database cluster corresponding to the request according to the first routing strategy.
In some embodiments, the attribute of a memory database Cluster may be its number. The first routing strategy is the rule that maps a Key to a Redis Cluster: which Redis Cluster serves which Key is determined by the specific content of the first routing strategy. As mentioned above, Redis sharding distributes data across Redis instance groups, and in a scenario with multiple Redis Clusters, relying on Redis sharding alone for data routing is inefficient. The application solves this by logically partitioning the Redis Clusters at the proxy layer: a plurality of Redis Clusters are connected through the proxy server, the first routing strategy is deployed in the proxy server, and based on that strategy a Key can be routed to the appropriate Redis Cluster, thereby realizing logical partitioning of the Redis Clusters.
The first routing strategy can be generated automatically during initialization, for example by using the existing redis-trib.rb script; or it may be specified (i.e., developed) by the user, for example based on the principle of even distribution. The present application is not limited in this respect.
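As a minimal sketch of what a user-specified first routing strategy based on even distribution might look like, the key can be hashed and spread across the enabled clusters. The function name and the choice of CRC16 are illustrative assumptions, not the patent's concrete strategy:

```python
import binascii

def first_route(key: str, num_clusters: int) -> int:
    """Illustrative first routing strategy: map a data key to the number
    of a memory database cluster by hashing the key (CRC16 here) and
    distributing keys evenly across the enabled clusters."""
    h = binascii.crc_hqx(key.encode(), 0)  # 16-bit CRC of the key bytes
    return h % num_clusters
```

A proxy server holding this strategy would forward a request for `"user:1001"` to cluster `first_route("user:1001", num_clusters)`.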
Step S220: and generating a second routing strategy of each memory database cluster according to the data key and the attribute of each memory database node in each memory database cluster.
The second routing strategy is deployed in each Redis Cluster. According to its second routing strategy, each Redis Cluster can send the application requests it receives to the target memory database node within that Redis Cluster.
The second routing strategy is generated from the data keys and the attributes of the memory database nodes (Redis nodes) in each memory database cluster. For example, a Redis Cluster includes 16384 slots, numbered 0, 1, 2, 3, ..., 16382, 16383; it should be noted that these slots are virtual and do not really exist. In normal operation, the Master of each Redis node in the Redis Cluster is responsible for a part of the slots. For example, if a Redis Cluster has 3 Redis nodes whose Master nodes are Master 1, Master 2 and Master 3, then Master 1 is responsible for slots 0-4999, Master 2 for slots 5000-9999, and Master 3 for slots 10000-16383.
When a Key is mapped to a slot that a certain Master is responsible for, that Master provides service for the Key. As with the first routing strategy, which Master is responsible for which slots can be specified by the user or generated automatically during initialization. It should be noted that among the memory database nodes, only a Master has ownership of slots; a Slave of a Master is only responsible for using the slots and has no ownership of them.
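The Key-to-slot-to-Master mapping just described can be sketched as follows, using the example slot ranges from the text (0-4999, 5000-9999, 10000-16383). Python's `binascii.crc_hqx` implements the same CRC16/XMODEM variant that Redis Cluster uses for slot hashing; the function and table names here are illustrative:

```python
import binascii

SLOT_COUNT = 16384

# Slot ownership taken from the example in the text.
SLOT_RANGES = {"Master 1": range(0, 5000),
               "Master 2": range(5000, 10000),
               "Master 3": range(10000, 16384)}

def key_to_slot(key: str) -> int:
    """Redis Cluster computes slot = CRC16(key) mod 16384."""
    return binascii.crc_hqx(key.encode(), 0) % SLOT_COUNT

def second_route(key: str) -> str:
    """Illustrative second routing strategy: slot range -> owning Master."""
    slot = key_to_slot(key)
    return next(m for m, r in SLOT_RANGES.items() if slot in r)
```

(This sketch ignores Redis hash tags, i.e. `{...}` substrings in keys, which the real Redis Cluster honours when computing the slot.)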
In different Redis Clusters, the second routing policy may be the same or different, and may be set according to the number of Redis nodes in different Redis Clusters and the difference of attributes.
It should be noted here that the protocols of the first routing policy and the second routing policy may be different, because the object of the first routing policy is a Redis Cluster, and the object of the second routing policy is a memory database node (Redis node), and more specifically, a Master in each Redis Cluster. The first routing policy may be any existing network communication protocol, and the second routing policy may be an existing protocol for Redis Cluster.
Step S230: the method comprises the steps of deploying a first routing strategy in a proxy server, deploying a corresponding second routing strategy in each memory database cluster, enabling the proxy server to route a request from an application to a target memory database cluster according to the first routing strategy, enabling the target memory database cluster to route the request to a corresponding memory database node according to the second routing strategy, and accordingly processing the request.
With the first routing strategy deployed in the proxy server and the second routing strategies deployed in the corresponding memory database clusters, when an application accesses the Redis database it sends a request to the proxy server; the proxy server determines the target memory database cluster for the request based on the first routing strategy and forwards the request to it; after receiving the request, the target memory database cluster determines the target memory database node based on the second routing strategy and forwards the request to that node, which responds to or processes the request. In some embodiments of the present application, the target memory database node may specifically be a Redis shard, which may run in Master-Slave mode, such as (but not limited to) one-Master-two-Slave; the Master of the Redis shard responds to or processes the request.
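The end-to-end, two-level routing path can be simulated in a few lines. This is a hypothetical sketch: the two strategies are modeled as plain dicts keyed by slot, and Python's built-in `hash` stands in for the real key-to-slot hash function:

```python
def handle_request(key, first_policy, second_policies):
    """Two-level routing: the proxy picks the target cluster via the
    first strategy, then that cluster picks the node via its own second
    strategy. Policies here are dicts of slot -> target (illustrative)."""
    slot = hash(key) % 16384                  # stand-in for CRC16-based hashing
    cluster_id = first_policy[slot]           # level 1: proxy -> cluster
    node = second_policies[cluster_id][slot]  # level 2: cluster -> node
    return cluster_id, node
```

In a real deployment the proxy would forward the request over its (multiplexed) connection to the chosen node rather than returning identifiers.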
Fig. 3 shows a schematic structural diagram of a memory database according to an embodiment of the present application, where the embodiment shown in fig. 3 is a memory database obtained according to the implementation method of the memory database Cluster described above, and as can be seen from fig. 3, in the memory database Cluster obtained by using the implementation method of the memory database Cluster of the present application, a proxy server is arranged between an application and a Redis Cluster, and the proxy server is connected to Redis nodes of a plurality of Redis clusters.
When the application accesses the Redis Cluster, the application is not directly connected with the Redis Cluster any more, but indirectly accesses the Redis Cluster through the proxy server, and the proxy server can route the request of the application to a certain Redis Cluster so as to process the request.
In the prior art, an application can only access one Redis Cluster. In the present application, the proxy server can be connected with a plurality of Redis Clusters, which together form one super-large memory database cluster; when an application accesses what appears to be a single Redis Cluster, it is in fact accessing this whole memory database cluster through the proxy server, so the service processing capacity of the memory database cluster as a whole is remarkably improved.
On the other hand, in the prior art, when an application accesses a Redis Cluster, it needs to connect to every Redis node of the Redis Cluster; in the present application, only one connection is needed between the application and the proxy server, which reduces the difficulty of accessing the memory database. Moreover, in the embodiment of the application, when the proxy server connects to a Redis Cluster, the decentralized (no-central-node) structure of the Redis Cluster can be exploited: the proxy server does not need to connect to every Redis node of the Redis Cluster, but only to any one or several of its Redis nodes (specifically, to the Master of a Redis node). This greatly reduces the number of external connections to the Redis nodes and forms a multiplexing design: when forwarding application request instructions, the proxy server can merge them, so that many application connections are multiplexed over one or a few connections to the Redis nodes, which noticeably improves the performance of the Redis nodes and Redis instances.
As can be seen from the method shown in fig. 2 and the memory database cluster shown in fig. 3, in the present application, by setting a proxy server, the proxy server may connect to a plurality of memory database clusters, and by deploying a first routing policy in the proxy server, a logical partition is implemented, so that the plurality of memory database clusters form a super-large cluster, which can meet the ever-increasing service requirements; the connection between the proxy server and the memory database nodes can be reused, so that the proxy server does not need to be connected with each memory database node in the target memory database cluster, and only needs to be connected with any one memory database node arranged in the memory database cluster, the external connection quantity of the memory database nodes is greatly reduced, and the processing performance of the memory database is remarkably improved; in addition, the application can be directly connected to the proxy server, so that the memory database cluster does not need to be accessed through the client, and the difficulty of accessing the database by the application and the development difficulty of the application are reduced.
In some embodiments of the present application, the method further comprises: and under the condition of meeting the expansion and contraction capacity conditions, adjusting the number of memory database clusters started by the memory database cluster, and/or adjusting the number of memory database nodes of the memory database clusters started by the memory database cluster.
In order to configure and utilize hardware resources more reasonably while meeting the applications' access requirements on the database, when the memory database cluster meets a capacity expansion or reduction condition, operation and maintenance personnel can manually increase or decrease the number of memory database clusters enabled by the memory database cluster, and can further increase or decrease the number of memory database nodes enabled in each memory database cluster, so as to meet the requirements of different scenarios.
The number of enabled memory database clusters and the number of memory database nodes may be adjusted simultaneously or separately. In the prior art, usually only the number of memory database nodes within one memory database cluster can be adjusted to enlarge its overall capacity, but that number has an upper limit.
In some embodiments of the present application, the method further comprises: acquiring first capacity expansion and reduction index data, wherein the first capacity expansion and reduction index data comprise the number of requests received by a proxy server and/or the number of requests received by each memory database cluster in preset historical time; and if the first scaling index data triggers a preset first scaling strategy, determining that the first scaling condition is met, and adjusting the number of the memory database clusters started by the memory database cluster.
In order to achieve reasonable configuration of resources, in some usage scenarios it is necessary to expand or shrink the memory database cluster. In the present application, there are two ways to do so: first, the number of memory database clusters may be adjusted; second, the number of memory database nodes in each memory database cluster may be adjusted.
As described above, in the prior art only the second way is generally available, that is, only the number of memory database nodes within a memory database cluster can be adjusted; since the upper limit on the number of instances in a memory database cluster is generally 1000, once that limit is reached no further expansion is possible. In the present application, when the instances in each memory database cluster have reached the upper limit, the number of memory database clusters may be adjusted to achieve expansion or reduction.
Whether capacity expansion or capacity reduction needs to be carried out on the memory database cluster can be determined according to the first capacity expansion and reduction index data and a preset first capacity expansion and reduction strategy, wherein the first capacity expansion and reduction strategy can be formulated according to the existing bearing capacity of the memory database cluster.
For example, in one embodiment, the overall memory database cluster includes 3 memory database clusters, each with 768 memory database nodes. Assuming that each node can process 1 request within the same preset time period, the service throughput of one memory database cluster in that period is 768, and the throughput of the whole memory database cluster is 2304. A first capacity expansion and reduction strategy can be set according to this performance: for example, the first upper limit value for the whole memory database cluster is 2300 and the first lower limit value is 700. A second upper limit value and a second lower limit value can also be set in the strategy for each individual memory database cluster, for example a second upper limit value of 750 and a second lower limit value of 350.
In the first capacity expansion and reduction strategy, corresponding strategies for different scenarios can be set. For example, if the first scaling index data is greater than the preset upper limit value of the strategy, the data triggers the preset first capacity expansion strategy, that is, the first capacity expansion condition is determined to be met and capacity expansion is required; if the first scaling index data is smaller than the preset lower limit value of the strategy, the data triggers the preset first capacity reduction strategy, that is, the first capacity reduction condition is determined to be met and capacity reduction is required.
If the first scaling index is the first request quantity received by the proxy server within the preset historical time, then capacity expansion is performed when it exceeds the first upper limit value; specifically, the number of memory database clusters can be increased, with the exact number determined according to the first request quantity and/or the memory occupation of the memory database clusters. If the first scaling index is smaller than the first lower limit value, capacity reduction is performed, and the number of memory database clusters to remove is likewise determined according to the first request quantity and/or the memory occupation of the memory database clusters.
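As an illustration only, the threshold logic above can be sketched as follows. The threshold values (2300 and 700) come from the earlier example; the function and variable names are hypothetical rather than part of the application.

```python
# Hypothetical sketch of the first capacity expansion and reduction strategy.
# Thresholds follow the example in the text (upper 2300, lower 700).

FIRST_UPPER_LIMIT = 2300  # expansion threshold for the whole cluster
FIRST_LOWER_LIMIT = 700   # reduction threshold for the whole cluster

def first_scaling_decision(proxy_request_count: int) -> str:
    """Decide the scaling action from the number of requests the proxy
    server received within the preset historical time window."""
    if proxy_request_count > FIRST_UPPER_LIMIT:
        return "expand"  # increase the number of memory database clusters
    if proxy_request_count < FIRST_LOWER_LIMIT:
        return "shrink"  # decrease the number of memory database clusters
    return "keep"

print(first_scaling_decision(2500))  # -> expand
print(first_scaling_decision(600))   # -> shrink
```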
For example, in some embodiments of the present application, the first scaling index is the first number of requests received by the proxy server within the preset historical time. If the first number of requests is 600, that is, smaller than the first lower limit value, this indicates that there are many idle resources in the memory database cluster and capacity reduction can be performed. If the first number of requests is 2500, that is, greater than the first upper limit value, this indicates that the requests exceed the maximum bearing capacity of the whole memory database cluster; in this case capacity expansion can be performed, specifically by increasing the number of memory database clusters to improve service processing capacity.
If the first scaling index is the number of second requests received by each memory database cluster within the preset historical time, the second request number represents the number of application requests received by a given memory database cluster over a period of time. If the number of requests received by some or all memory database clusters exceeds their bearing capacity, the number of memory database clusters can be increased; if the number of requests received by some or all memory database clusters is much smaller than their bearing capacity, capacity reduction can be performed, specifically by decreasing the number of memory database clusters. For example, if the number of second requests received by each memory database cluster is greater than 750, the requests exceed the bearing capacity of each cluster, and the problem can be solved by increasing the number of memory database clusters; if the number of second requests received by each memory database cluster is less than 350, the workload of each cluster is unsaturated, and memory database clusters can be removed to reallocate resources.
In some embodiments of the present application, the first request quantity and the second request quantity may also be considered together, that is, the operating conditions of both the whole memory database cluster and each individual memory database cluster are taken into account. This condition may also be set in the first capacity expansion and reduction strategy, for example performing capacity expansion only when the first request quantity and the second request quantity simultaneously exceed their corresponding upper limit values; alternatively, the first scaling index may be the sum of the first request quantity and the second request quantity, with capacity expansion performed only when that sum reaches a preset threshold value.
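A hedged sketch of such a combined condition, reusing the example thresholds from earlier (2300 overall, 750 per cluster); the function name and the exact combination rule are assumptions for illustration only.

```python
# Expansion is triggered only when both the proxy-level request count and
# every per-cluster request count exceed their upper limits.
# Threshold values reuse the earlier example and are assumptions.

def combined_expand(total_requests: int, per_cluster_requests: list) -> bool:
    over_total = total_requests > 2300                       # first request quantity
    over_each = all(n > 750 for n in per_cluster_requests)   # second request quantities
    return over_total and over_each

print(combined_expand(2500, [800, 820, 900]))  # -> True
print(combined_expand(2500, [800, 400, 900]))  # -> False
```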
In some embodiments of the present application, adding or deleting a memory database cluster may proceed as in the following embodiments. In some embodiments, expanding the number of enabled memory database clusters includes: creating a new memory database cluster; copying the slots to be migrated in each enabled memory database cluster, together with their corresponding data, to the new memory database cluster according to a first preset partition rule; deleting the migrated slots and their data from each enabled memory database cluster; and updating the first routing strategy and the second routing strategy according to the first preset partition rule.
Taking the addition of two memory database clusters as an example, two new memory database clusters are first created, denoted memory database cluster A and memory database cluster B; both new clusters are initially empty.
Suppose the original memory database cluster contains 3 memory database clusters, denoted memory database cluster 1, memory database cluster 2 and memory database cluster 3, the whole cluster has 49159 slots, and the original partition rule is even allocation: memory database cluster 1 holds the slots numbered 0-16383, memory database cluster 2 holds the slots numbered 16384-32767, and memory database cluster 3 holds the slots numbered 32768-49159.
When capacity expansion is needed, part of the original slots of each memory database cluster and the data corresponding to those slots are re-partitioned according to the first preset partition rule, and the partitions to be migrated are determined. The slots (Keys) in the partitions to be migrated and their corresponding data (Values) are then migrated to the newly added memory database clusters. Assuming the first preset partition rule allows at most 10000 slots per memory database cluster, in this embodiment the partitions to be migrated are the slots numbered 10000-16383 of memory database cluster 1, the slots numbered 26384-32767 of memory database cluster 2, and the slots numbered 42768-49159 of memory database cluster 3.
During migration, the slots numbered 10000-16383 in memory database cluster 1 and their corresponding data can be migrated to memory database cluster A, along with the slots numbered 26384-29999 in memory database cluster 2 and their data; the slots numbered 30000-32767 in memory database cluster 2 and their data are migrated to memory database cluster B, together with the slots numbered 42768-49159 in memory database cluster 3 and their data. Other allocation methods can also be adopted, as long as it is guaranteed that no memory database cluster holds more than 10000 slots.
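The repartition in this example can be sketched as a greedy packing of each cluster's overflow slot ranges into the new clusters. The 10000-slot cap comes from the text; the function name and the packing order are assumptions for illustration.

```python
# Illustrative sketch: each existing cluster keeps at most CAP slots and the
# overflow ranges are packed into the new clusters in order.

CAP = 10000

def plan_migration(old_ranges, new_clusters):
    """old_ranges: {cluster: (first_slot, last_slot)} for enabled clusters.
    Returns {new_cluster: [(source, first, last), ...]} as a migration plan."""
    overflow = []
    for cluster, (lo, hi) in old_ranges.items():
        if hi - lo + 1 > CAP:
            overflow.append((cluster, lo + CAP, hi))  # slots beyond the cap
    plan = {c: [] for c in new_clusters}
    idx, free = 0, CAP
    for src, lo, hi in overflow:
        while lo <= hi:
            take = min(free, hi - lo + 1)             # fill current new cluster
            plan[new_clusters[idx]].append((src, lo, lo + take - 1))
            lo += take
            free -= take
            if free == 0:
                idx, free = idx + 1, CAP              # move to next new cluster
    return plan

plan = plan_migration(
    {"cluster1": (0, 16383), "cluster2": (16384, 32767), "cluster3": (32768, 49159)},
    ["A", "B"],
)
print(plan["A"])  # -> [('cluster1', 10000, 16383), ('cluster2', 26384, 29999)]
```

This reproduces the allocation in the example above: cluster A receives slots 10000-16383 and 26384-29999, and cluster B receives slots 30000-32767 and 42768-49159.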
In the prior art, migration is performed with simultaneous deletion: for example, when multiple slots of a memory database cluster C and their corresponding data are migrated one by one to a memory database cluster D, each slot's record in cluster C is deleted and its memory released as soon as that slot and its data have been migrated. The problem with this approach is that migration takes a certain time, and during migration the migrated data cannot be accessed by the application; in other words, part of the functionality of cluster C is suspended during migration, which seriously affects application access and degrades the user experience.
Different from the prior art, migration in some embodiments of the present application follows a "copy first, then delete" principle. A specific migration process may be: copying the slots to be migrated and their corresponding data in each enabled memory database cluster to the new memory database cluster. More specifically, a Redis migration instance may be created in each memory database cluster, that is, one or more memory database nodes are used as the Redis migration instance for migrating the slots to be migrated and their corresponding data; the Redis migration instance then performs the copying and the subsequent deletion.
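A minimal sketch of the "copy first, then delete" idea, with an in-memory dict standing in for a real Redis cluster; names are illustrative assumptions. The point is that the destination holds a full copy before the source record is removed, so reads can still be served from the source throughout the copy.

```python
# "Copy first, then delete": the source keeps serving the slot until the
# destination has a complete copy.

def migrate_slot(src: dict, dst: dict, slot: int) -> None:
    dst[slot] = src[slot]  # step 1: copy the slot and its data
    # ... in the real system, routing strategies are updated here ...
    del src[slot]          # step 2: delete from the source only after the copy

cluster_c = {100: {"k1": "v1"}, 101: {"k2": "v2"}}
cluster_d = {}
migrate_slot(cluster_c, cluster_d, 100)
print(cluster_d)  # -> {100: {'k1': 'v1'}}
```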
Finally, the first routing strategy and the second routing strategy are updated according to the first preset partition rule. Under the original partition rule, the first routing strategy takes a modulus of 49159: a request whose data key maps to 0-16383 is routed to memory database cluster 1; 16384-32767 is routed to memory database cluster 2; and 32768-49159 is routed to memory database cluster 3. After updating, the modulus of the first routing strategy is still 49159: a request whose data key maps to 0-9999 is routed to memory database cluster 1; 16384-26383 is routed to memory database cluster 2; 32768-42767 is routed to memory database cluster 3; 10000-16383 or 26384-29999 is routed to memory database cluster A; and 30000-32767 or 42768-49158 is routed to memory database cluster B.
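The updated first routing strategy from the example above can be sketched as a slot-range lookup table; the ranges mirror the post-expansion layout in the text, while the lookup function itself is an assumed implementation.

```python
# Post-expansion first routing strategy as a range table (values from the
# example in the text; lookup code is illustrative).

UPDATED_RANGES = [
    ((0, 9999), "cluster1"),
    ((10000, 16383), "clusterA"),
    ((16384, 26383), "cluster2"),
    ((26384, 29999), "clusterA"),
    ((30000, 32767), "clusterB"),
    ((32768, 42767), "cluster3"),
    ((42768, 49158), "clusterB"),
]

def route(key: int, modulus: int = 49159) -> str:
    slot = key % modulus                      # data key -> slot number
    for (lo, hi), cluster in UPDATED_RANGES:  # slot number -> target cluster
        if lo <= slot <= hi:
            return cluster
    raise ValueError("slot not covered by the routing strategy")

print(route(5000))   # -> cluster1
print(route(12000))  # -> clusterA
```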
When the first routing policy is updated, the second routing policy also needs to be adjusted accordingly, which is not described again here.
In some usage scenarios it may also be necessary to reduce the capacity of the memory database cluster. Capacity reduction likewise has two scenarios: one is to reduce the number of memory database clusters, and the other is to reduce the number of memory database nodes in each memory database cluster.
The number of memory database clusters can be reduced as follows: copying the slots of the memory database cluster to be deactivated and their corresponding data to other enabled memory database clusters according to the first preset partition rule; deleting the slots and their data from the memory database cluster to be deactivated; deleting the memory database cluster to be deactivated; and updating the first routing strategy and the second routing strategy according to the first preset partition rule.
In general, each memory database cluster has idle space available. For example, if one memory database cluster contains 16383 slots but actually occupies only 5000, the occupied slots and their corresponding data in other memory database clusters can be migrated into it, reducing the memory occupation of those clusters. Suppose memory database cluster 1 actually occupies 5000 slots, numbered 0-4999, and memory database cluster 2 also occupies 5000 slots, likewise numbered 0-4999. Running both clusters wastes resources, so capacity reduction can be performed: all occupied slots in memory database cluster 2 and their corresponding data are migrated to memory database cluster 1, after which cluster 1 occupies 10000 slots, still within its bearing capacity, and memory database cluster 2 can be deleted and repurposed.
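The scale-in example above can be sketched as follows, again with dicts standing in for clusters. Because both clusters number their slots from 0, the sketch renumbers cluster 2's slots into cluster 1's free range; the offset value is an assumption for illustration.

```python
# Illustrative scale-in: cluster2's occupied slots are copied into cluster1's
# free slot range so that cluster2 can then be deleted.

def merge_clusters(dst: dict, src: dict, offset: int) -> None:
    for slot, data in list(src.items()):
        dst[slot + offset] = data  # copy into the destination's free range
        del src[slot]              # delete from the source after the copy

cluster1 = {s: f"d{s}" for s in range(0, 3)}  # stands in for slots 0-4999
cluster2 = {s: f"e{s}" for s in range(0, 3)}  # also numbered from 0
merge_clusters(cluster1, cluster2, offset=5000)
print(sorted(cluster1))  # -> [0, 1, 2, 5000, 5001, 5002]
```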
The migration process itself may follow the prior art, or the "copy first, then delete" method of the present application described above may be used.
Finally, the first routing strategy and the second routing strategy are updated according to the first preset partition rule. For example, if the slots of the memory database cluster are renumbered 0-9999 and the modulus of the first routing strategy is set to 10000, a request whose data key maps to 0-9999 is routed to memory database cluster 1.
In some embodiments of the present application, the method further comprises: acquiring second scaling index data, wherein the second scaling index data comprises the number of requests received by each memory database node of each memory database cluster within preset time; and if the second capacity expansion and reduction index data trigger a preset second capacity expansion and reduction strategy, determining that a second capacity expansion and reduction condition is met, and adjusting the number of memory database nodes of the memory database cluster started by the memory database cluster.
The second scaling index represents the number of requests received by each memory database node within a preset time. In some cases, the memory database nodes of each memory database cluster can be adjusted, without increasing or decreasing the number of memory database clusters, to achieve capacity expansion or reduction.
If the second scaling index data is greater than the preset upper limit value of the second capacity expansion and reduction strategy, it triggers the preset second capacity expansion strategy, that is, the second capacity expansion condition is met, and the number of memory database nodes can be increased for capacity expansion. It should be noted that a precondition for a memory database cluster to add memory database nodes is that its number of instances has not yet reached the upper limit value; in general, the maximum upper limit is 1000, so taking a master-slave mode as an example, the number of Master nodes among the memory database nodes of one memory database cluster cannot exceed 330.
The capacity expansion mode of the memory database node can also adopt a method similar to the capacity expansion mode of the memory database cluster, and specifically, a new memory database node is created; copying the slot to be migrated and the corresponding data in the memory database node of each memory database cluster to a new memory database node according to a second preset partition rule; deleting the slot to be migrated and the corresponding data in the memory database node of each memory database cluster; and updating the first routing strategy and the second routing strategy according to a second preset partition rule.
Similarly, if the second scaling index data is smaller than the preset lower limit value of the second capacity expansion and reduction strategy, it triggers the preset second capacity reduction strategy, that is, the second capacity reduction condition is met. The memory database nodes of each memory database cluster can then be reduced, specifically by: copying the slots of the memory database nodes to be deactivated and their corresponding data to other enabled memory database nodes according to the second preset partition rule; deleting the slots and their data from the memory database nodes to be deactivated; deleting the memory database nodes to be deactivated; and updating the first routing strategy and the second routing strategy according to the second preset partition rule.
It should be noted that, in some embodiments of the present application, the "copy first, then delete" mode may also be used when migrating slots and their corresponding data between memory database nodes, which avoids affecting the application's access to the memory database and ensures its high availability.
Fig. 4 is a flowchart illustrating a method for implementing a memory database cluster according to another embodiment of the present application, where a first routing policy is generated according to a data key and an attribute of the memory database cluster, and the first routing policy is deployed in a proxy server.
And generating a second routing strategy of each memory database cluster according to the data key and the attribute of each memory database node in each memory database cluster, deploying the corresponding second routing strategy in each memory database cluster, and establishing an initial memory database cluster.
And acquiring first scaling index data, and judging whether the first scaling index data triggers a first scaling strategy or not. And if the first capacity expansion and reduction index data trigger the first capacity expansion strategy, increasing the number of the memory database clusters, and updating the first routing strategy and the second routing strategy.
And if the first capacity expansion and reduction index data trigger the first capacity reduction strategy, reducing the number of the memory database clusters, and updating the first routing strategy and the second routing strategy.
And acquiring second capacity expansion and reduction index data, and judging whether the second capacity expansion and reduction index data triggers a second capacity expansion and reduction strategy. And if the second capacity expansion and reduction index data triggers a second capacity expansion strategy, increasing the number of the nodes of the memory database, and updating the first routing strategy and the second routing strategy again.
And if the second capacity expansion and reduction index data triggers a second capacity reduction strategy, reducing the number of the nodes of the memory database, and updating the first routing strategy and the second routing strategy again.
And after receiving the application request, processing the request based on the first routing strategy and the second routing strategy which are updated again.
Fig. 5 is a schematic structural diagram illustrating an apparatus for implementing a memory database cluster according to an embodiment of the present application, where the apparatus 500 is disposed in a control server, and as can be seen from fig. 5, the apparatus 500 includes:
the first generating device 510 is configured to generate a first routing policy according to the data key and the attribute of the in-memory database cluster.
The first routing strategy is intended to be deployed in the proxy server, so that requests from the application can be distributed to the target memory database cluster corresponding to each request according to the first routing strategy.
In some embodiments of the present application, the attribute of the memory database Cluster may be the number of the memory database Cluster. The first routing policy is the rule that maps a Key to a particular Redis Cluster: the Redis Cluster to which a Key is mapped provides service for that Key, and which Redis Cluster is responsible for a given Key is determined by the specific content of the first routing policy. As mentioned above, Redis sharding distributes data across a group of Redis instances; in a scenario with multiple Redis Clusters, relying on Redis sharding alone for data routing is inefficient. The present application solves this problem by logically partitioning the Redis Clusters at a proxy layer: multiple Redis Clusters are connected through the proxy server, the first routing policy is deployed in the proxy server, and based on the first routing policy each Key can be routed to the appropriate Redis Cluster, thereby realizing logical partitioning of the Redis Clusters.
The first routing policy can be generated automatically during initialization, for example by using the existing redis-trib.rb script, or it may be specified by the user, that is, developed by the user, for example based on an even-distribution principle; the present application is not limited in this respect.
A second generating device 520, configured to generate a second routing policy for each memory database cluster according to the data key and the attribute of each memory database node in each memory database cluster;
The second routing strategy is deployed in each Redis Cluster. According to the second routing strategy, each Redis Cluster can send the application requests it receives to the target memory database node within that Redis Cluster.
The second routing policy is generated from data keys and attributes of the memory database nodes (Redis nodes) in each memory database cluster. For example, a Redis Cluster contains 16384 slots, numbered 0, 1, 2, 3, ..., 16382, 16383; it should be noted that a slot is a virtual slot and does not physically exist. In normal operation, the Master of each Redis node in the Redis Cluster is responsible for a portion of the slots. For example, suppose a Redis Cluster has 3 Redis nodes whose master nodes are Master 1, Master 2 and Master 3: Master 1 is responsible for the slots numbered 0-4999, Master 2 for the slots numbered 5000-9999, and Master 3 for the slots numbered 10000-16383.
When a Key is mapped to a slot that a given Master is responsible for, that Master provides service for the Key. As with the first routing strategy, which Master is responsible for which slots can be specified by the user or generated automatically during initialization. It should be noted that among the memory database nodes only the Master has ownership of a slot; a slave node (slave) of a Master merely uses the slot and has no ownership of it.
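The second routing strategy inside one Redis Cluster can be sketched with the master/slot layout from the example above. Real Redis Cluster maps a key to a slot with CRC16(key) mod 16384; in this sketch a plain hash stands in for CRC16, and all names are illustrative.

```python
# Slot layout from the example: Master 1 owns 0-4999, Master 2 owns
# 5000-9999, Master 3 owns 10000-16383.

MASTER_SLOTS = {
    "master1": range(0, 5000),
    "master2": range(5000, 10000),
    "master3": range(10000, 16384),
}

def slot_for_key(key: str) -> int:
    # Stand-in for Redis Cluster's CRC16(key) % 16384
    return hash(key) % 16384

def master_for_slot(slot: int) -> str:
    for master, slots in MASTER_SLOTS.items():
        if slot in slots:  # O(1) membership test on a range
            return master
    raise ValueError("slot not assigned to any master")

print(master_for_slot(4321))   # -> master1
print(master_for_slot(12000))  # -> master3
```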
In different Redis Clusters, the second routing policy may be the same or different, and may be set according to the number and attributes of the Redis nodes in each Redis Cluster.
It should be noted here that the protocols of the first routing policy and the second routing policy may differ, because the object of the first routing policy is a Redis Cluster while the object of the second routing policy is a memory database node (Redis node), more specifically a Master in each Redis Cluster. The first routing policy may use any existing network communication protocol, and the second routing policy may use the existing Redis Cluster protocol.
A deploying device 530, configured to deploy the first routing policy in the proxy server, and deploy a corresponding second routing policy in each memory database cluster, so that the proxy server can route the request from the application to the target memory database cluster according to the first routing policy, and the target memory database cluster can route the request to the corresponding memory database node according to the second routing policy, thereby processing the request.
The first routing strategy is deployed in the proxy server and the corresponding second routing strategies are deployed in the memory database clusters. When an application accesses the Redis database, it sends a request to the proxy server; after receiving the request, the proxy server determines the target memory database cluster based on the first routing strategy and forwards the request to it. After receiving the request, the target memory database cluster determines the target memory database node based on the second routing strategy and forwards the request to that node, which responds to or processes the request. In some embodiments of the present application, the target memory database node may specifically be a Redis shard, which may run in a master-slave mode such as, but not limited to, one master with two slaves; the master node of the Redis shard responds to or processes the request.
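The two-level request path described above can be summarized in a short sketch: the proxy applies the first routing strategy to pick a target cluster, and that cluster applies its second routing strategy to pick a target node. Every policy, threshold, and name here is an illustrative assumption, not the application's actual implementation.

```python
# Two-level routing: proxy (first strategy) -> cluster (second strategy).

def first_route(key_slot: int) -> str:
    # first routing strategy at the proxy: slot range -> target cluster
    return "cluster1" if key_slot < 16384 else "cluster2"

SECOND_ROUTES = {
    # second routing strategy inside each cluster: slot -> target node
    "cluster1": lambda slot: "node-a" if slot < 8192 else "node-b",
    "cluster2": lambda slot: "node-c",
}

def handle_request(key_slot: int) -> tuple:
    cluster = first_route(key_slot)          # step 1: proxy level
    node = SECOND_ROUTES[cluster](key_slot)  # step 2: cluster level
    return cluster, node                     # the node processes the request

print(handle_request(100))    # -> ('cluster1', 'node-a')
print(handle_request(20000))  # -> ('cluster2', 'node-c')
```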
With the apparatus shown in Fig. 5, a proxy server is provided that can connect multiple memory database clusters, and logical partitioning is realized by deploying the first routing strategy in the proxy server, so that the multiple memory database clusters form one super-large cluster capable of meeting ever-growing service requirements. The connections between the proxy server and the memory database nodes can be reused, so the proxy server does not need to connect to every memory database node in the target memory database cluster but only to any one memory database node in that cluster; this greatly reduces the number of external connections to the memory database nodes and markedly improves the processing performance of the memory database. In addition, because applications connect directly to the proxy server, the memory database cluster need not be accessed through a client, which lowers the difficulty of database access for applications and of application development.
In some embodiments of the present application, the apparatus further comprises: a capacity expansion and reduction unit, configured to adjust the number of enabled memory database clusters and/or the number of memory database nodes in each enabled memory database cluster when the capacity expansion and reduction condition is met.
In some embodiments of the present application, the apparatus further comprises: the system comprises a capacity expansion and reduction unit, a capacity expansion and reduction unit and a capacity expansion and reduction unit, wherein the capacity expansion and reduction unit is used for acquiring first capacity expansion and reduction index data, and the first capacity expansion and reduction index data comprise the number of requests received by a proxy server and/or the number of requests received by each memory database cluster within preset historical time; and if the first capacity expansion and reduction index data trigger a preset first capacity expansion and reduction strategy, determining that a first capacity expansion and reduction condition is met, and adjusting the number of the memory database clusters started by the memory database clusters.
In some embodiments of the present application, if the first scaling index data is greater than the preset upper limit value of the first capacity expansion and reduction strategy, it triggers the preset first capacity expansion strategy; the capacity expansion and reduction unit is configured to create a new memory database cluster; copy the slots to be migrated in each enabled memory database cluster and their corresponding data to the new memory database cluster according to the first preset partition rule; delete the migrated slots and their data from each enabled memory database cluster; and update the first routing strategy and the second routing strategy according to the first preset partition rule.
In some embodiments of the present application, if the first capacity expansion and reduction index data is smaller than a preset lower limit value of the first capacity expansion and reduction strategy, the first capacity expansion and reduction index data triggers a preset first capacity reduction strategy; the capacity expansion and reduction unit is used for copying the slots of the memory database cluster to be deactivated and the corresponding data thereof to other enabled memory database clusters according to a first preset partition rule; deleting the slots in each memory database cluster to be deactivated and the corresponding data thereof; deleting the memory database cluster to be deactivated; and updating the first routing strategy and the second routing strategy according to the first preset partition rule.
In some embodiments of the present application, the capacity expansion and reduction unit is further configured to acquire second capacity expansion and reduction index data, where the second capacity expansion and reduction index data includes the number of requests received by each memory database node of each memory database cluster within a preset time; and if the second capacity expansion and reduction index data triggers a preset second capacity expansion and reduction strategy, it is determined that a second capacity expansion and reduction condition is met, and the number of memory database nodes of the memory database cluster enabled by the memory database cluster is adjusted.
In some embodiments of the present application, if the second capacity expansion and reduction index data is greater than a preset upper limit value of the second capacity expansion and reduction strategy, the second capacity expansion and reduction index data triggers a preset second capacity expansion strategy; the capacity expansion and reduction unit is used for creating a new memory database node; copying the slots to be migrated and the corresponding data in the memory database nodes of each memory database cluster to the new memory database node according to a second preset partition rule; deleting the slots to be migrated and the corresponding data in the memory database nodes of each memory database cluster; and updating the first routing strategy and the second routing strategy according to the second preset partition rule.
In some embodiments of the present application, if the second capacity expansion and reduction index data is smaller than a preset lower limit value of the second capacity expansion and reduction strategy, the second capacity expansion and reduction index data triggers a preset second capacity reduction strategy; the capacity expansion and reduction unit is used for copying the slots of the memory database nodes to be deactivated and the corresponding data thereof to other enabled memory database nodes according to the second preset partition rule; deleting the slots in each memory database node to be deactivated and the corresponding data thereof; deleting the memory database nodes to be deactivated; and updating the first routing strategy and the second routing strategy according to the second preset partition rule.
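The capacity expansion and reduction flow described above — compare index data against preset upper and lower limit values, migrate slots, delete the source copies, then update the routing strategies — can be sketched roughly as follows. The thresholds, the round-robin stand-in for the "preset partition rule", and all identifiers are illustrative assumptions, not values prescribed by this application.

```python
# Hypothetical sketch of the capacity expansion/reduction check and the
# slot migration it triggers. Thresholds, names, and the round-robin
# partition rule are assumptions for illustration only.

UPPER_LIMIT, LOWER_LIMIT = 10_000, 1_000  # requests per preset time window (assumed)

def triggered_strategy(request_count: int) -> str:
    """Map capacity index data to the strategy it triggers, if any."""
    if request_count > UPPER_LIMIT:
        return "expansion"   # create a new cluster/node, then migrate slots in
    if request_count < LOWER_LIMIT:
        return "reduction"   # drain the member to be deactivated, then delete it
    return "none"

def drain(slot_map: dict[int, str], retiring: str, enabled: list[str]) -> dict[int, str]:
    """Copy the slots of the member to be deactivated onto the enabled
    members (round-robin stands in for the preset partition rule); the
    returned map is the updated routing strategy with the source removed."""
    updated = dict(slot_map)
    moved = [s for s, owner in slot_map.items() if owner == retiring]
    for i, slot in enumerate(moved):
        updated[slot] = enabled[i % len(enabled)]  # copy, then the source is dropped
    return updated
```

A real control server would additionally copy the data stored under each migrated slot and push the updated strategies to the proxy server and clusters; the sketch covers only the routing-table side of the operation.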
It can be understood that the apparatus for implementing a memory database cluster can implement the steps of the method for implementing a memory database cluster executed by the control server in the foregoing embodiments, and the explanations of that method are also applicable to the apparatus, which are not repeated herein.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Referring to fig. 6, at the hardware level, the electronic device includes a processor, and optionally further includes an internal bus, a network interface, and a memory. The memory may include a volatile memory, such as a Random-Access Memory (RAM), and may further include a non-volatile memory, such as at least one disk storage. Of course, the electronic device may also include hardware required for other services.
The processor, the network interface, and the memory may be connected to each other via an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one double-headed arrow is shown in fig. 6, but this does not indicate that there is only one bus or one type of bus.
The memory is used for storing programs. In particular, a program may include program code comprising computer operating instructions. The memory may include both volatile memory and non-volatile storage, and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile storage into the memory and then runs it, forming the apparatus for implementing a memory database cluster at the logical level. The processor is used for executing the program stored in the memory, and is specifically used for executing the following operations:
generating a first routing strategy according to the data key and the attribute of the memory database cluster;
generating a second routing strategy of each memory database cluster according to the data key and the attribute of each memory database node in each memory database cluster;
the method comprises the steps of deploying a first routing strategy in a proxy server, deploying a corresponding second routing strategy in each memory database cluster, enabling the proxy server to route a request from an application to a target memory database cluster according to the first routing strategy, enabling the target memory database cluster to route the request to a corresponding memory database node according to the second routing strategy, and accordingly processing the request.
The method performed by the apparatus for implementing a memory database cluster according to the embodiment shown in fig. 5 of the present application may be applied to a processor, or implemented by a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; or a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed accordingly. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The electronic device may further execute the method executed by the apparatus for implementing a memory database cluster in fig. 5, and implement the function of the apparatus for implementing a memory database cluster in the embodiment shown in fig. 5, which is not described herein again in this embodiment of the present application.
An embodiment of the present application further provides a computer-readable storage medium storing one or more programs, where the one or more programs include instructions that, when executed by an electronic device including multiple application programs, enable the electronic device to perform the method performed by the apparatus for implementing a memory database cluster in the embodiment shown in fig. 5, and specifically to perform:
generating a first routing strategy according to the data key and the attribute of the memory database cluster;
generating a second routing strategy of each memory database cluster according to the data key and the attribute of each memory database node in each memory database cluster;
the method comprises the steps of deploying a first routing strategy in a proxy server, deploying a corresponding second routing strategy in each memory database cluster, enabling the proxy server to route a request from an application to a target memory database cluster according to the first routing strategy, enabling the target memory database cluster to route the request to a corresponding memory database node according to the second routing strategy, and accordingly processing the request.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, Phase-change RAM (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprise," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not only include those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of other like elements in the process, method, article, or apparatus comprising the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of the present application shall be included in the scope of the claims of the present application.

Claims (13)

1. A method for implementing a memory database cluster, wherein the method is executed by a control server, and comprises the following steps:
generating a first routing strategy according to the data key and the attribute of the memory database cluster;
generating a second routing strategy of each memory database cluster according to the data key and the attribute of each memory database node in each memory database cluster;
and deploying the first routing strategy in a proxy server, deploying a corresponding second routing strategy in each memory database cluster, enabling the proxy server to route a request from an application to a target memory database cluster according to the first routing strategy, and enabling the target memory database cluster to route the request to a corresponding memory database node according to the second routing strategy, thereby processing the request.
2. The method of claim 1, wherein the method further comprises: in a case where a capacity expansion and reduction condition is satisfied,
adjusting the number of memory database clusters enabled by the memory database cluster, and/or adjusting the number of memory database nodes of a memory database cluster enabled by the memory database cluster.
3. The method of claim 2, wherein the method further comprises:
acquiring first capacity expansion and reduction index data, wherein the first capacity expansion and reduction index data includes the number of requests received by the proxy server and/or the number of requests received by each memory database cluster within a preset historical time;
and if the first capacity expansion and reduction index data trigger a preset first capacity expansion and reduction strategy, determining that a first capacity expansion and reduction condition is met, and adjusting the number of memory database clusters started by the memory database cluster.
4. The method according to claim 3, wherein if the first capacity expansion and reduction index data is greater than a preset upper limit value of the first capacity expansion and reduction strategy, the first capacity expansion and reduction index data triggers a preset first capacity expansion strategy;
the adjusting the number of memory database clusters enabled by the memory database cluster comprises:
creating a new memory database cluster;
copying the slot to be migrated in each enabled memory database cluster and the corresponding data thereof to the new memory database cluster according to a first preset partition rule;
deleting the slot to be migrated and the corresponding data in each enabled memory database cluster;
and updating the first routing strategy and the second routing strategy according to the first preset partition rule.
5. The method of claim 3, wherein if the first capacity expansion and reduction index data is smaller than a preset lower limit value of the first capacity expansion and reduction strategy, the first capacity expansion and reduction index data triggers a preset first capacity reduction strategy;
the adjusting the number of memory database clusters enabled by the memory database cluster comprises:
copying the slot of the memory database cluster to be deactivated and the corresponding data thereof to other enabled memory database clusters according to a first preset partition rule;
deleting the slot in each memory database cluster to be deactivated and the corresponding data thereof;
deleting the memory database cluster to be deactivated;
and updating the first routing strategy and the second routing strategy according to the first preset partition rule.
6. The method of claim 2, wherein the method further comprises:
acquiring second capacity expansion and reduction index data, wherein the second capacity expansion and reduction index data comprise the number of requests received by each memory database node of each memory database cluster within preset time;
and if the second capacity expansion and reduction index data trigger a preset second capacity expansion and reduction strategy, determining that a second capacity expansion and reduction condition is met, and adjusting the number of memory database nodes of the memory database cluster started by the memory database cluster.
7. The method of claim 6, wherein if the second capacity expansion and reduction index data is greater than a preset upper limit value of the second capacity expansion and reduction strategy, the second capacity expansion and reduction index data triggers a preset second capacity expansion strategy;
the adjusting the number of memory database nodes of the memory database cluster enabled by the memory database cluster comprises:
creating a new memory database node;
copying a slot to be migrated and corresponding data in the memory database node of each memory database cluster to the new memory database node according to a second preset partition rule;
deleting the slot to be migrated and the corresponding data in the memory database node of each memory database cluster;
and updating the first routing strategy and the second routing strategy according to the second preset partition rule.
8. The method according to claim 6, wherein if the second capacity expansion and reduction index data is smaller than a preset lower limit value of the second capacity expansion and reduction strategy, the second capacity expansion and reduction index data triggers a preset second capacity reduction strategy;
the adjusting the number of memory database nodes of the memory database cluster enabled by the memory database cluster comprises:
copying the slots of the memory database nodes to be deactivated and the corresponding data thereof to other enabled memory database nodes according to a second preset partition rule;
deleting the slot in each memory database node to be deactivated and the corresponding data;
deleting the memory database nodes to be deactivated;
and updating the first routing strategy and the second routing strategy according to the second preset partition rule.
9. An apparatus for implementing a memory database cluster, wherein the apparatus is disposed in a control server, the apparatus comprising:
the first generating device is used for generating a first routing strategy according to the data key and the attribute of the memory database cluster;
a second generating device, configured to generate a second routing strategy for each memory database cluster according to the data key and the attribute of each memory database node in each memory database cluster;
and the deployment device is used for deploying the first routing strategy in the proxy server and deploying the corresponding second routing strategy in each memory database cluster, so that the proxy server can route the request from the application to the target memory database cluster according to the first routing strategy, and the target memory database cluster can route the request to the corresponding memory database node according to the second routing strategy, thereby processing the request.
10. A memory database cluster, comprising a proxy server and a plurality of memory database clusters, wherein the proxy server is in communication connection with each of the memory database clusters, and the proxy server is capable of being in communication connection with one or more applications;
a first routing strategy is deployed in the proxy server, and the first routing strategy is generated according to the data key and the attribute of the memory database cluster; the proxy server is used for routing the request from the application to a target memory database cluster according to the first routing strategy;
a second routing strategy is respectively deployed in each memory database cluster, and the second routing strategy is generated according to the data key and the attribute of each memory database node in each memory database cluster;
and the memory database cluster is used for routing the request to a corresponding memory database node according to a second routing strategy so as to process the request.
11. The memory database cluster according to claim 10, wherein said proxy server comprises one or more proxy nodes, each of said memory database clusters comprising one or more memory database nodes;
each agent node is in communication connection with any one memory database node in each memory database cluster;
the first routing strategy is deployed in each agent node; the agent node is used for routing a request from an application to a memory database node which is in communication connection with the agent node in a target memory database cluster according to a first routing strategy deployed in the agent node;
the second routing strategies are respectively deployed in the memory database nodes which are in communication connection with the agent nodes;
and the memory database node is used for routing the request to a corresponding memory database node according to the second routing strategy deployed in the memory database node, so as to process the request.
12. An electronic device, comprising:
a processor; and
a memory arranged to store computer executable instructions which, when executed, cause the processor to perform the method of any one of claims 1 to 8.
13. A computer readable storage medium storing one or more programs which, when executed by an electronic device comprising a plurality of application programs, cause the electronic device to perform the method of any of claims 1-8.
CN202111088670.XA 2021-09-16 2021-09-16 Memory database cluster and implementation method and device thereof Pending CN115827745A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111088670.XA CN115827745A (en) 2021-09-16 2021-09-16 Memory database cluster and implementation method and device thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111088670.XA CN115827745A (en) 2021-09-16 2021-09-16 Memory database cluster and implementation method and device thereof

Publications (1)

Publication Number Publication Date
CN115827745A true CN115827745A (en) 2023-03-21

Family

ID=85515104

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111088670.XA Pending CN115827745A (en) 2021-09-16 2021-09-16 Memory database cluster and implementation method and device thereof

Country Status (1)

Country Link
CN (1) CN115827745A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116471274A (en) * 2023-06-20 2023-07-21 深圳富联富桂精密工业有限公司 Database node deployment method and device, electronic equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination