CN114500576A - Distributed cache capacity expansion and reduction method, system, device and storage medium - Google Patents

Distributed cache capacity expansion and reduction method, system, device and storage medium

Info

Publication number
CN114500576A
Authority
CN
China
Prior art keywords
information
node server
reduction
cache
capacity expansion
Prior art date
Legal status
Pending
Application number
CN202111614757.6A
Other languages
Chinese (zh)
Inventor
吴林江
鄢智勇
陈文华
张力方
区锦荣
程僚
高磊琦
卢潭城
杨丰嘉
欧稳先
李嘉瑛
洪瀚思
Current Assignee
Tianyi Cloud Technology Co Ltd
Original Assignee
Tianyi Cloud Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Tianyi Cloud Technology Co Ltd filed Critical Tianyi Cloud Technology Co Ltd
Priority to CN202111614757.6A
Publication of CN114500576A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/1097: Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]


Abstract

The embodiments of the application provide a distributed cache capacity expansion and reduction method, system, device and storage medium, wherein the method comprises the following steps: after receiving capacity expansion and reduction information for a target memory, generating an adjusted second consistent hash map according to the first consistent hash map before adjustment and the capacity expansion and reduction information; obtaining the cache contents corresponding to the capacity expansion and reduction information, and determining, based on a preset rule, the target node server in the second consistent hash map to which those cache contents are to be moved; and performing capacity expansion and reduction processing on the distributed cache according to the capacity expansion and reduction information, and copying the cache contents corresponding to that information to the target node server. The scheme effectively avoids generating invalid data during the update, thereby avoiding heavy access to invalid content and the short-term bandwidth surge it would cause, ensuring service quality and a better user experience.

Description

Distributed cache capacity expansion and reduction method, system, device and storage medium
Technical Field
The application relates to a distributed cache capacity expansion and reduction method, system, device and storage medium, belonging to the technical field of networks.
Background
A content delivery network caches client content on cache node servers deployed at edge nodes and serves users in place of the client's origin servers, reducing access pressure on the origin. Internet scenarios typically involve large volumes of data and large numbers of users, so multiple cache node servers are needed to form a distributed store; because each cache node server has limited cache space, no single server can hold the full set of client resources. A method is therefore needed for each node server to cache different files, avoiding duplication while preserving the file access hit rate after node adjustment. The scheme commonly used in the industry is a consistent hashing algorithm: the mapping between a file and a cache node server is determined by hashing the file's url path, distributing the files across the cache node servers.
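As a concrete illustration of this industry scheme, the following is a minimal consistent-hash sketch in Python; the node names, the example url and the choice of MD5 are illustrative assumptions rather than details taken from the patent.

```python
# Minimal sketch of consistent hashing: node servers and file urls are
# hashed onto the same ring, and a file is cached on the first node at
# or after its hash position (wrapping around the ring).
import bisect
import hashlib

def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, nodes):
        self._points = sorted((_hash(n), n) for n in nodes)
        self._keys = [h for h, _ in self._points]

    def locate(self, url: str) -> str:
        """Return the cache node server responsible for this file url."""
        i = bisect.bisect(self._keys, _hash(url)) % len(self._points)
        return self._points[i][1]

ring = ConsistentHashRing(["node1", "node2", "node3"])
print(ring.locate("/videos/file1.mp4"))   # one of node1..node3
```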
However, when cache node servers are expanded or reduced, the consistent hash algorithm still loses part of the cached content; without manual intervention the cache must be downloaded again, and if the invalidated content receives heavy access, bandwidth surges in a short time and service quality suffers.
In summary, the prior art lacks a technical solution that avoids a short-term bandwidth surge when invalidated content receives heavy access, which results in poor service quality and affects user experience.
Disclosure of Invention
The application provides a distributed cache capacity expansion and reduction method, system, device and storage medium, to address the lack in the prior art of a technical solution that avoids a short-term bandwidth surge when invalidated content receives heavy access, a lack that results in poor service quality and affects user experience.
In a first aspect, a method for expanding and reducing capacity of a distributed cache is provided according to an embodiment of the present application, where the method includes:
after receiving capacity expansion and reduction information for a target memory, generating an adjusted second consistent hash map according to the first consistent hash map before adjustment and the capacity expansion and reduction information;
obtaining the cache contents corresponding to the capacity expansion and reduction information, and determining, based on a preset rule, the target node server in the second consistent hash map to which those cache contents are to be moved;
and performing capacity expansion and reduction processing on the distributed cache according to the capacity expansion and reduction information, and copying the cache contents corresponding to that information to the target node server.
In one embodiment, the method further comprises:
storing the cache contents corresponding to the capacity expansion and reduction information on a transit node server;
after performing capacity expansion and reduction processing on the distributed cache according to the capacity expansion and reduction information, taking the cache contents corresponding to that information out of the transit node server and storing them on the target node server; the transit node server is a node server that exists in the first consistent hash map and does not need to be adjusted.
In one embodiment, the method further comprises:
updating a pre-stored first hash distribution map, so that when user access information is received, the cache contents corresponding to the user access information are called based on the updated second hash distribution map;
the user access information is received request information for content on an access node server.
In an embodiment, the obtaining of the cache contents corresponding to the capacity expansion and reduction information and the determining, based on a preset rule, of the target node server to which those cache contents are to be moved include:
obtaining the cache contents stored on the node server corresponding to the capacity expansion and reduction information;
determining, based on a preset rule, the priority of the node servers in the second consistent hash map with respect to the capacity expansion and reduction information;
and taking the node server with the highest priority among the node servers needing no adjustment as the target node server to which the cache contents corresponding to the capacity expansion and reduction information are to be moved.
In one embodiment, if at least 2 node servers that need no adjustment have the same priority for the capacity expansion and reduction information, the corresponding cache contents are stored evenly, by file count, across the target node servers with the highest priority.
Further, if the files corresponding to the capacity expansion and reduction information cannot be stored evenly across the highest-priority target node servers, the files left over after the even division are stored in the lowest-numbered of those target node servers, according to the target node servers' numbers.
In a second aspect, a distributed cache capacity expansion and reduction system according to an embodiment of the present application is provided, including:
the consistent hash map generation module, used for generating, after receiving the capacity expansion and reduction information for the target memory, an adjusted second consistent hash map according to the first consistent hash map before adjustment and the capacity expansion and reduction information;
the target node server determination module, used for obtaining the cache contents corresponding to the capacity expansion and reduction information and determining, based on a preset rule, the target node server to which those cache contents are to be moved;
and the capacity expansion and reduction module, used for performing capacity expansion and reduction processing on the distributed cache according to the capacity expansion and reduction information and storing the cache contents corresponding to that information on the target node server determined according to the second consistent hash map.
In one embodiment, the system further comprises:
the transit storage module, used for storing the cache contents corresponding to the capacity expansion and reduction information on a transit node server;
the transit storage module is further used for taking the cache contents corresponding to the capacity expansion and reduction information out of the transit node server after the capacity expansion and reduction processing is performed on the distributed cache, and storing them on the target node server determined according to the second consistent hash map; the transit node server is a node server that exists in the first consistent hash map and does not need to be adjusted.
In a third aspect, a distributed cache capacity expansion and reduction apparatus is provided according to an embodiment of the present application; the apparatus includes a processor, a memory, and a computer program stored in the memory and executable on the processor, the computer program being loaded and executed by the processor to implement any of the above distributed cache capacity expansion and reduction methods.
In a fourth aspect, a computer-readable storage medium is provided according to an embodiment of the present application; a computer program is stored in the storage medium and, when executed by a processor, implements any of the above distributed cache capacity expansion and reduction methods.
The beneficial effect of this application lies in:
according to the distributed cache capacity expansion and reduction method and device provided by the embodiment of the application, after capacity expansion and reduction information aiming at a target memory is received, an adjusted second consistent Hash diagram is generated according to a first consistent Hash diagram before adjustment and the capacity expansion and reduction information; obtaining cache contents corresponding to the expansion and contraction capacity information, and determining a target node server in a second consistency Hash diagram to which the cache contents corresponding to the expansion and contraction capacity information are moved based on a preset rule; and carrying out capacity expansion and reduction processing on the distributed cache according to the capacity expansion and reduction information, and storing cache contents corresponding to the capacity expansion and reduction information into the target node server. In the scheme, before the capacity of the cache server is expanded, an adjusted second consistent Hash diagram is generated according to a first consistent Hash diagram and the capacity expansion and reduction information, then a target node server in the second consistent Hash diagram corresponding to the capacity expansion and reduction information is determined based on a preset rule according to the cache content corresponding to the capacity expansion and reduction information, after the determination, the cache content corresponding to the capacity expansion and reduction information is copied to the target node server, the original node server of the cache content corresponding to the capacity expansion and reduction information is not changed, after user access information is received before the user access information is completely updated, the content in the corresponding node server is pulled based on the first consistent Hash diagram, the generation of invalid data in the updating process is effectively avoided, therefore, the phenomenon of large access quantity of invalid content is effectively avoided, and the technical scheme of bandwidth leap in a short time can not be caused, ensuring poor quality of service and better user experience.
The foregoing is only an overview of the technical solutions of the present application. To make these solutions clearer and implementable according to the content of the description, the following gives a detailed description with reference to preferred embodiments of the present application and the accompanying drawings.
Drawings
Fig. 1, fig. 3, and fig. 4 are respectively flowcharts of a distributed cache capacity expansion and reduction method according to an embodiment of the present application;
FIG. 2a is a first consistent hash map provided in an embodiment of the present application;
fig. 2b and fig. 2c are second consistent hash maps determined after receiving respective pieces of capacity expansion and reduction information;
fig. 5 is a flowchart of sub-steps included in step S14 provided in an embodiment of the present application;
fig. 6 is a schematic diagram of a distributed cache capacity expansion and reduction system provided in an embodiment of the present application;
fig. 7 is a block diagram of a distributed cache capacity expansion and reduction apparatus according to an embodiment of the present application.
Detailed Description
The following examples are intended to illustrate the present application but are not intended to limit the scope of the present application.
According to the distributed cache capacity expansion and reduction method, system, device and storage medium of the application, the consistent hashing scheme of the existing CDN cache architecture is improved: content is preheated in advance before a predictable node server adjustment, which preserves the file access hit rate after the adjustment, reduces back-to-origin traffic, and safeguards service quality. Before the distributed cache is expanded or reduced, an adjusted consistent hash map is generated according to the received capacity expansion and reduction information; the cache contents whose storage locations will change, and the node servers that can hold those contents, are then determined, and the changed content is backed up in advance to existing node servers that need no adjustment. Next, after capacity expansion and reduction, the adjusted node servers migrate the backed-up content to its new storage locations. Finally, the hash distribution map is updated, and when a user accesses a node server, the cache contents are obtained from the corresponding node server according to the updated map.
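For orientation, the following is a compressed sketch of that workflow; every helper name here is hypothetical shorthand (none of them comes from the patent), and each step is expanded in the concrete examples further below.

```python
# Hedged, high-level sketch of the preheating workflow; all helper
# functions are hypothetical placeholders for the steps detailed below.
def preheat_and_scale(first_map, scaling_info, stores):
    second_map = build_second_map(first_map, scaling_info)    # adjusted consistent hash map
    moved = files_whose_owner_changes(first_map, second_map)  # cache contents that will move
    back_up_to_unchanged_nodes(moved, stores)                 # transit backup in advance
    resize_cluster(scaling_info, stores)                      # capacity expansion/reduction
    place_on_new_owners(moved, second_map, stores)            # migrate backups to new homes
    return second_map                                         # published for user lookups
```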
An embodiment of the present application provides a method for expanding and reducing a capacity of a distributed cache, as shown in fig. 1, the method includes:
step S12, after receiving the capacity expansion and reduction information for the target memory, generating an adjusted second consistent hash map according to the first consistent hash map before adjustment and the capacity expansion and reduction information;
In this embodiment of the present application, the target memory includes a plurality of node servers, each node server storing its corresponding files, and the received capacity expansion and reduction information generally adds a node server or deletes a node server. After the information is received, the first consistent hash map is updated according to the delete-node or add-node command it contains, forming the updated second consistent hash map.
Specifically, the first consistent hash map is shown in fig. 2a and includes node servers node1, node2 and node3. If the received capacity expansion and reduction information is "delete node server node2", the adjusted second consistent hash map determined from that information and the first map of fig. 2a is shown in fig. 2b; if instead the received information is "add node server node4", the adjusted second consistent hash map is shown in fig. 2c.
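Under the assumption that the capacity expansion and reduction information can be encoded as an (action, node) pair, deriving the second consistent hash map from the first is then a one-step rebuild; this sketch reuses the ConsistentHashRing class from the background section and mirrors the fig. 2b / fig. 2c examples.

```python
# Sketch: build the adjusted (second) consistent hash map from the
# first map's node set plus one piece of scaling information. The
# (action, node) tuple encoding is an assumption for illustration.
def apply_scaling(nodes, scaling_info):
    action, node = scaling_info
    new_nodes = set(nodes)
    if action == "delete":          # e.g. ("delete", "node2") -> fig. 2b
        new_nodes.discard(node)
    elif action == "add":           # e.g. ("add", "node4")    -> fig. 2c
        new_nodes.add(node)
    return ConsistentHashRing(sorted(new_nodes))

second_map = apply_scaling(["node1", "node2", "node3"], ("delete", "node2"))
```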
Step S14, obtaining the cache contents corresponding to the capacity expansion and reduction information, and determining, based on a preset rule, the target node server in the second consistent hash map to which those cache contents will be moved;
In this embodiment of the application, after the second consistent hash map is determined, the target node server to which the affected content is to be moved is determined based on a preset rule that operates on the second consistent hash map generated in step S12; the content corresponding to the capacity expansion and reduction information is generally identified by url information.
Step S16, performing capacity expansion and reduction processing on the distributed cache according to the capacity expansion and reduction information, and copying the cache contents corresponding to that information to the target node server. In this embodiment, until the second consistent hash map replaces the first, the node servers holding the affected content are not changed; during this period, when user request information is received, the node server holding the data/file requested by the user is looked up according to the first consistent hash map. After the second consistent hash map replaces the first, the node servers indicated by the capacity expansion and reduction information are added or deleted.
In this embodiment, performing capacity expansion and reduction processing according to the information means: if the information indicates that expansion is required, the distributed cache is expanded; conversely, if it indicates that reduction is required, the distributed cache is reduced.
In an embodiment of the present application, referring to fig. 3, the method further includes:
step S15, storing the cache contents corresponding to the capacity expansion and reduction information on a transit node server;
in step S16, performing the capacity expansion and reduction processing on the distributed cache according to the capacity expansion and reduction information and storing the corresponding cache contents on the target node server includes:
step S161, performing capacity expansion and reduction processing on the distributed cache according to the capacity expansion and reduction information;
step S162, taking the cache contents corresponding to the capacity expansion and reduction information out of the transit node server and copying them to the target node server;
wherein the transit node server is a node server that exists in the first consistent hash map and needs no adjustment.
In this embodiment of the present application, to ensure that no content is lost, the cache contents determined from the capacity expansion and reduction information may be stored on a transit node server; after the capacity expansion and reduction processing of the distributed cache completes, those cache contents are pulled to the target node server.
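A minimal sketch of this transit step, using plain dicts as stand-in node stores (an assumption; real nodes would be cache servers with network transfers): the affected content is backed up to an unchanged node before the resize, and only moved onto the target afterwards.

```python
# Back up affected cache contents to a transit node before resizing,
# then move them to the target node once the resize has completed.
def scale_with_transit(stores, affected, source, transit, target, resize):
    for f in affected:                              # 1. backup before resizing
        stores[transit][f] = stores[source][f]
    resize(stores)                                  # 2. capacity expansion/reduction
    for f in affected:                              # 3. place content at its new home
        stores[target][f] = stores[transit].pop(f)

stores = {"S1": {"file1": "...", "file6": "..."}, "S2": {}, "S3": {}}
scale_with_transit(stores, ["file1", "file6"], "S1", "S3", "S2",
                   lambda s: s.pop("S1"))           # reduction: remove S1
print(stores)   # {'S2': {'file1': '...', 'file6': '...'}, 'S3': {}}
```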
In an embodiment of the present application, referring to fig. 4, the method further includes:
step S17, updating the pre-stored first hash distribution map, so that when user access information is received, the cache contents corresponding to it are called based on the updated second hash distribution map;
the user access information is received request information for content on an access node server.
It should be pointed out that the ultimate purpose of the distributed cache remains, after user access information is received, to determine the cache contents on the node server corresponding to that information, and to push those cache contents to the user side. In this application, after capacity expansion and reduction processing is performed on the distributed cache, the pre-stored first hash distribution map is updated to form the second hash distribution map. After the current processing and before the next, when user access information is received, the node server holding the corresponding cache contents is determined according to the second hash distribution map, and the corresponding distributed cache contents are pushed to the client.
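A sketch of this cutover, reusing the ConsistentHashRing class from the background section; the module-level current_map variable and the swap function are assumptions about how the distribution map might be published.

```python
# Lookups keep using the first map until migration completes; the
# second map is then swapped in, after which requests route to the
# new owners that already hold the content.
current_map = ConsistentHashRing(["S1", "S2", "S3", "S4", "S5"])

def locate_for_request(url: str) -> str:
    # during migration this still routes to the old owner, so a request
    # never reaches a node that does not yet hold the content
    return current_map.locate(url)

def finish_migration(second_map) -> None:
    global current_map
    current_map = second_map   # swap only after all copies have completed
```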
In this embodiment of the present application, referring to fig. 5, in step S14, obtaining the cache contents corresponding to the capacity expansion and reduction information and determining, based on a preset rule, the target node server to which those cache contents are to be moved includes:
step S141, obtaining the cache contents stored on the node server corresponding to the capacity expansion and reduction information;
step S142, determining, based on a preset rule, the priority of the node servers in the second consistent hash map with respect to the capacity expansion and reduction information;
and step S143, taking the node server with the highest priority in the second consistent hash map as the target node server to which the cache contents corresponding to the capacity expansion and reduction information are to be moved.
The following illustrates a specific example.
Consider a distributed cache formed by five node servers S1, S2, S3, S4 and S5, whose stored cache contents are respectively:
node server S1 stores file1 and file6;
node server S2 stores file2 and file7;
node server S3 stores file3 and file8;
node server S4 stores file4 and file9;
node server S5 stores file5 and file10;
Now, if the obtained capacity expansion and reduction information is "delete node server S1", the cache contents stored on the corresponding node server are determined from that information to be file1 and file6. Then, the priorities of node servers S2, S3, S4 and S5 are determined, according to the preset rule, for the second consistent hash map composed of those four servers. If node server S2 has the highest priority, S2 is determined to be the target node server; at this point the hash ring used for user access is unchanged, so if user access information for file1 or file6 is received, the content is still obtained from node server S1.
After node server S2 is determined to be the node server with the highest priority, file1 and file6 on node server S1 are copied to node server S2, and the storage contents of the node servers in the distributed cache become:
S2:{file2,file7,file1,file6}
S3:{file3,file8}
S4:{file4,file9}
S5:{file5,file10}
After the contents of the node servers in the distributed cache have been updated, node S1 is deleted, and the originally stored first consistent hash map is replaced with the second consistent hash map.
In this embodiment of the application, if at least 2 node servers have the same priority for the capacity expansion and reduction information when the priorities of the node servers in the second consistent hash map are determined, the corresponding cache contents are stored evenly, by file count, across the highest-priority target servers.
For example, again for the capacity reduction information deleting node server S1, the priorities of node servers S2, S3, S4 and S5 are determined, according to the preset rule, for the second consistent hash map composed of those four servers. If node servers S2 and S3 share the same, highest priority, both S2 and S3 are determined to be target node servers; at this point the user-access hash ring is unchanged, so if user access information for file1 or file6 is received, the content is obtained from node server S1.
After node servers S2 and S3 are both determined to be node servers with the highest priority, file1 and file6 on node server S1 are copied evenly to node servers S2 and S3, and the storage contents of the node servers in the distributed cache become:
S2:{file2,file7,file1}
S3:{file3,file8,file6}
S4:{file4,file9}
S5:{file5,file10}
In this embodiment of the application, if the files corresponding to the capacity expansion and reduction information cannot be stored evenly across the highest-priority target node servers, the files left over after the even division are stored in the lowest-numbered of those target node servers, according to the target node servers' numbers (a code sketch of this rule follows the example below).
Similarly, still for the capacity reduction information deleting node server S1, the priorities of node servers S2, S3, S4 and S5 are determined, according to the preset rule, for the second consistent hash map composed of those four servers. If node servers S2, S3 and S4 share the same, highest priority, all three are determined to be target node servers; at this point the user-access hash ring is unchanged, so if user access information for file1 or file6 is received, the content is obtained from node server S1.
After node servers S2, S3 and S4 are all determined to be node servers with the highest priority, there are only 2 files to store but 3 determined target node servers. File1 and file6 on node server S1 are therefore copied evenly to node servers S2 and S3 according to the node servers' numbers; node server S4, whose number comes later, stores no data this time, and the storage contents of the node servers in the distributed cache become:
S2:{file2,file7,file1}
S3:{file3,file8,file6}
S4:{file4,file9}
S5:{file5,file10}
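A sketch of this tie-breaking rule: files are spread evenly, by file count, over the equal-priority targets, and any remainder goes to the lowest-numbered targets first, so with 2 files and 3 targets the last target receives nothing, matching the S2/S3/S4 example above.

```python
# Even distribution by file count, remainder to lower-numbered servers.
def distribute(files, targets):
    targets = sorted(targets)             # lower-numbered servers come first
    plan = {t: [] for t in targets}
    for i, f in enumerate(files):         # round-robin assignment
        plan[targets[i % len(targets)]].append(f)
    return plan

print(distribute(["file1", "file6"], ["S2", "S3", "S4"]))
# {'S2': ['file1'], 'S3': ['file6'], 'S4': []}
```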
The following further illustrates a specific example.
Assume capacity expansion information is received, for example adding node server S6 to the above scenario; node server S6 is already deployed but not yet formally online.
(1) Traverse all tables on redis to obtain file1 to file10.
(2) Perform the hash calculation over the 6 node servers (S1, S2, S3, S4, S5, S6); assume the calculation gives hash(file1) = S6 and hash(file7) = S6, with the other hash results unchanged.
Note that at this point the access hash ring has not changed; a user accessing file1 or file7 still reads from S1 or S2.
(3) S6 pulls file1 and file7 from S1 and S2 respectively, and the data tables are updated as:
S1:{file6}
S2:{file2}
S3:{file3,file8}
S4:{file4,file9}
S5:{file5,file10}
S6:{file1,file7}
(4) S6 formally goes online, and the first consistent hash map is replaced with the second consistent hash map; thereafter, accesses to file1 or file7 map directly to S6, and the files are correctly retrieved from S6.
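The expansion walk-through can be condensed as follows, reusing the ConsistentHashRing sketch from the background section. The text assumes hash(file1) and hash(file7) land on S6; with the illustrative MD5 placement the moved set may differ, so the sketch simply computes it rather than hard-coding it.

```python
# Compute which files change owner when S6 joins, so they can be
# pre-pulled onto their new owner before the new map goes live.
old = ConsistentHashRing(["S1", "S2", "S3", "S4", "S5"])
new = ConsistentHashRing(["S1", "S2", "S3", "S4", "S5", "S6"])

moved = {}
for f in ("file%d" % i for i in range(1, 11)):   # file1 .. file10
    src, dst = old.locate(f), new.locate(f)
    if src != dst:                               # owner changes under the new map
        moved[f] = (src, dst)                    # dst pre-pulls f from src
print(moved)
# only after these pulls finish is the second map swapped in, so a
# request for a moved file routes to its new owner once the copy exists
```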
To sum up, according to the distributed cache capacity expansion and reduction method and device provided by the embodiments of the present application, after capacity expansion and reduction information for a target memory is received, an adjusted second consistent hash map is generated according to the first consistent hash map before adjustment and the capacity expansion and reduction information; the cache contents corresponding to that information are obtained, and the target node server in the second consistent hash map to which they are to be moved is determined based on a preset rule; capacity expansion and reduction processing is then performed on the distributed cache according to the information, and the cache contents are stored on the target node server. In this scheme, before the cache node server is expanded or reduced, the adjusted second consistent hash map is first generated; the target node server is then determined based on the preset rule, and the affected cache contents are copied to it while their original node server is left unchanged. Until the update fully completes, user access is still served by pulling content from the node servers given by the first consistent hash map, so no invalid data is generated during the update. This effectively avoids heavy access to invalid content and the short-term bandwidth surge it would cause, avoiding poor service quality and ensuring a better user experience.
Fig. 6 is a block diagram of a distributed cache capacity expansion and reduction system according to an embodiment of the present application; the system includes at least the following modules:
a consistent hash map generation module 61, configured to generate, after receiving the capacity expansion and reduction information for the target memory, an adjusted second consistent hash map according to the first consistent hash map before adjustment and the capacity expansion and reduction information;
a target node server determination module 62, configured to obtain the cache contents corresponding to the capacity expansion and reduction information, and to determine, based on a preset rule, the target node server to which those cache contents will move;
and a capacity expansion and reduction module 63, configured to perform capacity expansion and reduction processing on the distributed cache according to the capacity expansion and reduction information, and to copy the cache contents corresponding to that information to the target node server determined according to the second consistent hash map.
In one embodiment, the distributed cache scale and reduction system further includes:
a transit storage module, configured to store the cache contents corresponding to the capacity expansion and reduction information on a transit node server;
the transit storage module is further configured to take the cache contents corresponding to the capacity expansion and reduction information out of the transit node server after the capacity expansion and reduction processing is performed on the distributed cache, and to store them on the target node server determined according to the second consistent hash map; the transit node server is a node server that exists in the first consistent hash map and does not need to be adjusted.
The distributed cache capacity expansion and reduction system provided by this embodiment can perform the distributed cache capacity expansion and reduction method of the above embodiments; its implementation principle and technical effects are similar, and for the relevant details reference is made to the method embodiments above, which are not repeated here.
It should be noted that the distributed cache capacity expansion and reduction method and system provided by the above embodiments belong to the same concept; their specific implementation processes are detailed in the method embodiments and are not described here again.
Fig. 7 is a block diagram of a distributed cache capacity expansion and reduction apparatus according to an embodiment of the present disclosure. The apparatus may be a desktop computer, a notebook computer, a palmtop computer, a cloud node server or another computing device, and may include, but is not limited to, a processor and a memory. The apparatus in this embodiment includes at least a processor and a memory, the memory storing a computer program that can run on the processor; when the processor executes the computer program, the steps of the distributed cache capacity expansion and reduction method embodiments are implemented, for example the steps of the method shown in any one of fig. 1, fig. 3 and fig. 4. Alternatively, when the processor executes the computer program, the functions of the modules in the system embodiment are realized.
Illustratively, the computer program may be partitioned into one or more modules that are stored in the memory and executed by the processor to implement the invention. The one or more modules may be a series of computer program instruction segments capable of performing specific functions, the instruction segments describing the execution of the computer program in the apparatus. For example, the computer program may be divided into the following modules, whose specific functions are:
a consistent hash map generation module, configured to generate, after receiving the capacity expansion and reduction information for the target memory, an adjusted second consistent hash map according to the first consistent hash map before adjustment and the capacity expansion and reduction information;
a target node server determination module, configured to obtain the cache contents corresponding to the capacity expansion and reduction information and to determine, based on a preset rule, the target node server to which those cache contents are to be moved;
and a capacity expansion and reduction module, configured to perform capacity expansion and reduction processing on the distributed cache according to the capacity expansion and reduction information, and to store the cache contents corresponding to that information on the target node server determined according to the second consistent hash map.
The processor may include one or more processing cores, for example a 4-core or 6-core processor. The processor may be implemented in at least one of the hardware forms DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array) and PLA (Programmable Logic Array). The processor may also include a main processor and a coprocessor: the main processor processes data in the awake state and is also called the Central Processing Unit (CPU); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor may further include an AI (Artificial Intelligence) processor for computing operations related to machine learning. The processor is the control center of the distributed cache capacity expansion and reduction apparatus, connecting all parts of the whole apparatus through various interfaces and lines.
The memory may be configured to store the computer program and/or the modules, and the processor implements the various functions of the distributed cache capacity expansion and reduction apparatus by running or executing the computer program and/or modules stored in the memory and calling the data stored in the memory. The memory may mainly include a program storage area and a data storage area: the program storage area may store the operating system, the application programs required for at least one function, and the like; the data storage area may store data created according to use of the device (such as audio data or a phonebook). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card, at least one magnetic disk storage device, or another non-volatile solid-state storage device.
It is understood by those skilled in the art that the apparatus described in this embodiment is only an example of a distributed cache scaling apparatus, and does not form a limitation to the distributed cache scaling apparatus, and in other embodiments, more or fewer components may be included, or some components may be combined, or different components may be included, for example, the distributed cache scaling apparatus may further include an input/output device, a network access device, a bus, and the like. The processor, memory and peripheral interface may be connected by bus or signal lines. Each peripheral may be connected to the peripheral interface by a bus, signal line, or circuit board. Illustratively, peripheral devices include, but are not limited to: radio frequency circuit, touch display screen, audio circuit, power supply, etc.
Of course, the distributed cache scaling apparatus may also include fewer or more components, which is not limited in this embodiment.
Optionally, the present application further provides a computer-readable storage medium, which stores a computer program, and the computer program is used for implementing the steps of the distributed cache scaling method when being executed by a processor.
Optionally, the present application further provides a computer product, which includes a computer-readable storage medium, where a program is stored in the computer-readable storage medium, and the program is loaded and executed by a processor to implement the steps of the above-mentioned embodiment of the distributed cache scaling method.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above embodiments express only several implementations of the present application, and while their description is relatively specific and detailed, they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A distributed cache capacity expansion and reduction method is characterized by comprising the following steps:
after receiving capacity expansion and reduction information for a target memory, generating an adjusted second consistent hash map according to the first consistent hash map before adjustment and the capacity expansion and reduction information;
obtaining the cache contents corresponding to the capacity expansion and reduction information, and determining, based on a preset rule, the target node server in the second consistent hash map to which those cache contents are to be moved;
and performing capacity expansion and reduction processing on the distributed cache according to the capacity expansion and reduction information, and copying the cache contents corresponding to that information to the target node server.
2. The distributed cache capacity expansion and reduction method according to claim 1, further comprising:
storing the cache contents corresponding to the capacity expansion and reduction information on a transit node server;
after performing capacity expansion and reduction processing on the distributed cache according to the capacity expansion and reduction information, taking the cache contents corresponding to that information out of the transit node server and storing them on the target node server; the transit node server is a node server that exists in the first consistent hash map and does not need to be adjusted.
3. The distributed cache capacity expansion and reduction method according to claim 1 or 2, further comprising:
updating a pre-stored first hash distribution map, so that when user access information is received, the cache contents corresponding to the user access information are called based on the updated second hash distribution map;
the user access information is received request information for content on an access node server.
4. The distributed cache capacity expansion and reduction method according to claim 1 or 2, wherein the obtaining of the cache contents corresponding to the capacity expansion and reduction information and the determining, based on a preset rule, of the target node server to which those cache contents are to be moved include:
obtaining the cache contents stored on the node server corresponding to the capacity expansion and reduction information;
determining, based on a preset rule, the priority of the node servers in the second consistent hash map with respect to the capacity expansion and reduction information;
and taking the node server with the highest priority among the node servers needing no adjustment as the target node server to which the cache contents corresponding to the capacity expansion and reduction information are to be moved.
5. The distributed cache capacity expansion and reduction method according to claim 4, wherein, if at least 2 node servers that need no adjustment have the same priority for the capacity expansion and reduction information, the corresponding cache contents are stored evenly, by file count, across the target node servers with the highest priority.
6. The distributed cache capacity expansion and reduction method according to claim 5, wherein, if the files corresponding to the capacity expansion and reduction information cannot be stored evenly across the highest-priority target node servers, the files left over after the even division are stored in the lowest-numbered of those target node servers, according to the target node servers' numbers.
7. A distributed cache capacity expansion and reduction system is characterized by comprising:
a consistent hash map generation module, configured to generate, after receiving capacity expansion and reduction information for a target memory, an adjusted second consistent hash map according to the first consistent hash map before adjustment and the capacity expansion and reduction information;
a target node server determination module, configured to obtain the cache contents corresponding to the capacity expansion and reduction information and to determine, based on a preset rule, the target node server to which those cache contents are to be moved;
and a capacity expansion and reduction module, configured to perform capacity expansion and reduction processing on the distributed cache according to the capacity expansion and reduction information, and to copy the cache contents corresponding to that information to the target node server determined according to the second consistent hash map.
8. The distributed cache scaling system of claim 7, further comprising:
a transit storage module, configured to store the cache contents corresponding to the capacity expansion and reduction information on a transit node server;
the transit storage module is further configured to take the cache contents corresponding to the capacity expansion and reduction information out of the transit node server after the capacity expansion and reduction processing is performed on the distributed cache, and to store them on the target node server determined according to the second consistent hash map; the transit node server is a node server that exists in the first consistent hash map and does not need to be adjusted.
9. A distributed cache scaling apparatus comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program is loaded and executed by the processor to implement the distributed cache scaling method according to any one of claims 1 to 6.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, is adapted to carry out the distributed cache scaling method according to any one of claims 1 to 6.
CN202111614757.6A 2021-12-27 2021-12-27 Distributed cache capacity expansion and reduction method, system, device and storage medium Pending CN114500576A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111614757.6A CN114500576A (en) 2021-12-27 2021-12-27 Distributed cache capacity expansion and reduction method, system, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111614757.6A CN114500576A (en) 2021-12-27 2021-12-27 Distributed cache capacity expansion and reduction method, system, device and storage medium

Publications (1)

Publication Number Publication Date
CN114500576A (en) 2022-05-13

Family

ID=81496159

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111614757.6A Pending CN114500576A (en) 2021-12-27 2021-12-27 Distributed cache capacity expansion and reduction method, system, device and storage medium

Country Status (1)

Country Link
CN (1) CN114500576A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105744001A (en) * 2016-04-11 2016-07-06 青岛海信传媒网络技术有限公司 Distributed Caching System Expanding Method, Data Access Method, and Device and System of the Same
CN109683826A (en) * 2018-12-26 2019-04-26 北京百度网讯科技有限公司 Expansion method and device for distributed memory system
CN110874384A (en) * 2018-09-03 2020-03-10 阿里巴巴集团控股有限公司 Database cluster capacity expansion method, device and system
CN112511634A (en) * 2020-12-02 2021-03-16 北京邮电大学 Data acquisition method and device, electronic equipment and storage medium
CN113239011A (en) * 2021-05-11 2021-08-10 京东数字科技控股股份有限公司 Database capacity expansion method, device and system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination