CN111917853A - Optimization method for distributed cache scaling of content distribution network - Google Patents
Optimization method for distributed cache scaling of content distribution network
- Publication number
- CN111917853A (application CN202010724041.0A)
- Authority
- CN
- China
- Prior art keywords
- cache
- hash
- cache server
- server
- client
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/13—File access structures, e.g. distributed indices
- G06F16/137—Hash-based
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/17—Details of further file system functions
- G06F16/172—Caching, prefetching or hoarding of files
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/18—File system types
- G06F16/182—Distributed file systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
- H04L67/63—Routing a service request depending on the request content or context
Abstract
The invention provides an optimization method for distributed cache scaling (expansion and contraction) of a content distribution network, which preserves the validity of the existing cache as far as possible when a distributed cache server cluster expands, contracts, or adjusts weights. The method comprises the following steps: S1, a client is set up and sends a file request; S2, a scheduling service is used; S3, a cache server consistent-hash distribution map is generated; S4, a hash value of the file path requested by the client is generated; S5, the position of that hash value in the distribution map is located; S6, the cache server to use is determined from the position of the hash value; S7, whether the requested file hits the cache on that server is judged; if yes, step S8 is executed, otherwise step S9; S8, the client's file request is answered; S9, the file is fetched back from the origin (source station) and cached locally on the cache server, and then step S8 is executed.
Description
Technical Field
The invention relates to an optimization method for distributed cache scaling (expansion and contraction) of a content distribution network, and belongs to the technical field of networks.
Background
A cache server typically has several characteristics: user access traffic is large; back-to-source (origin-pull) bandwidth is limited; and cache space is limited, so full storage of all resources cannot be guaranteed. Based on these characteristics, a cache server stores only hot resources. The more content the cache servers hold, the more likely a client's file request is to hit the cache, which reduces traffic back to the origin (source station). When the number and storage capacity of the cache servers are fixed, caching as many hot files as possible becomes the key to improving cache utilization. The best approach is to have each server in the cache server cluster cache different files, avoiding duplicates. There are generally two ways to avoid duplication: first, building an index dictionary that maps files to cache servers; second, determining the file-to-server relationship from a hash of the file path. The index dictionary of the former consumes considerable time and computation on every query. The invention distributes files across the cache servers by hashing the file path, thereby building a distributed cache with no duplicate files. However, when such a distributed cache is scaled with a traditional hash distribution, files are redistributed and a large fraction of the cached files in the servers become invalid. Once the cache is largely invalidated, back-to-source bandwidth spikes in a short time and the service capability of the content distribution network drops.
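Why traditional hash distribution invalidates caches on scaling can be shown with a small simulation; the modulo-based scheme and the file paths below are illustrative assumptions, not taken from the invention:

```python
import hashlib

def server_for(path: str, server_count: int) -> int:
    # Traditional hash distribution: hash the file path and take the
    # result modulo the number of cache servers.
    digest = hashlib.md5(path.encode()).hexdigest()
    return int(digest, 16) % server_count

# Simulate expanding a cluster from 3 to 4 cache servers.
paths = [f"/video/{i}.mp4" for i in range(10_000)]
moved = sum(server_for(p, 3) != server_for(p, 4) for p in paths)
print(f"{moved / len(paths):.0%} of cached files change servers")
```

With `hash % N`, roughly three quarters of the files land on a different server after this change, so most of the cache is invalidated at once; this is the failure mode the consistent-hash scheme described below is designed to avoid.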
Disclosure of Invention
The invention aims to provide an optimization method for distributed cache scaling of a content distribution network that preserves the validity of the existing cache as far as possible when a distributed cache server cluster expands, contracts, or adjusts weights.
In order to achieve the purpose, the invention is realized by the following technical scheme:
a method for optimizing distributed cache scaling for a content distribution network, comprising the steps of:
S1, a client is set up and sends a file request;
S2, a scheduling service is used;
S3, a cache server consistent-hash distribution map is generated;
S4, a hash value of the file path requested by the client is generated;
S5, the position of that hash value in the distribution map is located;
S6, the cache server to use is determined from the position of the hash value;
S7, whether the requested file hits the cache on that server is judged; if yes, step S8 is executed, otherwise step S9;
S8, the client's file request is answered;
S9, the file is fetched back from the origin (source station) and cached locally on the cache server, and then step S8 is executed.
On the basis of the above optimization method for distributed cache scaling of a content distribution network, the cache server consistent-hash distribution map is generated through the following steps:
s301, setting a plurality of virtual node names for a cache server;
S302, calculating a 32-bit hash value for each server virtual node, using the following algorithm (32-bit FNV-1a):
hash = 32-bit FNV offset basis (initial value)
for each byte of the string:
    hash = (hash XOR byte) * 32-bit FNV prime
return hash
The value produced by the hash algorithm is an integer in the range 0 to 4,294,967,295 (2^32 - 1);
S303, distributing the integer values along a line segment of length 2 to the power of 32, forming the server hash distribution map.
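The pseudocode above is the standard 32-bit FNV-1a algorithm; a direct Python rendering (an illustrative sketch) is:

```python
FNV_OFFSET_BASIS_32 = 0x811C9DC5  # 32-bit FNV hash initial value
FNV_PRIME_32 = 0x01000193         # 32-bit FNV hash prime

def fnv1a_32(data: bytes) -> int:
    """32-bit FNV-1a: XOR in each byte, then multiply by the FNV prime."""
    h = FNV_OFFSET_BASIS_32
    for byte in data:
        h = ((h ^ byte) * FNV_PRIME_32) & 0xFFFFFFFF  # keep 32 bits
    return h

# Published FNV-1a test vectors
assert fnv1a_32(b"") == 0x811C9DC5
assert fnv1a_32(b"a") == 0xE40C292C
# Every result is an integer in 0 .. 4,294,967,295 (2**32 - 1)
assert 0 <= fnv1a_32(b"s1n1") <= 2**32 - 1
```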
On the basis of the optimization method for distributed cache scaling of the content distribution network, the scheduling service uses Keepalived or NLB software.
On the basis of the optimization method for distributed cache scaling of the content distribution network, the back-to-source caching function of the cache server is realized by Nginx software acting as reverse proxy and cache.
The invention has the advantages that: consistent hashing is used to distribute the cache files, so that after the distributed cache server cluster expands or contracts, or after the weight ratio among the cache servers is adjusted, most of the cache remains valid, thereby optimizing the back-to-source bandwidth of the cache servers and the service efficiency of the cache disks.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention.
FIG. 1 is a schematic diagram of a cache server consistent hash distribution according to the present invention.
Fig. 1-1 is a schematic diagram of the initial distribution states of the cache servers s1, s2, and s3.
Fig. 1-2 are schematic diagrams of the distribution status of the cache server s3 after removal.
Fig. 1-3 are schematic diagrams illustrating distribution states after a cache server s4 is added.
Fig. 1-4 are schematic diagrams of the distribution status of the cache server s3 after doubling the weight.
Fig. 2 is a flowchart of the operation of the distributed cache server.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to optimize the distributed cache in a content distribution network and preserve the validity of the existing cache as much as possible during expansion, contraction, and weight adjustment, the invention provides an optimization method for distributed cache scaling of a content distribution network, comprising the following steps:
S1, a client is set up and sends a file request;
S2, a scheduling service is used in an edge-node cache server of the content distribution network; the scheduling service may be an independent server or a service shared on a cache server; it uses Keepalived, NLB, or similar software for high availability, but does not need a session-sharing function; several scheduling services can be established in a distributed cache server cluster, with load balancing among them provided by software such as LVS;
S3, a cache server consistent-hash distribution map is generated;
S4, a hash value of the file path requested by the client is generated;
S5, the position of that hash value in the distribution map is located;
S6, the cache server to use is determined from the position of the hash value;
S7, whether the requested file hits the cache on that server is judged; if yes, step S8 is executed, otherwise step S9;
S8, the client's file request is answered;
S9, the file is fetched back from the origin (source station) and cached locally on the cache server, and then step S8 is executed.
The method for generating the cache server consistency hash distribution diagram comprises the following steps:
s301, setting a plurality of virtual node names for a cache server;
S302, calculating a 32-bit hash value for each server virtual node, using the following algorithm (32-bit FNV-1a):
hash = 32-bit FNV offset basis (initial value)
for each byte of the string:
    hash = (hash XOR byte) * 32-bit FNV prime
return hash
The value produced by the hash algorithm is an integer in the range 0 to 4,294,967,295 (i.e., 2^32 - 1);
S303, distributing the integer values along a line segment of length 2 to the power of 32, forming the server hash distribution map.
DETAILED DESCRIPTION OF EMBODIMENT(S) OF THE INVENTION
Taking fig. 1 as an example, assume there are three cache servers in the distributed cluster (denoted s1, s2, s3), each assigned 4 virtual nodes (denoted n1, n2, n3, n4), for 12 virtual nodes in total (named s1n1, s1n2, s1n3, s1n4, s2n1, s2n2, s2n3, s2n4, s3n1, s3n2, s3n3, s3n4). Hashing the 12 virtual node names with 32-bit FNV-1a yields 12 integers, all in the range 0-4,294,967,295. As shown in fig. 1-1, the line segment spans 0-4,294,967,295, and the integers of the 12 virtual nodes are placed at the corresponding positions on it. Each integer point extends right to the next integer point (the leftmost 0-to-s1n3 segment merges into the s1n3-to-s2n2 segment), forming one closed segment each, which finally yields a hash distribution map containing 12 segments.
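The construction just described can be sketched in Python. The virtual-node naming (e.g. "s1n1") follows the example above, while the predecessor-segment lookup is our assumed reading of the "extends right" rule:

```python
import bisect

def fnv1a_32(data: bytes) -> int:
    h = 0x811C9DC5  # 32-bit FNV offset basis
    for b in data:
        h = ((h ^ b) * 0x01000193) & 0xFFFFFFFF  # 32-bit FNV prime
    return h

def build_ring(servers, vnodes=4):
    # One (hash, server) point per virtual node name ("s1n1" .. "s3n4"),
    # sorted along the 0 .. 2**32 - 1 line segment.
    return sorted((fnv1a_32(f"{s}n{i}".encode()), s)
                  for s in servers for i in range(1, vnodes + 1))

def lookup(ring, path):
    # A file belongs to the nearest virtual node point at or before its
    # hash ("each integer point extends right to the next integer point");
    # index -1 wraps the leftmost region around to the last node.
    points = [p for p, _ in ring]
    i = bisect.bisect_right(points, fnv1a_32(path.encode())) - 1
    return ring[i][1]

ring = build_ring(["s1", "s2", "s3"])          # 3 servers x 4 virtual nodes
owner = lookup(ring, "/video/movie.mp4")       # hypothetical request path
```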
When the scheduling system of the cache server cluster allocates the cache file, an integer is generated for a path of the target file by using the 32-bit FNV-1a hash calculation, and then a line segment where the target file is located is found according to the generated cache server hash distribution diagram, so that the cache server which needs to cache the target file is located.
Based on the above example, how the scheduling service maintains the consistency of the original cache distribution as much as possible when the distributed cache server cluster has the conditions of capacity reduction, capacity expansion and weight adjustment is explained respectively next.
On the basis of the distribution of fig. 1-1, consider cluster contraction: assume cache server 3 is removed. As shown in fig. 1-2, the 4 virtual node integers originally belonging to cache server 3 (s3n1, s3n2, s3n3, s3n4) are all removed; they are marked in gray in fig. 1-2 as -s3n1, -s3n2, -s3n3, -s3n4. After each remaining integer point again extends right to the next integer point, some of the virtual nodes of the original servers 1 and 2 extend to cover the positions previously held by cache server 3 (the changed content is the gray part of fig. 1-2). After this adjustment, the distribution positions of the former cache server 3 are divided between cache servers 1 and 2. The result is that the original cache files on servers 1 and 2 remain valid, and those servers also take over part of cache server 3's files. Therefore, in a contraction scenario, the validity of the existing cache files is preserved to the greatest extent while load balancing is maintained.
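The contraction property (files whose owner is not the removed server keep their owner) can be verified with a self-contained sketch; the helpers are repeated so the example runs on its own, and the segment-ownership convention is an assumed reading of fig. 1-2:

```python
import bisect

def fnv1a_32(data: bytes) -> int:
    h = 0x811C9DC5  # 32-bit FNV offset basis
    for b in data:
        h = ((h ^ b) * 0x01000193) & 0xFFFFFFFF  # 32-bit FNV prime
    return h

def build_ring(servers, vnodes=4):
    return sorted((fnv1a_32(f"{s}n{i}".encode()), s)
                  for s in servers for i in range(1, vnodes + 1))

def lookup(ring, path):
    points = [p for p, _ in ring]
    i = bisect.bisect_right(points, fnv1a_32(path.encode())) - 1
    return ring[i][1]  # index -1 wraps around to the last node

full = build_ring(["s1", "s2", "s3"])
shrunk = build_ring(["s1", "s2"])  # cache server 3 removed
paths = [f"/file/{i}" for i in range(5_000)]
for p in paths:
    if lookup(full, p) != "s3":
        # Files cached on s1 and s2 are untouched by removing s3;
        # only s3's former files are redistributed.
        assert lookup(shrunk, p) == lookup(full, p)
```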
On the basis of the distribution of fig. 1-1, consider cluster expansion: assume a new cache server 4 is added. As shown in figs. 1-3, 4 virtual node integers (s4n1, s4n2, s4n3, s4n4) are generated for server 4 in the range 0-4,294,967,295; they are marked in gray in figs. 1-3 as +s4n1, +s4n2, +s4n3, +s4n4. After each integer point again extends right to the next integer point, parts of the original servers' segments are taken over by the virtual nodes of the new server 4 (the changed content is the gray part of figs. 1-3). After this adjustment, the segments of the new server 4 occupy a portion of the distribution segments of the original three cache servers. As a result, only a portion of the cache files on the original servers becomes invalid, and those files are taken over by the newly added cache server 4. Therefore, in an expansion scenario, the invalidation rate of the existing cache files is minimized while load balancing is achieved.
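The expansion property (a file either keeps its owner or moves to the new server, and only a fraction moves) can likewise be checked with a self-contained sketch under the same assumed ownership convention:

```python
import bisect

def fnv1a_32(data: bytes) -> int:
    h = 0x811C9DC5  # 32-bit FNV offset basis
    for b in data:
        h = ((h ^ b) * 0x01000193) & 0xFFFFFFFF  # 32-bit FNV prime
    return h

def build_ring(servers, vnodes=4):
    return sorted((fnv1a_32(f"{s}n{i}".encode()), s)
                  for s in servers for i in range(1, vnodes + 1))

def lookup(ring, path):
    points = [p for p, _ in ring]
    i = bisect.bisect_right(points, fnv1a_32(path.encode())) - 1
    return ring[i][1]  # index -1 wraps around to the last node

before = build_ring(["s1", "s2", "s3"])
after = build_ring(["s1", "s2", "s3", "s4"])  # cache server 4 added
paths = [f"/file/{i}" for i in range(5_000)]
moved = 0
for p in paths:
    old, new = lookup(before, p), lookup(after, p)
    if old != new:
        assert new == "s4"  # invalidated files move only to the new server
        moved += 1
# Only the share of the line segment captured by s4's virtual nodes is
# invalidated, far from the near-total invalidation of modulo hashing.
assert moved / len(paths) < 0.7
```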
On the basis of the distribution of fig. 1-1, consider adjusting the load weights: assume the weight of cache server 3 is doubled, raising its virtual node count from 4 to 8. As shown in figs. 1-4, 4 additional virtual node integers (s3n5, s3n6, s3n7, s3n8) are generated for server 3 in the range 0-4,294,967,295; they are marked in gray in figs. 1-4 as +s3n5, +s3n6, +s3n7, +s3n8. After each integer point again extends right to the next integer point, parts of the original servers' segments are taken over by the added virtual nodes of server 3 (the changed content is the gray part of figs. 1-4). After this adjustment, the up-weighted server 3 occupies a portion of the original servers' distribution segments. As a result, a portion of the cache files originally on servers 1 and 2 is reassigned to cache server 3. Therefore, when adjusting load weights, the invalidation rate of the existing cache files is minimized while load balancing is achieved.
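The weight-adjustment case behaves the same way: files that move land only on the up-weighted server. A self-contained sketch under the same assumed conventions:

```python
import bisect

def fnv1a_32(data: bytes) -> int:
    h = 0x811C9DC5  # 32-bit FNV offset basis
    for b in data:
        h = ((h ^ b) * 0x01000193) & 0xFFFFFFFF  # 32-bit FNV prime
    return h

def build_ring(servers, vnodes=4):
    return sorted((fnv1a_32(f"{s}n{i}".encode()), s)
                  for s in servers for i in range(1, vnodes + 1))

def lookup(ring, path):
    points = [p for p, _ in ring]
    i = bisect.bisect_right(points, fnv1a_32(path.encode())) - 1
    return ring[i][1]  # index -1 wraps around to the last node

base = build_ring(["s1", "s2", "s3"])
# Double s3's weight: its virtual nodes grow from 4 (s3n1..s3n4)
# to 8 (adding s3n5..s3n8).
extra = sorted(base + [(fnv1a_32(f"s3n{i}".encode()), "s3")
                       for i in range(5, 9)])
paths = [f"/file/{i}" for i in range(5_000)]
for p in paths:
    if lookup(base, p) != lookup(extra, p):
        assert lookup(extra, p) == "s3"  # reassigned files land only on s3
```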
The present invention is described in detail with reference to fig. 2.
1. A highly available scheduling service must first be established; this can be realized with Keepalived, NLB, or similar software, with load balancing among the scheduling servers.
2. When a client uses the cache server cluster, all of its requests pass through the scheduling server. The scheduling server generates the appropriate number of virtual nodes for each cache server according to the number and weights of the servers in the cluster, computes an integer hash value for each node with the 32-bit FNV-1a hash algorithm, and finally distributes these integer values along a line segment of length 2 to the power of 32 to form the server hash distribution map.
3. Based on the hash distribution map, the scheduling server computes the integer hash value of the file path requested by the client with the 32-bit FNV-1a hash algorithm, then locates that value in the cache server hash distribution map, thereby determining which cache server should serve the requested file.
4. The cache server must complete a series of operations, such as checking whether the cached file exists, fetching from the origin when necessary, and caching the fetched file, before it can provide download service to the client. The back-to-source and caching functions of the cache server can be handled by high-performance software such as Nginx.
When the method of the invention is adopted, the consistent hash distribution among the servers ensures that most cached files are not invalidated when the distributed cache server cluster expands, contracts, or adjusts weights.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (4)
1. A method for optimizing distributed cache scaling of a content distribution network, characterized by comprising the following steps:
S1, a client is set up and sends a file request;
S2, a scheduling service is used;
S3, a cache server consistent-hash distribution map is generated;
S4, a hash value of the file path requested by the client is generated;
S5, the position of that hash value in the distribution map is located;
S6, the cache server to use is determined from the position of the hash value;
S7, whether the requested file hits the cache on that server is judged; if yes, step S8 is executed, otherwise step S9;
S8, the client's file request is answered;
S9, the file is fetched back from the origin (source station) and cached locally on the cache server, and then step S8 is executed.
2. The method for optimizing distributed cache expansion capacity for a content distribution network according to claim 1, wherein the method for generating the cache server consistent hash distribution map is as follows:
s301, setting a plurality of virtual node names for a cache server;
S302, calculating a 32-bit hash value for each server virtual node, using the following algorithm (32-bit FNV-1a):
hash = 32-bit FNV offset basis (initial value)
for each byte of the string:
    hash = (hash XOR byte) * 32-bit FNV prime
return hash
The value produced by the hash algorithm is an integer in the range 0 to 4,294,967,295 (2^32 - 1);
S303, distributing the integer values along a line segment of length 2 to the power of 32, forming the server hash distribution map.
3. The optimization method for distributed cache scaling for content distribution networks according to claim 1, wherein the scheduling service uses Keepalived or NLB software.
4. The optimization method for distributed cache scaling for content distribution networks according to claim 1, 2 or 3, characterized in that: the back-to-source caching function of the cache server is realized by Nginx software acting as reverse proxy and cache.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010724041.0A CN111917853A (en) | 2020-07-24 | 2020-07-24 | Optimization method for distributed cache scaling of content distribution network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010724041.0A CN111917853A (en) | 2020-07-24 | 2020-07-24 | Optimization method for distributed cache scaling of content distribution network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111917853A true CN111917853A (en) | 2020-11-10 |
Family
ID=73280796
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010724041.0A Pending CN111917853A (en) | 2020-07-24 | 2020-07-24 | Optimization method for distributed cache scaling of content distribution network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111917853A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107197035A (en) * | 2017-06-21 | 2017-09-22 | 中国民航大学 | A kind of compatibility dynamic load balancing method based on uniformity hash algorithm |
CN108124012A (en) * | 2017-12-21 | 2018-06-05 | 中通服公众信息产业股份有限公司 | A kind of distributed caching computational methods based on hash algorithm |
CN111177154A (en) * | 2019-12-27 | 2020-05-19 | 掌迅亿通(北京)信息科技有限公司 | Distributed database caching method and hash ring optimization thereof |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112650451A (en) * | 2020-12-28 | 2021-04-13 | 杭州趣链科技有限公司 | Optimization method and device for searching network server, computer equipment and storage medium |
CN113852643A (en) * | 2021-10-21 | 2021-12-28 | 西安电子科技大学 | Content distribution network cache pollution defense method based on content popularity |
CN113852643B (en) * | 2021-10-21 | 2023-11-14 | 西安电子科技大学 | Content distribution network cache pollution defense method based on content popularity |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20201110 |