CN111930316B - Cache read-write system and method for content distribution network - Google Patents
- Publication number
- CN111930316B (application CN202010937835.5A)
- Authority
- CN
- China
- Prior art keywords
- cache
- data
- hard disk
- storage
- memory
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/13—File access structures, e.g. distributed indices
- G06F16/137—Hash-based
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/16—File or folder operations, e.g. details of user interfaces specifically adapted to file systems
- G06F16/162—Delete operations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/955—Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
- G06F16/9566—URL specific, e.g. using aliases, detecting broken or misspelled links
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0656—Data buffering arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- Human Computer Interaction (AREA)
- Data Mining & Analysis (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses a cache read-write system for a content distribution network, applied to edge nodes of a CDN and comprising a storage unit, a cache read-write unit, a memory storage processing unit, a hard disk storage processing unit, an abc file cache system, and a data migration unit.
Description
Technical Field
The present invention relates to the field of content distribution, and in particular, to a caching system and method for a content distribution network.
Background
In existing content distribution technology, cached data is mostly written into some directory under the current Linux file system. This approach has the following problems:
1. it depends on the storage file format of current Linux file systems (such as ext2, ext3, and ext4), whose tree-shaped directory structure is not well suited to cache storage and management;
2. files are stored in a single disk partition (such as a logical partition), so the complete disk space cannot be used, which greatly increases disk-space costs later on;
3. write efficiency for large files is too low, because data is written to a single disk and a single file cannot be stored across multiple locations;
4. under highly concurrent reads, disk I/O pressure is huge, slowing disk read and write speeds.
Disclosure of Invention
It is an object of the present invention to provide a multi-level caching solution for a content distribution network, thereby solving the above-mentioned problems of the prior art.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
a cache read-write system for a content delivery network, applied to an edge node of a CDN, characterized by comprising: a storage unit, a cache read-write unit, a memory storage processing unit, a hard disk storage processing unit, an abc file cache system, and a data migration unit, wherein: the storage unit comprises a memory and a hard disk group; the cache read-write unit receives read-write requests from the outside; the memory storage processing unit processes storage resources using an LRU algorithm to determine the priority of resource deletion; the hard disk storage processing unit maps the hard disks onto a HASH ring and processes storage resources using a consistent HASH algorithm, so that each HASH value is mapped to a hard disk on the ring, realizing reads and writes of hard disk content; the abc file cache system caches large object data by splitting it into a number of small fragments, each fragment being assigned a HASH key value; and the data migration unit handles data migration between the storage units according to a migration policy.
Preferably, the hard disk group comprises a solid state hard disk group and a mechanical hard disk group.
Preferably, the migration policy depends on the value of the resource and the priority of the storage unit, where the value of a resource is determined by its access frequency, its size, and the storage medium size, and the priority of a storage unit is determined by its read-write speed and storage space.
The invention also discloses a cache read-write method of the content distribution network, which comprises the following steps:
s1: receiving an HTTP resource request sent by a content request end;
s2: reading whether a cache of the HTTP resource exists on a local node;
s3: if the HTTP resource cache does not exist, the HTTP resource request is forwarded to other network nodes, and HTTP resource data are waited to be returned from other network nodes;
s4: and receiving response data from other network nodes, writing the response data into a local cache, and simultaneously returning the response data to the content request terminal.
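Steps S1 to S4 describe a standard read-through cache. The sketch below illustrates that flow under stated assumptions: `fetch_from_origin` and the plain dict cache are illustrative stand-ins for forwarding to other network nodes and for the local storage unit, not part of the patented implementation.

```python
def handle_request(url, local_cache, fetch_from_origin):
    """Serve an HTTP resource: probe the local cache first (S2);
    on a miss, fetch from another node (S3) and write back (S4)."""
    data = local_cache.get(url)      # S2: check local node cache
    if data is not None:
        return data, "hit"
    data = fetch_from_origin(url)    # S3: forward request to other nodes
    local_cache[url] = data          # S4: write response into local cache
    return data, "miss"              # ...and return it to the requester
```

A second request for the same URL is then served from the local cache without contacting other nodes.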
Preferably, in step S2, the method for reading the local node cache includes:
s21: using the URL address of the HTTP resource as a cache key value to obtain cached metadata, wherein the HTTP resource is stored as a plurality of data segments, and the metadata comprises the cache key values of all the data segments;
s22: obtaining cache contents of each data segment according to the cache key values of the obtained data segments in the metadata;
s23: and assembling according to the obtained content of each data fragment, obtaining HTTP resources, and finishing reading.
Preferably, in step S22, the obtaining, according to the cache key value of the obtained data segment in the metadata, the cache content of each data segment includes:
s221: reading cache data from the memory: if the reading is successful, modifying the updating time of the cache, and exiting;
s222: reading the cache data from the solid state disk group: the method specifically comprises the following steps:
s2221: according to a consistent HASH algorithm, finding the solid state disk where the cache key value is located from the solid state disk group, and finding specific index data according to an index table of the cache key value in the abc file cache system;
s2222: finding the offset position of the cache data storage according to the specific index data, and acquiring the cache;
s2223: if the cache is successfully obtained from the solid state disk group, migrating the cache data to the memory according to the preset migration policy, checking the memory capacity, and, if the maximum memory capacity is exceeded, deleting the least recently updated cache data;
s2224: if the cache acquisition in the solid state hard disk group fails, acquiring cache data from the mechanical hard disk group;
s223: obtaining cache data from a mechanical hard disk group: the method specifically comprises the following steps:
s2231: similarly, according to the consistent HASH algorithm, finding the mechanical hard disk where the key value is located from the mechanical hard disk group, and finding the specific index data according to the key value in the index table of the abc file cache system;
s2232: finding the offset position of the cache data storage according to the specific index data, and acquiring the cache;
s2233: and if the cache is successfully acquired in the mechanical hard disk group, migrating the cache data to the solid state hard disk according to a preset migration strategy.
Preferably, in step S4, the method for writing into the cache includes:
s41: dividing response data of the HTTP resource request into a plurality of data segments, generating a cache key value for each data segment, and writing the data segments into a cache according to the cache key values;
s42: and after all the fragments are written, writing metadata, wherein the metadata comprises all cache key values.
Preferably, in S41, the method for writing into the cache according to the cache key value includes:
judging the size of the written cache, and determining a written target cache module;
according to the consistent HASH algorithm, finding from the storage unit the cache module to which the key value should be written, obtaining the current write offset of that cache module from the header of the abc file cache system, and writing the cache data starting from that offset;
and generating index data according to the offset of the written cache data, and writing the index data into an index table.
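The write steps above amount to an append-style log with an index table. The following minimal sketch shows that idea; the `bytearray` standing in for a raw disk area and the field layout are assumptions for illustration, not the invention's actual on-disk format.

```python
class CacheModule:
    """Append cache data at the module's current offset and record
    (offset, length) in an index table keyed by the cache key value."""
    def __init__(self):
        self.disk = bytearray()   # stand-in for the raw storage area
        self.index = {}           # key -> (offset, length) index table

    def write(self, key, data):
        offset = len(self.disk)                # current write offset
        self.disk.extend(data)                 # write data at that offset
        self.index[key] = (offset, len(data))  # generate the index entry
        return offset

    def read(self, key):
        offset, length = self.index[key]       # index lookup
        return bytes(self.disk[offset:offset + length])
```

Reads never scan the storage area; they go straight to the recorded offset.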
Preferably, determining the target cache module according to the size of the cache to be written is specifically: if the cache to be written is smaller than 1M, write it to the solid state disk group; otherwise, write it to the mechanical hard disk group.
The invention also discloses an electronic device comprising a processor and a memory, the memory storing an executable program which, when run on a computer, causes the computer to execute the method of any of the above embodiments.
The invention has the following beneficial effects. It adopts a three-level cache technique that makes effective use of the maximum storage space of a single machine. A HASH ring is formed from multiple hard disk groups, which removes the performance bottleneck of a single hard disk. The consistent HASH algorithm distributes data evenly across the hard disks, averaging the load on each disk, and supports dynamic expansion: when disk pressure increases, the problem can be solved by adding hard disks. In this way, operation and maintenance costs can be effectively reduced.
In addition, the invention uses fragment caching: when caching a large object, it uses the disk directly, without partitioning the hard disk or installing a traditional file system. The full disk space can therefore be used, avoiding the space wasted by partitioning and file system installation and saving operating costs.
In order that the invention may be more clearly and fully understood, specific embodiments thereof are described in detail below with reference to the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a block diagram showing a cache read-write system of a content distribution network in one embodiment.
Fig. 2 is a schematic diagram illustrating a cache read-write flow of the content distribution network in an embodiment.
FIG. 3 is a flow diagram that illustrates reading a local node cache in one embodiment.
Fig. 4 is a schematic flowchart illustrating a process of obtaining cache contents of each data segment in an embodiment.
Detailed Description
Referring to fig. 1, fig. 1 is a block diagram illustrating a cache read-write system of a content distribution network in an embodiment, where the system includes a storage unit 10, a cache read-write unit 11, a memory storage processing unit 12, a hard disk storage processing unit 13, an abc file cache system 14, and a data migration unit 15, where:
the storage unit 10 includes a memory 101 and a hard disk set 102; disk pack 102 includes a solid state disk pack and a mechanical disk pack.
The cache read-write unit 11: and receiving a read-write request from the outside.
The memory storage processing unit 12: processes storage resources using the LRU algorithm to determine the priority of resource deletion. Because of the particularity of memory as a storage medium, it cannot be expanded and its storage space is limited. Since resources on other storage media are migrated into memory, when the memory limit is exceeded some resources must be evicted (deleted), and the LRU algorithm determines which resources are deleted first.
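The LRU eviction described above can be sketched as follows. This is a minimal illustration of the policy only; the capacity is counted in entries rather than bytes, and the class name is an assumption, not part of the invention.

```python
from collections import OrderedDict

class LRUMemoryCache:
    """Sketch of the memory storage processing unit: when capacity is
    exceeded, the least recently used resource is deleted first."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # ordered from least to most recently used

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)  # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
```

Accessing an entry refreshes it, so only resources that go unused fall off the cold end.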
Hard disk storage processing unit 13: the method is used for mapping the hard disk into the HASH ring, and processing the storage resource by using a consistent HASH algorithm, so that each HASH value is mapped onto the hard disk in the HASH ring, and the reading and writing of the hard disk content are realized.
The main problem with a conventional HASH table is that adding or deleting a slot requires remapping all keys, whereas the consistent HASH algorithm only needs to remap K/n keys, where K is the number of keys and n is the number of slots.
For example, in one embodiment there are 10 solid state disks, grouped together and mapped to different positions on a virtual ring. The virtual ring is called the HASH ring, and the position of each solid state disk on the ring is called a slot. The system reads the solid state disk group as follows:
1) first, HASH the cache key value; each HASH value maps to a fixed position on the HASH ring;
2) search along the ring in one direction until the slot of some solid state disk is met;
3) that solid state disk is where the cache should be accessed.
When a solid state disk fails and goes offline, its keys are evenly redistributed to the other solid state disk nodes; the failure does not cause a full HASH rearrangement, and thus does not trigger a cache avalanche.
When a solid state disk needs to be added, that is, a slot is added to the HASH ring, the positions of the disks on the ring are finely adjusted so that resources remain evenly distributed, again without a HASH rearrangement.
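The ring lookup in steps 1) to 3) can be sketched as below. The disk names and the use of MD5 as the HASH function are illustrative assumptions; the patent does not specify a particular hash.

```python
import bisect
import hashlib

class HashRing:
    """Consistent-HASH ring sketch: each disk occupies a slot on the
    ring; a cache key maps to the first slot found clockwise."""
    def __init__(self, disks):
        self.slots = sorted((self._hash(d), d) for d in disks)
        self._points = [p for p, _ in self.slots]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def locate(self, key):
        h = self._hash(key)                                   # step 1: hash the key
        i = bisect.bisect_right(self._points, h)              # step 2: walk clockwise
        return self.slots[i % len(self.slots)][1]             # step 3: wrap to first slot
```

Removing a disk only remaps the keys that were on that disk, which is exactly the K/n remapping property noted above.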
abc file caching system 14: caches large object data by splitting it into a number of small fragments, each assigned a HASH key value. The abc file cache system 14 reads and writes individual cache fragments rather than the large object as a whole, so fragments can be read and written in parallel, greatly improving read-write speed for large files. The hard disk does not need to be partitioned and no traditional file system needs to be installed, so the full disk space can be used, avoiding the space wasted by partitioning and file system installation and saving operating costs.
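The fragment scheme can be sketched as follows. The fragment size, the key derivation (MD5 of "url#index"), and the metadata layout are assumptions for illustration; the patent only states that each fragment receives a HASH key and that metadata lists all fragment keys.

```python
import hashlib

SEGMENT_SIZE = 4  # bytes; tiny for illustration, real fragments would be far larger

def split_object(url, data, segment_size=SEGMENT_SIZE):
    """Split a large object into fragments, each with its own HASH key,
    and build the metadata holding all fragment keys in order."""
    segments, order = {}, []
    for i in range(0, len(data), segment_size):
        key = hashlib.md5(("%s#%d" % (url, i // segment_size)).encode()).hexdigest()
        segments[key] = data[i:i + segment_size]
        order.append(key)
    metadata = {"url": url, "segment_keys": order}  # written after all fragments
    return metadata, segments

def assemble(metadata, segments):
    """Reassemble the object from its fragments (the read path)."""
    return b"".join(segments[k] for k in metadata["segment_keys"])
```

Because each fragment is an independent key, fragments can land on different disks of the HASH ring and be fetched in parallel.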
The data migration unit 15: and the data migration module is used for processing data migration among the storage units according to the migration strategy.
In this embodiment there are different storage units (i.e., storage media) with different read-write speeds: memory is much faster than a solid state disk, which in turn is much faster than a mechanical hard disk, while memory space is limited and not as cheap as mechanical disk space. More valuable data should therefore be placed on faster storage media and less valuable data on slower media, which requires migrating data between the various storage media.
Many factors determine a resource's value, such as its user access frequency, its size, and the storage medium size. For example, if a resource is accessed frequently, migrating it to a higher-priority storage medium is worthwhile because it is likely to be accessed again; likewise, small cached files cause frequent disk seeks and benefit from faster media.
The migration policy depends on the value of the resource and the priority of the storage unit: the value of a resource is determined by its access frequency, its size, and the storage medium size, and the priority of a storage unit is determined by its read-write speed and storage space.
Different resources hit different migration policies according to the target storage medium and the resource size. For example, there are two policies for migration to memory: one for resources smaller than 512k and one for resources from 512k to 2M. Each policy is configured with a different access frequency, and a resource is only allowed to migrate to the target medium after that frequency is reached.
In this embodiment, the specific migration method is:
1) when a resource hits the cache, obtain the storage medium where the cache resides;
2) if the resource is on a solid state disk, check, according to the resource size, whether a policy for migrating to memory exists;
3) if such a policy exists, judge whether the resource's access frequency has reached the frequency preset by the policy;
4) if it has, migrate the resource to memory;
5) if the resource is on a mechanical hard disk, check, according to the resource size, whether a policy for migrating to memory or to a solid state disk exists;
6) if such a policy exists, judge whether the access frequency has reached the frequency preset by the policy for the target storage medium;
7) if it has, migrate the resource to the corresponding storage medium (if both the migrate-to-memory and migrate-to-SSD policies match, the migrate-to-memory policy is preferred).
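The decision in steps 1) to 7) can be sketched as a lookup over a policy table. The size bands and frequency thresholds below are invented for illustration; the patent only requires that each policy pair a size range with a required access frequency, and that memory be preferred when both targets match.

```python
POLICIES = [
    # (target, min_size, max_size, required_access_count); memory listed
    # first so it wins when both a memory and an SSD policy match (step 7)
    ("memory", 0,          512 * 1024,        5),
    ("memory", 512 * 1024, 2 * 1024 * 1024,  10),
    ("ssd",    0,          64 * 1024 * 1024,  3),
]

def choose_target(current_medium, size, access_count):
    """On a cache hit, return the medium to migrate to, or None."""
    allowed = {"ssd": ["memory"], "hdd": ["memory", "ssd"]}.get(current_medium, [])
    for target, lo, hi, freq in POLICIES:
        if target in allowed and lo <= size < hi and access_count >= freq:
            return target
    return None
```

Resources already in memory have no allowed targets, so they are never migrated downward by this routine; demotion is handled by LRU eviction instead.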
Referring to fig. 2, fig. 2 is a schematic diagram illustrating a cache read-write flow of a content distribution network in an embodiment, which specifically includes:
s1: receiving an HTTP resource request sent by a content request end;
s2: reading whether a cache of the HTTP resource exists on a local node;
s3: if the cache of the HTTP resource does not exist, forwarding the HTTP resource request to other network nodes and waiting for the resource data to be returned;
s4: and receiving response data from other network nodes, writing the response data into a local cache, and simultaneously returning the response data to the content request terminal.
Referring to fig. 3, a schematic flow chart of reading the local node cache in an embodiment specifically includes:
s21: using the URL address of the HTTP resource as a cache key value to obtain cached metadata, wherein the HTTP resource is stored as a plurality of data segments, and the metadata comprises the cache key values of all the data segments;
s22: obtaining cache contents of each data segment according to the cache key values of the obtained data segments in the metadata;
s23: and assembling according to the obtained content of each data fragment, obtaining HTTP resources, and finishing reading.
Referring to fig. 4, a schematic flow chart of obtaining the cache content of each data segment in an embodiment specifically includes:
s221: reading cache data from the memory: if the reading is successful, modifying the updating time of the cache, and exiting;
s222: reading cache data from the solid state hard disk group;
s223: cache data is obtained from the mechanical hard disk group.
In this embodiment, the step of reading the cached data from the solid state disk group comprises:
s2221: according to a consistent HASH algorithm, finding the solid state disk where the cache key value is located from the solid state disk group, and finding specific index data according to an index table of the cache key value in the abc file cache system;
s2222: finding the offset position of the cache data storage according to the specific index data, and acquiring the cache;
s2223: if the cache is successfully obtained from the solid state disk group, migrating the cache data to the memory according to the preset migration policy, checking the memory capacity, and, if the maximum memory capacity is exceeded, deleting the least recently updated cache data;
s2224: if the cache acquisition in the solid state hard disk group fails, cache data is acquired from the mechanical hard disk group.
In this embodiment, the step of obtaining the cached data from the mechanical hard disk group comprises:
s2231: similarly, according to the consistent HASH algorithm, finding the mechanical hard disk where the key value is located from the mechanical hard disk group, and finding the specific index data according to the key value in the index table of the abc file cache system;
s2232: finding the offset position of the cache data storage according to the specific index data, and acquiring the cache;
s2233: and if the cache is successfully acquired in the mechanical hard disk group, migrating the cache data to the solid state hard disk according to a preset migration strategy.
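The three-level fallback in S221 to S223 can be sketched as below, with plain dicts standing in for the three storage media; this is a minimal illustration of the lookup-and-promote order, not the actual index and offset machinery.

```python
def tiered_read(key, memory, ssd, hdd):
    """Probe memory, then the solid state group, then the mechanical
    group; promote SSD hits to memory and HDD hits to SSD, as in the
    migration steps above. Returns (data, tier_hit) or (None, None)."""
    if key in memory:                # S221: read from memory first
        return memory[key], "memory"
    if key in ssd:                   # S222: read from solid state disk group
        memory[key] = ssd[key]       # S2223: migrate hit upward to memory
        return ssd[key], "ssd"
    if key in hdd:                   # S223: read from mechanical hard disk group
        ssd[key] = hdd[key]          # S2233: migrate hit upward to SSD
        return hdd[key], "hdd"
    return None, None                # full miss on the local node
```

Repeated reads of the same key climb the tiers: the first hit comes from the mechanical disk, the second from the SSD, and later ones from memory.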
An embodiment of the present application further provides an electronic device, where the electronic device includes a processor and a memory, where the memory stores an executable program, and when the executable program runs on a computer, the computer executes the method and the system described in any of the above embodiments.
It should be noted that, all or part of the steps in the methods of the above embodiments may be implemented by hardware related to instructions of a computer program, which may be stored in a computer-readable storage medium, which may include, but is not limited to: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. A cache read-write system of a content delivery network is applied to an edge node of a CDN network, and is characterized by comprising: the device comprises a storage unit, a cache read-write unit, a memory storage processing unit, a hard disk storage processing unit, an abc file cache system and a data migration unit, wherein:
a storage unit: comprises a memory and a hard disk group;
a cache read-write unit: receiving a read-write request from the outside;
a memory storage processing unit: processing the storage resources by using an LRU algorithm, and determining the priority of resource deletion;
hard disk storage processing unit: used for mapping the hard disks onto a HASH ring and processing storage resources with a consistent HASH algorithm, so that each HASH value is mapped to a hard disk on the ring, realizing reads and writes of hard disk content;
the abc file caching system comprises: the abc file cache system does not need to process a hard disk, namely, does not need to partition the hard disk and install a file system, and caches large object data by splitting the large object data into a plurality of small fragment data, wherein each small fragment data is allocated with a HASH key value;
a data migration unit: used for handling data migration between the storage units according to the migration policy.
2. The system of claim 1, wherein the disk pack comprises a solid state disk pack, a mechanical disk pack.
3. The system of claim 1, wherein the hard disk groups comprise solid state hard disk groups and mechanical hard disk groups, the migration policy comprises determining a value of a resource and a priority of a storage unit, the determining factors of the value of the resource comprise an access frequency of the resource, a resource size, a storage medium size, and the determining factors of the priority of the storage unit comprise a read-write speed and a storage space of the storage unit.
4. A cache read-write method for a content distribution network, applied to the cache read-write unit of the system according to any one of claims 1 to 3, the method comprising:
s1: receiving an HTTP resource request sent by a content request end;
s2: reading whether a cache of the HTTP resource exists on a local node;
s3: if the HTTP resource cache does not exist, the HTTP resource request is forwarded to other network nodes, and HTTP resource data are waited to be returned from other network nodes;
s4: and receiving response data from other network nodes, writing the response data into a local cache, and simultaneously returning the response data to the content request terminal.
5. The method as claimed in claim 4, wherein in step S2, the method for reading the local node cache comprises:
s21: using the URL address of the HTTP resource as a cache key value to obtain cached metadata, wherein the HTTP resource is stored into a plurality of data segments, each data segment is distributed with the cache key value, and the metadata comprises the cache key values of all the data segments;
s22: obtaining cache contents of each data segment according to the cache key values of the obtained data segments in the metadata;
s23: and assembling according to the obtained content of each data fragment, obtaining HTTP resources, and finishing reading.
6. The method as claimed in claim 5, wherein the step S22, according to the cache key value of the obtained data segment in the metadata, the method for obtaining the cache content of each data segment includes:
s221: reading cache data from the memory: if the reading is successful, modifying the updating time of the cache, and exiting;
s222: reading the cache data from the solid state disk group: the method specifically comprises the following steps:
s2221: according to a consistent HASH algorithm, finding the solid state disk where the cache key value is located from the solid state disk group, and finding specific index data according to an index table of the cache key value in the abc file cache system;
s2222: finding the offset position of the cache data storage according to the specific index data, and acquiring the cache;
s2223: if the cache is successfully obtained from the solid state disk group, migrating the cache data to the memory according to the preset migration policy, checking the memory capacity, and, if the maximum memory capacity is exceeded, deleting the least recently updated cache data;
s2224: if the cache acquisition in the solid state hard disk group fails, acquiring cache data from the mechanical hard disk group;
s223: obtaining cache data from a mechanical hard disk group: the method specifically comprises the following steps:
s2231: similarly, according to the consistent HASH algorithm, finding the mechanical hard disk where the key value is located from the mechanical hard disk group, and finding the specific index data according to the key value in the index table of the abc file cache system;
s2232: finding the offset position of the cache data storage according to the specific index data, and acquiring the cache;
s2233: and if the cache is successfully acquired in the mechanical hard disk group, migrating the cache data to the solid state hard disk according to a preset migration strategy.
7. The method as claimed in claim 4, wherein in step S4, the method for writing the cache comprises:
S41: dividing the response data of the HTTP resource request into a plurality of data segments, generating a cache key value for each data segment, and writing each data segment into the cache according to its cache key value;
S42: after all data segments are written, writing the metadata, wherein the metadata comprises all the cache key values.
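Not part of the claims: a minimal Python sketch of claim 7's write path, segmenting a response body, caching each segment under its own key, and writing a metadata record listing all segment keys last. The 1 MB segment size, the SHA-1 key derivation, and the flat dict cache are illustrative assumptions; the patent does not specify how key values are generated.

```python
import hashlib

SEGMENT_SIZE = 1 << 20  # assumed 1 MB segments

def write_response(url: str, body: bytes, cache: dict) -> str:
    """S41/S42: segment the response, cache each segment under its own
    cache key value, then write metadata holding all segment keys."""
    segment_keys = []
    for off in range(0, len(body), SEGMENT_SIZE):
        seg = body[off:off + SEGMENT_SIZE]
        # S41: one cache key value per data segment (derivation assumed)
        key = hashlib.sha1(f"{url}:{off}".encode()).hexdigest()
        cache[key] = seg
        segment_keys.append(key)
    # S42: metadata is written only after every segment is in place, so a
    # reader that finds the metadata can rely on all segments existing
    meta_key = hashlib.sha1(url.encode()).hexdigest()
    cache[meta_key] = segment_keys
    return meta_key
```

Writing the metadata last gives a simple consistency guarantee: a concurrent reader either sees no metadata (miss) or metadata whose listed segments are all already cached.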
8. The method as claimed in claim 7, wherein in step S41, writing into the cache according to the cache key value comprises:
judging the size of the data to be written, and determining the target cache module to write to;
according to a consistent HASH algorithm, finding from the storage unit the cache module into which the key value is to be written, finding from the header of the abc file cache system the offset position at which the current cache module writes, and writing the cache data starting from that offset position;
and generating index data according to the offset of the written cache data, and writing the index data into the index table.
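Not part of the claims: the per-module write of claim 8 (take the next write offset from the file header, append the data there, then record an index entry mapping the key to that offset) can be sketched as follows. The in-memory bytearray stands in for the on-disk abc cache file, and the header is reduced to a single write-offset field; disk selection by consistent hashing is omitted.

```python
class CacheModule:
    """One cache module's abc-style cache file: an append-only data
    region, a header tracking the next write offset, and an index
    table mapping key -> (offset, length)."""
    def __init__(self):
        self.data = bytearray()
        self.write_offset = 0   # kept in the file header in practice
        self.index = {}         # key -> (offset, length)

    def write(self, key: str, payload: bytes) -> None:
        off = self.write_offset                  # offset read from the header
        self.data[off:off + len(payload)] = payload
        self.write_offset = off + len(payload)   # advance the header offset
        self.index[key] = (off, len(payload))    # generate the index entry

    def read(self, key: str) -> bytes:
        off, length = self.index[key]
        return bytes(self.data[off:off + length])
```

The index entry is written only after the payload, mirroring the claim's ordering: a key is visible in the index table only once its data is fully in place at the recorded offset.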
9. The method as claimed in claim 8, wherein determining the target cache module according to the size of the data to be written specifically comprises: if the data to be written is smaller than 1 MB, writing it to the solid state disk group; otherwise, writing it to the mechanical hard disk group.
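Not part of the claims: claim 9's tier choice is a one-line size test, sketched below. The function name and the string return values are illustrative; only the 1 MB threshold and the SSD/HDD split come from the claim.

```python
SIZE_THRESHOLD = 1 << 20  # 1 MB, per claim 9

def target_group(payload: bytes) -> str:
    """Route small objects to the solid state disk group and large
    objects to the mechanical hard disk group."""
    return "ssd" if len(payload) < SIZE_THRESHOLD else "hdd"
```

This split plays to each medium's strength: SSDs absorb the many small random writes cheaply, while large objects are written and read back sequentially, which mechanical disks handle well.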
10. An electronic device comprising a processor and a memory, the memory having stored therein an executable program, wherein when the executable program is executed by the processor, the method of any one of claims 4 to 9 is performed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010937835.5A CN111930316B (en) | 2020-09-09 | 2020-09-09 | Cache read-write system and method for content distribution network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111930316A CN111930316A (en) | 2020-11-13 |
CN111930316B true CN111930316B (en) | 2021-04-20 |
Family
ID=73310113
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010937835.5A Active CN111930316B (en) | 2020-09-09 | 2020-09-09 | Cache read-write system and method for content distribution network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111930316B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112468583B (en) * | 2020-11-26 | 2023-09-15 | 福建天泉教育科技有限公司 | Information processing method and terminal of API gateway |
CN112506434A (en) * | 2020-12-11 | 2021-03-16 | 杭州安恒信息技术股份有限公司 | Method and related device for reading and writing data in web micro-service cluster |
CN112650720B (en) * | 2020-12-18 | 2024-07-12 | 深圳市佳创视讯技术股份有限公司 | Cache system management method and device and computer readable storage medium |
CN112764681B (en) * | 2021-01-21 | 2024-02-13 | 上海七牛信息技术有限公司 | Cache elimination method and device with weight judgment and computer equipment |
CN112699154B (en) * | 2021-03-25 | 2021-06-18 | 上海洋漪信息技术有限公司 | Multi-level caching method for large-flow data |
CN113220233A (en) * | 2021-05-14 | 2021-08-06 | 北京百度网讯科技有限公司 | Data reading method, device and system |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106293490A (en) * | 2015-05-12 | 2017-01-04 | 中兴通讯股份有限公司 | Data storage, the method read, Apparatus and system |
CN106790324B (en) * | 2015-11-20 | 2020-06-16 | 华为技术有限公司 | Content distribution method, virtual server management method, cloud platform and system |
CN106230971B (en) * | 2016-08-29 | 2019-03-01 | 无锡华云数据技术服务有限公司 | Big document distribution method based on CDN |
CN107463342B (en) * | 2017-08-28 | 2021-04-20 | 北京奇艺世纪科技有限公司 | CDN edge node file storage method and device |
CN110096227B (en) * | 2019-03-28 | 2023-04-18 | 北京奇艺世纪科技有限公司 | Data storage method, data processing device, electronic equipment and computer readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |