CN111984191A - Multi-client caching method and system supporting distributed storage
- Publication number
- CN111984191A (application number CN202010780003.7A)
- Authority
- CN
- China
- Prior art keywords
- cache
- metadata
- data
- client
- distributed storage
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0614—Improving the reliability of storage systems
- G06F3/0616—Improving the reliability of storage systems in relation to life time, e.g. increasing Mean Time Between Failures [MTBF]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0614—Improving the reliability of storage systems
- G06F3/0617—Improving the reliability of storage systems in relation to availability
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0656—Data buffering arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
Abstract
The invention provides a multi-client caching method and system supporting distributed storage. The system comprises: module M1: the cache client synchronously loads cache data onto a local cache device and generates a corresponding cache metadata file, which it submits to the cache server; module M2: the cache server manages the cache metadata files in a unified way and provides centralized state monitoring, configuration management, and global cache-data location metadata services for the whole distributed storage system. By serving reads from locally cached data, the invention reduces network read latency, frees bandwidth, and improves the overall read/write efficiency of the distributed storage system.
Description
Technical Field
The invention relates to the field of distributed storage, and in particular to a multi-client caching method and system supporting distributed storage.
Background
In a distributed storage system, data is spread across different physical server nodes, and server-side cache acceleration is provided by high-speed storage devices such as SSDs (solid-state drives) and memory. In most scenarios, however, a remote client must access data across nodes over the network, and the latency of such remote reads is much higher than that of direct access to a local storage device. Moreover, network transfers generally pass through the network protocol stack, incurring significant processing overhead; for IO-intensive applications in particular, this severely hurts read/write and processing efficiency. The invention therefore provides a method for multi-client distributed caching on top of a distributed storage system, improving overall performance by increasing local data access and reducing the access latency and system resource overhead associated with the network.
Patent document CN102541983B (application number 201110326365.X) discloses a method for synchronizing the caches of multiple clients in a distributed file system. It uses a metadata server as the control node for client cache information, recording on the metadata server, for each inode it maintains, the storage state at each client. Metadata is divided into read-only and writable caches according to the client's cache attributes. For the read-only metadata cache, when a client first reads metadata, the metadata server grants it read-only or writable cache permission, which the client keeps after the operation completes. For the writable metadata cache, client modifications are staged locally and written back when a write-back trigger condition is met.
Disclosure of Invention
To address the above defects in the prior art, the invention aims to provide a multi-client caching method and system supporting distributed storage.
The invention provides a multi-client cache system supporting distributed storage, comprising:
module M1: the cache client synchronously loads cache data onto a local cache device and generates a corresponding cache metadata file, which it submits to the cache server;
module M2: the cache server manages the cache metadata files in a unified way and provides centralized state monitoring, configuration management, and global cache-data location metadata services for the whole distributed storage system.
Preferably, the module M1 comprises: establishing a cache metadata file for the cache data in key-value form.
Preferably, the module M1 comprises: the cache client interacts with the cache server on the same node through shared memory, and with cache servers on other nodes through sockets.
Preferably, the metadata file includes the data distribution, data copies, data content checksum, and data version of the cache data.
Preferably, the module M2 comprises:
module M2.1: the cache server writes, reads, updates and deletes entries in the cache metadata file;
module M2.2: the cache server performs intra-node cache data distribution, cache selection and replacement based on the cache metadata file;
module M2.3: the cache server continuously listens on a preset port to serve cross-node requests from other cache clients.
Preferably, said module M2.1 comprises: when the file is read and written through the posix mode, specific file operation is captured through the read and write filter.
Preferably, said module M2.1 comprises: the writing of the metadata file comprises performing distributed cache writing operation by using SSD equipment;
the reading of the metadata file comprises: when the client accesses the local cache data, the client establishes communication with the cache metadata file, reads the metadata information of the cache data and queries the cache data;
and reading the metadata file by using an HDD and NVMe storage equipment to read the cache data.
Preferably, the module M2.2 comprises: when the cache space has sufficient remaining capacity, cache data is selected and added to the cache; when cache space is insufficient, part of the cached data is selected for replacement according to access frequency; and as data access frequency increases, cached data is migrated from low-speed HDDs to high-speed SSD devices to speed up subsequent access.
Preferably, the cache metadata service comprises: the cache metadata files are deployed on metadata nodes, and a metadata service cluster is built in primary/standby mode; the primary metadata node provides metadata services externally, and when the primary node goes down or the network fails, the standby node takes over and continues to provide the services;
the metadata service cluster built in primary/standby mode synchronizes metadata based on time checkpoints and heartbeats: metadata consistency between the primary and standby metadata nodes is checked at the heartbeat interval and at fixed time checkpoints, and when the metadata is inconsistent, the standby node's metadata is updated.
The multi-client caching method supporting distributed storage provided by the invention comprises the following steps:
step M1: the cache client synchronously loads cache data onto a local cache device and generates a corresponding cache metadata file, which it submits to the cache server;
step M2: the cache server manages the cache metadata files in a unified way and provides centralized state monitoring, configuration management, and global cache-data location metadata services for the whole distributed storage system.
Compared with the prior art, the invention has the following beneficial effects:
1. By serving reads from locally cached data, the invention reduces network read latency, frees bandwidth, and improves the overall read/write efficiency of distributed storage;
2. The invention adopts a cache read-write separation strategy that assigns different cache devices to sequential read/write, random read, and random write operations: the HDD handles sequential reads and writes, the SSD handles random reads, and NVMe handles random writes. This finer-grained division improves the utilization of each storage device;
3. Under the same bandwidth, the method effectively reduces the bandwidth pressure caused by repeated data access and improves the processing efficiency of IO-intensive applications.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a system architecture diagram;
Detailed Description
The present invention will now be described in detail with reference to specific embodiments. The following examples will help those skilled in the art further understand the invention, but do not limit it in any way. It should be noted that various changes and modifications can be made by those of ordinary skill in the art without departing from the spirit of the invention; all of these fall within the scope of the present invention.
Example 1
The invention provides a multi-client cache system supporting distributed storage, comprising:
module M1: the cache client synchronously loads cache data onto a local cache device and generates a corresponding cache metadata file, which it submits to the cache server;
module M2: the cache server manages the cache metadata files in a unified way and provides centralized state monitoring, configuration management, and global cache-data location metadata services for the whole distributed storage system.
Specifically, the module M1 comprises: establishing a cache metadata file for the cache data in key-value form.
Specifically, the module M1 comprises: the cache client interacts with the cache server on the same node through shared memory, and with cache servers on other nodes through sockets.
Specifically, the metadata file includes the data distribution, data copies, data content checksum, and data version of the cache data.
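As an illustration, such a key-value metadata record could be sketched as follows. The field names (`distribution`, `replicas`, `checksum`, `version`) and the use of SHA-256 and JSON are assumptions; the text only states which four kinds of information are recorded.

```python
import hashlib
import json

def make_cache_metadata(path, data, nodes, version=1):
    """Build one illustrative key-value metadata record for cached data.
    Field names and the SHA-256 checksum are assumptions; the text only
    says distribution, copies, checksum and version are recorded."""
    return {
        "key": path,                                   # identifies the cache data
        "distribution": nodes,                         # nodes holding the data
        "replicas": len(nodes),                        # number of data copies
        "checksum": hashlib.sha256(data).hexdigest(),  # content integrity check
        "version": version,                            # version for consistency checks
    }

record = make_cache_metadata("/vol1/file.dat", b"hello", ["node-a", "node-b"])
print(json.dumps(record, indent=2))
```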
Specifically, the module M2 comprises:
module M2.1: the cache server writes, reads, updates and deletes entries in the cache metadata file;
module M2.2: the cache server performs intra-node cache data distribution, cache selection and replacement based on the cache metadata file;
module M2.3: the cache server continuously listens on a preset port to serve cross-node requests from other cache clients.
In particular, said module M2.1 comprises: when the file is read and written through the posix mode, specific file operation is captured through the read and write filter.
In particular, said module M2.1 comprises: the writing of the metadata file comprises performing distributed cache writing operation by using SSD equipment;
the reading of the metadata file comprises: when the client accesses the local cache data, the client establishes communication with the cache metadata file, reads the metadata information of the cache data and queries the cache data;
and reading the metadata file by using an HDD and NVMe storage equipment to read the cache data.
Specifically, the module M2.2 comprises: when the cache space has sufficient remaining capacity, cache data is selected and added to the cache; when cache space is insufficient, part of the cached data is selected for replacement according to access frequency; and as data access frequency increases, cached data is migrated from low-speed HDDs to high-speed SSD devices to speed up subsequent access.
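The selection, replacement and migration policy described above can be sketched as a small frequency-driven cache. The capacity, the promotion threshold, and least-frequently-used eviction are illustrative assumptions; the text says only that replacement and HDD-to-SSD migration follow access frequency.

```python
class TieredCache:
    """Sketch of module M2.2: admit cache data while space remains, replace
    the least-frequently-accessed entry when full, and migrate hot entries
    from the low-speed HDD tier to the high-speed SSD tier."""

    def __init__(self, capacity=3, hot_threshold=3):
        self.capacity = capacity
        self.hot_threshold = hot_threshold
        self.freq = {}   # key -> access count
        self.tier = {}   # key -> "hdd" or "ssd"

    def access(self, key):
        if key not in self.freq and len(self.freq) >= self.capacity:
            victim = min(self.freq, key=self.freq.get)  # coldest entry
            del self.freq[victim]
            del self.tier[victim]
        self.freq[key] = self.freq.get(key, 0) + 1
        # Migrate from HDD to SSD as access frequency grows.
        self.tier[key] = "ssd" if self.freq[key] >= self.hot_threshold else "hdd"

cache = TieredCache()
for key in ["a", "a", "a", "b", "c", "d"]:   # "d" forces a replacement
    cache.access(key)
print(cache.tier)   # hot "a" promoted to SSD; a cold entry replaced by "d"
```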
Specifically, the cache metadata service comprises: the cache metadata files are deployed on metadata nodes, and a metadata service cluster is built in primary/standby mode; the primary metadata node provides metadata services externally, and when the primary node goes down or the network fails, the standby node takes over and continues to provide the services;
the metadata service cluster built in primary/standby mode synchronizes metadata based on time checkpoints and heartbeats: metadata consistency between the primary and standby metadata nodes is checked at the heartbeat interval and at fixed time checkpoints, and when the metadata is inconsistent, the standby node's metadata is updated.
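One heartbeat/checkpoint round of the primary/standby metadata synchronization could be sketched as follows. Comparing SHA-256 digests of the serialized metadata is an assumption, since the text does not say how consistency is checked or how the update is transferred.

```python
import hashlib
import json

def metadata_digest(meta):
    """Cheap digest for comparing primary and standby metadata."""
    return hashlib.sha256(json.dumps(meta, sort_keys=True).encode()).hexdigest()

def heartbeat_sync(primary, standby):
    """One heartbeat/checkpoint round: if the digests differ, overwrite the
    standby's metadata with the primary's. Returns True when an update was
    performed."""
    if metadata_digest(primary) == metadata_digest(standby):
        return False
    standby.clear()
    standby.update(primary)   # bring the standby back in line with the primary
    return True

primary = {"/f1": {"version": 2}}
standby = {"/f1": {"version": 1}}
updated = heartbeat_sync(primary, standby)
print(updated, standby)
```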
The multi-client caching method supporting distributed storage provided by the invention comprises the following steps:
step M1: the cache client synchronously loads cache data onto a local cache device and generates a corresponding cache metadata file, which it submits to the cache server;
step M2: the cache server manages the cache metadata files in a unified way and provides centralized state monitoring, configuration management, and global cache-data location metadata services for the whole distributed storage system.
Example 2
Example 2 is a modification of Example 1.
In view of the above-mentioned drawbacks of the prior art, the technical problems to be solved by the present invention are as follows:
As shown in Fig. 1:
1) Localized distributed data reads: this mainly addresses the high network and system overhead incurred when a client reads distributed data that must travel over the network.
A client caching mechanism is proposed: after receiving a read request, the client synchronously reads the data actually stored in the distributed storage system into a local cache device; when the requested file is already in the local cache, the client reads it directly from local storage, reducing data access latency and network bandwidth pressure;
2) Client cache read-write policy: this mainly addresses device lifetime when the client uses SSD storage devices for cache acceleration, since SSDs have a much shorter service life than ordinary HDDs.
Data read and write operations are captured at the client; read operations are performed directly on HDD devices, while write operations are distributed to SSD devices;
3) Cache data consistency: this mainly addresses the consistency of data cached locally by multiple clients.
A metadata description file is established for the cache data in key-value form; each piece of cache data corresponds to a metadata file containing its data distribution, data copies, data content checksum, data version, and similar information.
This solves the high data access latency and high network transfer overhead caused by data not being local when a distributed file system is read and written from a client.
The system uses localized data reads. When a user reads or writes a file through the POSIX interface, specific file operations such as createfile, readfile, writefile and closefile are captured by a read-write filter, and the corresponding cache read or write operation is performed according to the user's operation. The system also uses cache-data metadata management together with a read-write cache separation mechanism: a local data cache is built at the distributed storage client, storage devices such as HDDs and NVMe (Non-Volatile Memory Express) drives are used to read cache data, and SSD devices are used for distributed cache write operations. The cache client generates metadata information for the cache data and submits it through the cache service module to the metadata service module, which manages the cache metadata in a unified way, including writing, reading, updating and deleting metadata. When the client accesses local cache data, it establishes communication with the cache metadata service and, by reading the metadata information of the cache data, queries the data's distribution, consistency, and so on.
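A toy version of the read-write filter's dispatch might look like this. The device mapping follows this paragraph (reads served from HDD/NVMe cache devices, distributed cache writes landing on SSD); the function name, its arguments, and the "passthrough" case are assumptions.

```python
def route_operation(op, access_pattern="sequential"):
    """Hypothetical read-write filter dispatch: map a captured POSIX-style
    file operation (createfile/readfile/writefile/closefile) to a cache
    device, with HDD/NVMe serving cache reads and SSD taking distributed
    cache writes."""
    if op in ("createfile", "writefile"):
        return "ssd"                       # distributed cache write on SSD
    if op == "readfile":
        # Cache reads are served from HDD or NVMe devices.
        return "hdd" if access_pattern == "sequential" else "nvme"
    return "passthrough"                   # e.g. closefile: no cache device

print(route_operation("writefile"))            # ssd
print(route_operation("readfile", "random"))   # nvme
```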
Since the metadata file records only a small amount of information, its impact on network bandwidth is almost negligible. With the distributed client caching mechanism, the huge network overhead of repeatedly reading the same data is greatly reduced: local storage space is traded for network read time, improving the read/write efficiency of the distributed storage system.
The system is a distributed storage system design and a distributed multi-client cache implementation for IO-intensive application scenarios.
Based on the steps of the method above, the distributed storage multi-client cache system is implemented and mainly comprises the following modules.
1) Cache metadata service module
The cache metadata service module is deployed on the metadata node in Fig. 1. It receives metadata information submitted by the cache server and manages it in a unified way, providing centralized state monitoring, configuration management, global cache-data location, and other metadata services for the whole distributed storage system. It can build a highly available metadata service cluster in primary/standby mode: the primary metadata node provides metadata services externally, and when the primary node goes down or the network fails, the standby node takes over and continues to provide the services, ensuring high availability. The cache metadata service provides the state monitoring and configuration management functions of the multi-client cache;
it provides metadata-based distributed lookup and location of cache data;
it builds a highly available global metadata service in primary/standby mode, synchronizing metadata based on time checkpoints and heartbeats: metadata consistency between the primary and standby nodes is checked at the heartbeat interval and at fixed time checkpoints, and the standby node's metadata is updated when an inconsistency is found;
it maintains the global metadata of the multi-client cache, mainly implementing the lookup mapping from cache data to cache nodes;
it maintains node-level local metadata, mainly implementing intra-node lookup mapping of cache data and tracking of cache state;
it provides persistence of the cache metadata: lookups are served directly from memory to speed up queries, and every metadata update is persisted directly to the storage device.
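The persistence behavior just described (in-memory lookups, with every update written through to the storage device) can be sketched as follows; the JSON file format and the class interface are assumptions.

```python
import json
import os
import tempfile

class MetadataService:
    """Sketch of the persistence behavior: metadata lookups are served from
    memory, and each update is also written straight to the storage device."""

    def __init__(self, store_path):
        self.store_path = store_path
        self.meta = {}
        if os.path.exists(store_path):       # recover persisted metadata
            with open(store_path) as f:
                self.meta = json.load(f)

    def get(self, key):
        return self.meta.get(key)            # in-memory lookup only

    def update(self, key, value):
        self.meta[key] = value
        with open(self.store_path, "w") as f:  # persist every update
            json.dump(self.meta, f)

path = os.path.join(tempfile.mkdtemp(), "cache_meta.json")
svc = MetadataService(path)
svc.update("/f1", {"version": 1})
recovered = MetadataService(path)            # simulates a service restart
print(recovered.get("/f1"))                  # metadata survived the restart
```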
2) Cache server module
The cache server module is deployed on the client nodes in Fig. 1. It manages the cache metadata on the client, performs intra-node cache data distribution, cache selection and replacement, and continuously listens on a designated port to serve cross-node requests from other cache clients.
It performs cache selection, replacement, migration and other management functions based on the global and node-level metadata: when the cache space has remaining capacity, cache data is selected and added to the cache; when cache space is insufficient, part of the cached data is selected for replacement according to access frequency; and as data access frequency increases, cached data is migrated from low-speed HDDs to high-speed SSD devices to speed up subsequent access.
It continuously listens on a designated port to serve cross-node requests from other cache clients, making the client-side cache highly available.
3) Cache client module
The cache client module synchronously loads cache data onto the local cache device and generates the corresponding metadata file, which it submits to the cache server module. It performs cache data access and related operations; it interacts with the cache server on the same node through shared memory and similar means, and when necessary interacts with cache servers on other nodes through sockets.
It supports write-cache policy management, mainly including the following cache policies:
write-around caching: only read data is cached; written data is not cached;
synchronous write (write-through) caching: data is written to the cache and the back-end store at the same time;
asynchronous write (write-back) caching: data is written only to the cache and is written back to back-end storage when cache replacement requires it;
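The three write policies above can be sketched in standard caching terms (write-around, write-through, write-back); the dict-backed stores stand in for the real cache device and back-end store.

```python
def apply_write(policy, key, data, cache, backend):
    """The three write policies, with dicts standing in for devices."""
    if policy == "write-around":     # don't cache writes
        backend[key] = data
    elif policy == "write-through":  # synchronous: cache + backend together
        cache[key] = data
        backend[key] = data
    elif policy == "write-back":     # asynchronous: cache only, flush later
        cache[key] = data
    else:
        raise ValueError(policy)

cache, backend = {}, {}
apply_write("write-through", "/f1", b"x", cache, backend)
apply_write("write-back", "/f2", b"y", cache, backend)
apply_write("write-around", "/f3", b"z", cache, backend)
print(sorted(cache), sorted(backend))
```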
It supports separation of read and write operations, filtering the different file operations and applying a different policy to each:
sequential read/write: sequential read and write operations are performed on the HDD;
random read: data is loaded from the HDD to the SSD as its heat increases;
random write: when data is written frequently and its heat increases, it is loaded from the HDD or SSD into NVMe;
It communicates with the cache server on the same node through shared memory, giving shared access to the node-level metadata;
it interacts with upper-layer applications, providing read/write interfaces for cached data and implementing the cache data access logic.
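The heat-driven device placement for the three operation classes above can be sketched as a single dispatch function; the heat threshold is an illustrative assumption.

```python
def place(op, heat):
    """Heat-driven device choice for the three operation classes:
    sequential I/O stays on HDD; random reads are promoted HDD -> SSD as
    heat grows; hot random writes are promoted into NVMe."""
    if op == "sequential":
        return "hdd"
    if op == "random-read":
        return "ssd" if heat >= 3 else "hdd"
    if op == "random-write":
        return "nvme" if heat >= 3 else "ssd"
    raise ValueError(op)

print(place("sequential", 10))   # hdd
print(place("random-read", 5))   # ssd
print(place("random-write", 1))  # ssd
```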
Example 3
Example 3 is a modification of example 1 and/or example 2
Storage devices such as HDDs, SSDs and NVMe (Non-Volatile Memory Express) drives are managed in a unified way at the distributed storage client. When data is read, cache loading is performed by the cache client; when data is written, synchronous or asynchronous cache write operations are performed according to the write policy. Read and write operations are classified by a file-operation filter; cache-data metadata guarantees data distribution queries, consistency, and related properties; and the metadata service is deployed in highly available primary/standby mode.
Those skilled in the art will appreciate that, besides implementing the systems, apparatuses and their modules provided by the invention purely as computer-readable program code, the method steps can be logically programmed so that the systems, apparatuses and their modules take the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. The systems, apparatuses and their modules provided by the invention can therefore be regarded as hardware components, and the modules they contain for implementing various programs can also be regarded as structures within those hardware components; modules implementing various functions can be regarded both as software programs implementing the method and as structures within the hardware components.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.
Claims (10)
1. A multi-client cache system supporting distributed storage, comprising:
module M1: the cache client synchronously loads cache data onto a local cache device and generates a corresponding cache metadata file, which it submits to the cache server;
module M2: the cache server manages the cache metadata files in a unified way and provides centralized state monitoring, configuration management, and global cache-data location metadata services for the whole distributed storage system.
2. The multi-client cache system supporting distributed storage according to claim 1, wherein the module M1 comprises: establishing a cache metadata file for the cache data in key-value form.
3. The multi-client cache system supporting distributed storage according to claim 1, wherein the module M1 comprises: the cache client interacts with the cache server on the same node through shared memory, and with cache servers on other nodes through sockets.
4. The multi-client cache system supporting distributed storage according to claim 1, wherein the metadata file comprises the data distribution, data copies, data content checksum, and data version of the cache data.
5. The multi-client cache system supporting distributed storage according to claim 1, wherein the module M2 comprises:
module M2.1: the cache server writes, reads, updates and deletes entries in the cache metadata file;
module M2.2: the cache server performs intra-node cache data distribution, cache selection and replacement based on the cache metadata file;
module M2.3: the cache server continuously listens on a preset port to serve cross-node requests from other cache clients.
6. The multi-client cache system supporting distributed storage according to claim 5, wherein the module M2.1 comprises: when files are read and written through the POSIX interface, specific file operations are captured by a read-write filter.
7. The multi-client cache system supporting distributed storage according to claim 5, wherein the module M2.1 comprises: the writing of the metadata file comprises performing distributed cache write operations using an SSD device;
the reading of the metadata file comprises: when the client accesses local cache data, it establishes communication with the cache metadata file, reads the metadata information of the cache data, and queries the cache data;
and the cache data is read from HDD and NVMe storage devices according to the metadata file.
8. The multi-client cache system supporting distributed storage according to claim 5, wherein the module M2.2 comprises: when the remaining capacity of the cache space is sufficient, cache data is selected and added to the cache; when the cache space is insufficient, part of the cached data is selected for replacement according to access frequency; and as the access frequency of data increases, cached data is migrated from the low-speed HDD to the high-speed SSD device, improving subsequent access speed.
9. The multi-client caching system supporting distributed storage according to claim 1, wherein the cache metadata file comprises: the cache metadata file is deployed on metadata nodes, and a metadata service cluster is constructed in a primary/standby mode, wherein the primary metadata node provides metadata services externally, and when the primary node goes down or the network is abnormal, the standby node takes over and continues to provide the services;
the metadata service cluster constructed in the primary/standby mode synchronizes metadata based on time checkpoints and heartbeats: the consistency of the metadata on the primary and standby nodes is checked at the heartbeat interval and at fixed time checkpoints, and when the metadata is inconsistent, the metadata on the standby node is updated.
10. A multi-client caching method supporting distributed storage, comprising:
step M1: a cache client synchronously loads cache data onto a local cache device and generates a corresponding cache metadata file, which is submitted to a cache server;
step M2: the cache server performs unified management of the cache metadata files and provides centralized metadata services of state monitoring, configuration management, and global cache data location for the whole distributed storage system.
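Claims 2 and 4 describe the cache metadata file as key-value pairs recording data distribution, data copies, a content checksum, and a data version. A minimal sketch of building one such entry follows; the field names (`location`, `replicas`, `checksum`, `version`) are illustrative assumptions, since the claims do not fix a schema:

```python
import hashlib
import json


def build_cache_metadata(object_id, node, replicas, content):
    """Build a key-value cache-metadata entry for one cached object.

    The field names here are hypothetical; the claims only require that the
    metadata record data distribution, copies, a content checksum, and a
    data version.
    """
    value = {
        "location": node,        # node holding the cached copy (data distribution)
        "replicas": replicas,    # nodes holding additional copies
        "checksum": hashlib.sha256(content).hexdigest(),  # content checksum
        "version": 1,            # data version used for consistency checks
    }
    return object_id, value


key, meta = build_cache_metadata("obj-42", "node-a", ["node-b"], b"payload")
record = json.dumps({key: meta})  # serialized form submitted to the cache server
```

The key is the object identifier and the value carries all claim-4 fields, so the server can index, compare versions, and verify content without touching the cached data itself.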
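Claim 8's admission, replacement, and HDD-to-SSD migration policy can be sketched as a frequency-driven two-tier cache. The capacity limit, the promotion threshold, and the least-frequently-accessed eviction rule below are assumptions; the claim only requires replacement by access frequency and migration to the faster device as frequency grows:

```python
from collections import Counter


class TieredCache:
    """Sketch of claim 8: frequency-based replacement and tier migration."""

    def __init__(self, capacity, promote_threshold=3):
        self.capacity = capacity                  # total entries the cache may hold
        self.promote_threshold = promote_threshold
        self.tier = {}                            # object_id -> "hdd" or "ssd"
        self.freq = Counter()                     # access count per object

    def access(self, object_id):
        if object_id not in self.tier:
            if len(self.tier) >= self.capacity:
                # cache full: replace the least frequently accessed entry
                victim = min(self.tier, key=lambda k: self.freq[k])
                del self.tier[victim]
                del self.freq[victim]
            self.tier[object_id] = "hdd"          # new data enters the slow tier
        self.freq[object_id] += 1
        # hot data migrates from the low-speed HDD to the high-speed SSD
        if self.freq[object_id] >= self.promote_threshold:
            self.tier[object_id] = "ssd"
        return self.tier[object_id]
```

After three accesses an object is promoted to the SSD tier, while a full cache evicts its coldest entry before admitting new data, mirroring the claim's "replace by access frequency, migrate as frequency increases" behavior.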
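Claim 9's checkpoint-based synchronization between primary and standby metadata nodes reduces, at each heartbeat or fixed time checkpoint, to comparing metadata versions and updating the standby where they differ. Representing each node's metadata as a dict mapping object id to version is an assumption made purely for illustration:

```python
def reconcile_standby(primary_meta, standby_meta):
    """One checkpoint pass from claim 9: bring standby metadata in line
    with the primary.

    Both arguments are plain dicts of object_id -> version; a real
    implementation would exchange this state over the heartbeat channel
    between the primary and standby metadata nodes.
    """
    updated = []
    for object_id, version in primary_meta.items():
        if standby_meta.get(object_id) != version:
            standby_meta[object_id] = version  # update the standby's stale entry
            updated.append(object_id)
    return updated


primary = {"a": 2, "b": 1}
standby = {"a": 1}
changed = reconcile_standby(primary, standby)  # standby now matches primary
```

Because the standby converges at every checkpoint, it can take over immediately when the primary goes down or the network fails, as the claim requires.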
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010780003.7A CN111984191A (en) | 2020-08-05 | 2020-08-05 | Multi-client caching method and system supporting distributed storage |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111984191A (en) | 2020-11-24 |
Family
ID=73445091
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112468601A (en) * | 2021-02-03 | 2021-03-09 | 柏科数据技术(深圳)股份有限公司 | Data synchronization method, access method and system of distributed storage system |
CN113535094A (en) * | 2021-08-06 | 2021-10-22 | 上海德拓信息技术股份有限公司 | Cross-platform client implementation method based on distributed storage |
CN114461148A (en) * | 2022-01-26 | 2022-05-10 | 北京金山云网络技术有限公司 | Object storage method, device and system, electronic equipment and storage medium |
CN115098045A (en) * | 2022-08-23 | 2022-09-23 | 成都止观互娱科技有限公司 | Data storage system and network data reading and writing method |
CN116048425A (en) * | 2023-03-09 | 2023-05-02 | 浪潮电子信息产业股份有限公司 | Hierarchical caching method, hierarchical caching system and related components |
CN117193670A (en) * | 2023-11-06 | 2023-12-08 | 之江实验室 | Method and device for clearing cache, storage medium and electronic equipment |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102014158A (en) * | 2010-11-29 | 2011-04-13 | 北京兴宇中科科技开发股份有限公司 | Cloud storage service client high-efficiency fine-granularity data caching system and method |
CN102541983A (en) * | 2011-10-25 | 2012-07-04 | 无锡城市云计算中心有限公司 | Method for synchronously caching by multiple clients in distributed file system |
US20140344391A1 (en) * | 2012-12-13 | 2014-11-20 | Level 3 Communications, Llc | Content Delivery Framework having Storage Services |
CN104239572A (en) * | 2014-09-30 | 2014-12-24 | 普元信息技术股份有限公司 | System and method for achieving metadata analysis based on distributed cache |
CN104317736A (en) * | 2014-09-28 | 2015-01-28 | 曙光信息产业股份有限公司 | Method for implementing multi-level caches in distributed file system |
CN104469391A (en) * | 2014-11-21 | 2015-03-25 | 深圳市天威视讯股份有限公司 | Cloud-platform-based digital television content distribution system and method |
CN105930519A (en) * | 2016-05-23 | 2016-09-07 | 浪潮电子信息产业股份有限公司 | Globally shared read caching method based on cluster file system |
CN106648464A (en) * | 2016-12-22 | 2017-05-10 | 柏域信息科技(上海)有限公司 | Multi-node mixed block cache data read-writing method and system based on cloud storage |
Non-Patent Citations (1)
Title |
---|
ZHONG Yunqin; FANG Jinyun; ZHAO Xiaofang: "Research on Distributed Storage Methods for Large-Scale Spatiotemporal Data", High Technology Letters, no. 12, pages 11 - 21 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||