CN106850856A - A distributed storage system and its cache synchronization method - Google Patents

A distributed storage system and its cache synchronization method

Info

Publication number
CN106850856A
CN106850856A (application CN201710190374.8A)
Authority
CN
China
Prior art keywords
client
data
caching
cache
buffer area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710190374.8A
Other languages
Chinese (zh)
Inventor
金友兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Zhuo Shengyun Mdt Infotech Ltd
Original Assignee
Nanjing Zhuo Shengyun Mdt Infotech Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Zhuo Shengyun Mdt Infotech Ltd
Priority to CN201710190374.8A
Publication of CN106850856A
Legal status: Pending

Classifications

    • H: Electricity
    • H04: Electric communication technique
    • H04L: Transmission of digital information, e.g. telegraphic communication
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/1097: Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H04L 67/50: Network services
    • H04L 67/56: Provisioning of proxy services
    • H04L 67/568: Storing data temporarily at an intermediate stage, e.g. caching

Abstract

The invention discloses a distributed storage system comprising several clients, a metadata node, a monitor node, and several storage nodes connected through a network switch; a cache synchronization module is provided in each client, and a caching record module is provided in each storage node. In the corresponding cache synchronization method, a cache synchronization module is deployed in each client and a caching record module on each storage node of the distributed storage service side; read and write operations on data are intercepted, cache processing is inserted in between, and data are finally read at the client. The invention improves the hit rate of the local client cache while guaranteeing data consistency, and thereby improves read performance in the distributed storage system.

Description

A distributed storage system and its cache synchronization method
Technical field
The invention belongs to the technical field of data storage, and in particular relates to a distributed storage system and its cache synchronization method.
Background technology
As shown in Figure 1, the service side of an existing distributed storage cluster includes metadata nodes, monitor nodes, and storage nodes. The storage nodes are the most numerous: they hold the users' data and directly serve read and write requests from clients. The monitor node (called a management node in some systems) mainly records the state of all service-side nodes and clients; it communicates with the other nodes every few seconds to determine whether each service-side node and client is alive.
To improve read/write performance, both clients and service-side storage nodes may cache data to speed up reads and writes. But a client read/write cache raises harder problems in a distributed system. If a client buffers written data locally and then crashes or fails, the data are lost. If a client reads directly from its local cache, another client may have updated the data in the meantime; the local cache cannot see that update, so stale data are returned.
Therefore, in distributed systems, client programs usually do not cache data and instead process requests synchronously, with caching provided only on the service-side storage nodes. Yet a reliable client-side cache would significantly improve storage performance. Because a client write cache risks data loss, the present invention is designed to accelerate the client read path.
Summary of the invention
Goal of the invention: To address the problems in the prior art, the present invention records the cache status of every client on the service-side storage nodes; when the service side updates data, it notifies the affected clients to discard their local cache entries, thereby accelerating client reads. Through this multi-point cache synchronization, clients are prevented from obtaining stale cached data when reading, yielding a reliable distributed storage system and cache synchronization method.
Technical scheme: To solve the above technical problems, the present invention provides a distributed storage system comprising several clients, a metadata node, a monitor node, and several storage nodes connected through a network switch; a cache synchronization module is provided in each client, and a caching record module is provided in each storage node;
Storage node: holds the users' data and directly serves read and write requests from clients;
Monitor node: records the state of all service-side nodes and clients.
A cache synchronization method for the distributed storage system described above comprises the following steps: a cache synchronization module is deployed in each client, and a caching record module on each storage node of the distributed storage service side; read and write operations on data are intercepted, cache processing is inserted in between, and data are finally read at the client.
Further, the cache processing for a write operation proceeds as follows:
Step 1: the file writing module receives the write request, splits the file into smaller objects, and assigns each object a unique number (ID);
Step 2: the cache synchronization module starts and sets up a buffer of a certain size, then saves the object data content and object number of each write request; it then judges whether the buffer is full: if full, the oldest cached object is evicted; if not, Step 3 is performed, saving the object (number and content) to the local cache and marking it as being in a "transient" state;
Step 3: after the local cache has saved the data, the client sends the object content, the object ID, and its own client ID to the server, keeping the cached entry marked "transient";
Step 4: on receiving the data, the service side saves the object content to permanent storage, then updates the object ID and client ID in the service-side cache records;
Step 5: the cache records are checked for other entries with this object ID; if none exist, the client is notified that the write has completed, the "transient" mark is removed, and the write ends; otherwise Step 6 is performed;
Step 6: the records for the object ID are traversed, the other clients holding it are notified that their cached copies are invalid, and the results are awaited asynchronously; if a result arrives, it is returned and the write ends; otherwise Step 7 is performed;
Step 7: the monitor node records a whole-cache failure for each client that did not return a result; the client's periodic communication task then triggers its cache synchronization module to discard all of that client's cached entries.
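The seven steps above can be sketched as a toy protocol. This is an illustrative model only, not the patent's implementation: message passing is shown as direct method calls, the asynchronous wait and monitor-node fallback of Steps 6 and 7 are omitted, and all class and attribute names are invented.

```python
class StorageNode:
    """Service side: permanent store plus the caching record module."""
    def __init__(self):
        self.store = {}            # object_id -> content (permanent media)
        self.cache_records = {}    # object_id -> set of client IDs caching it

    def write(self, object_id, content, writer_id, clients):
        self.store[object_id] = content                      # Step 4
        holders = self.cache_records.setdefault(object_id, set())
        stale = holders - {writer_id}
        for other_id in stale:                               # Step 6: tell every
            clients[other_id].invalidate(object_id)          # OTHER holder its
        holders -= stale           # acknowledged: they no longer cache it
        holders.add(writer_id)     # the writer now caches the fresh copy


class Client:
    """Client side: local cache with the 'transient' flag of Steps 2-3."""
    def __init__(self, client_id):
        self.client_id = client_id
        self.cache = {}            # object_id -> (content, transient_flag)

    def write(self, object_id, content, node, clients):
        self.cache[object_id] = (content, True)              # Step 3: transient
        node.write(object_id, content, self.client_id, clients)
        self.cache[object_id] = (content, False)             # Step 5: ack clears flag

    def invalidate(self, object_id):
        self.cache.pop(object_id, None)                      # discard stale copy
```

Running a write from one client while another holds a cached copy shows the invariant the patent is after: after the write, no client holds a stale version of the object.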
Further, the cache processing for a read operation proceeds as follows:
Step 1: the file reading module receives the read request and checks whether the local cache contains the object data; if so, the data are returned to the requester and the read ends; if not, Step 2 is performed;
Step 2: the server receives the read request, fetches the object data from service-side storage, updates the object ID and client ID in its cache records, and returns the data object to the client; the client then checks whether its local cache is full: if full, the oldest cached object is evicted before Step 3; otherwise Step 3 is entered directly;
Step 3: the object data (number and content) are saved to the local cache and the read data are returned; the read ends.
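The three read steps reduce to a cache-aside lookup. A minimal sketch, assuming a plain dict as the local cache and an entry-count limit in place of the byte-ratio test; the `server` interface and all names are illustrative:

```python
def read_object(object_id, local_cache, server, max_entries=1024):
    """Step 1: return a local hit; Step 2: otherwise fetch from the
    storage node, evicting the oldest entry if the cache is full;
    Step 3: save the object locally and return it."""
    if object_id in local_cache:
        return local_cache[object_id]            # Step 1: local hit
    data = server.read(object_id)                # Step 2: server fetch
    if len(local_cache) >= max_entries:
        oldest = next(iter(local_cache))         # dicts keep insertion order
        del local_cache[oldest]
    local_cache[object_id] = data                # Step 3: cache and return
    return data
```

A second read of the same object never reaches the server, which is exactly the hit-rate gain the patent claims for the client-side cache.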
Further, in Step 2 of the write-path cache processing, the criterion for whether the buffer is full is whether the space in use exceeds a set ratio (for example 90 percent): if it exceeds the ratio, the buffer is judged full; otherwise it is judged not full.
Further, in Step 2 of the read-path cache processing, the criterion for whether the cache is full is likewise whether the space in use exceeds a set ratio (for example 90 percent): if it exceeds the ratio, the cache is judged full; otherwise it is judged not full.
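Both fullness tests reduce to a single ratio comparison; a minimal sketch, with the 90 percent example above as the default (the function name and parameters are illustrative):

```python
def buffer_is_full(used_bytes, capacity_bytes, threshold=0.90):
    """A buffer counts as full once the space in use exceeds the
    configured ratio (90 percent in the example above), which is
    the trigger for evicting the oldest cached object."""
    return used_bytes > threshold * capacity_bytes
```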
Compared with the prior art, the advantages of the invention are:
Throughout the flow, a client crash merely invalidates that node's cache, without affecting data consistency. If a service-side storage node crashes, or nodes are added or removed, all clients are notified to discard their local cache data, again guaranteeing consistency. Thus for read-heavy, write-light applications such as Web services, the scheme adds some write load on the server but markedly improves read performance and response speed.
The invention improves the hit rate of the local client cache while guaranteeing data consistency, and thereby improves read performance in the distributed storage system.
Brief description of the drawings
Fig. 1 is a structural diagram of a distributed storage system in the prior art;
Fig. 2 is a structural diagram of the present invention;
Fig. 3 is the overall flowchart of the cache processing performed by the invention for a write operation;
Fig. 4 is the overall flowchart of the cache processing performed by the invention for a read operation.
Specific embodiment
The present invention is further elucidated below with reference to the accompanying drawings and specific embodiments.
The invention provides a method for synchronizing the caches of multiple clients with the service side: the service-side storage nodes record the cache status of each client, and when the service side updates data, it notifies the affected clients to discard their local cache entries, thereby accelerating client reads. The cache granularity is the object: an object may be a complete file or a part of a large file. This cache synchronization finally removes the risk of a client reading stale cached data.
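Cutting data into objects of this granularity can be sketched as a fixed-size split; the 4M cut size comes from the embodiment below, and the function and constant names are illustrative:

```python
OBJECT_SIZE = 4 * 1024 * 1024   # 4M per object, configurable per the patent

def split_into_objects(data, size=OBJECT_SIZE):
    """A file smaller than `size` yields a single object; anything
    larger is cut on `size` boundaries, each chunk getting a sequence
    number that can serve as part of its unique object ID."""
    return [(seq, data[i:i + size])
            for seq, i in enumerate(range(0, len(data), size))]
```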
The system architecture of the invention is shown in Fig. 2. The service side of this distributed storage has multiple storage nodes, each augmented with the "caching record module" designed by the invention; there are also multiple clients, each augmented with the invention's "cache synchronization module". Where the service side stores data is decided by the distributed storage system itself and may be any storage node. The flow of the invention intercepts the data read/write path and inserts cache processing in the middle. The method for saving each object's cached data is:
1. When the "cache synchronization module" starts, it sets up a buffer of a certain size.
2. The "cache synchronization module" saves the object data content and object number of each write request. When the buffer is nearly full (for example about 90% in use), a conventional algorithm marks the oldest content as dirty.
3. After the local cache has saved the data, the client sends the object content, the object ID, and its own ID to the server, temporarily marking the cached entry as "transient".
4. On receiving the data, the service side saves the object content to permanent storage and stores the object ID together with the client ID as a record in the "caching record module". The same object ID may have records for several client IDs, since several clients may cache the same object.
5. For the other clients in the records, i.e. those other than the one initiating this write, the server sends a notice that their cached copy of the object ID is invalid, awaiting the results asynchronously.
6. Once all invalidation notices have been sent, without waiting for the results, the client that initiated the write is told to clear the cache entry's "transient" mark, ending this write request.
7. The module awaiting the asynchronous results finishes when they arrive within a certain time; otherwise, if repeated retries still get no answer, the monitor node (or management node) is notified that the client's cache has failed as a whole.
8. A client that receives an object-ID invalidation notice marks the corresponding cached entry as dirty.
9. A periodic client task keeps communicating with the monitor node (or management node); on learning of a whole-cache failure notice, the local client marks all local cache entries as dirty.
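Items 4, 5, and 8 revolve around the caching record module's in-memory map from object ID to the set of clients caching it. A minimal sketch under the assumption of a set-valued dictionary; the class and method names are invented:

```python
from collections import defaultdict

class CachingRecordModule:
    """In-memory records: one object ID may map to several client IDs,
    because several clients may cache the same object (item 4)."""
    def __init__(self):
        self.records = defaultdict(set)      # object_id -> {client_id, ...}

    def update(self, object_id, client_id):
        self.records[object_id].add(client_id)

    def other_holders(self, object_id, writer_id):
        """Item 5: the clients to notify of invalidation are all
        recorded holders except the client that initiated the write."""
        return self.records[object_id] - {writer_id}
```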
The method by which a client reads object data is:
1. The client checks whether the local cache holds the object data and object number, not marked as dirty.
2. If it does, and there is no "transient" mark, the data are valid and are returned directly to the requester; the read-object process ends.
3. If not, a read request is sent to the back-end server.
4. The server obtains the object data from its own storage and adds or updates the object ID/client ID relation record in the "caching record module".
5. The server then returns the data to the client.
6. The client likewise caches the current data locally before finally handing them to the requester; the read-object process ends. When the buffer is nearly full (for example about 90% in use), a conventional algorithm marks the oldest content as dirty.
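Marking the oldest content dirty when the buffer nears capacity is a conventional LRU-style policy; a sketch using an ordered map, where an entry-count capacity stands in for the byte-ratio test and all names are invented:

```python
from collections import OrderedDict

class LocalCache:
    """Client buffer that evicts (marks dirty and drops) its oldest
    entry once the number of cached objects exceeds the capacity."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()              # object_id -> content

    def put(self, object_id, content):
        if object_id in self.entries:
            self.entries.move_to_end(object_id)   # refresh recency
        self.entries[object_id] = content
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)      # oldest entry out

    def get(self, object_id):
        if object_id in self.entries:
            self.entries.move_to_end(object_id)   # a hit refreshes recency
            return self.entries[object_id]
        return None
```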
Here, marking data dirty in the above operations means the entry is invalid and may be overwritten; it effectively deletes the data, but is very fast. Throughout the flow, a client crash merely invalidates that node's cache, without affecting data consistency. If a service-side storage node crashes, or nodes are added or removed, all clients are notified to discard their local cache data, guaranteeing consistency. Loss of data through service-side storage node failure is guarded against by the service side's multi-copy mechanism and is outside the scope of the invention.
It follows that, for read-heavy, write-light applications such as Web services, the whole operation adds some write load on the server but markedly improves read performance and response speed.
In the distributed storage system of the invention, the cache synchronization module is deployed on the clients and the caching record module on the service-side storage nodes; the overall read/write flow is shown in Figs. 3 and 4. The write request flow comprises:
1. The service-side storage nodes store user data in units of objects; an object is at most 4M (configurable) of user data and can be a complete file, part of a large file, or part of a long data block. That is, a file smaller than 4M is a single object; a file larger than 4M is cut into 4M units; data that are not file-based, such as an emulated block device, are likewise cut into 4M units.
2. When the client's "cache synchronization module" starts, it sets up a buffer of a certain size; the buffer may reside in memory or on a fast medium such as SSD without affecting its function.
3. The "cache synchronization module" first judges whether the buffer is nearly full (for example about 90% in use); if so, a conventional algorithm marks the oldest cached object as dirty. It then saves the object content and its number to the local buffer, marking the cached entry "transient" to indicate that the data are not yet finally usable.
4. The client sends the object content, the object ID, and its own client ID.
5. On receiving the data, the service side saves the object content to permanent storage and updates the object ID/client ID relation record in the "caching record module". This record is held in memory and need not be stored permanently.
6. The "caching record module" looks up all records keyed by the object ID; there may be several, for example:
object ID, ID1 of client 1
object ID, ID2 of client 2
This shows the object is cached on multiple clients. The numbering may be generated in various ways, as long as uniqueness is guaranteed.
7. The "caching record module" traverses the records for the object ID; for the other clients, i.e. those other than the initiating one, the server sends object-ID cache-invalidation notices, awaiting the results asynchronously.
8. Once all invalidation notices have been sent, completion of this write request is returned to the client; on receiving the notice, the client clears the entry's "transient" mark and the write ends.
9. The module awaiting the asynchronous results finishes when they arrive within a certain time. Otherwise, if several retries (typically 3) of the cache-invalidation notice still get no answer, the monitor node (or management node) is notified that the client's cache has failed as a whole.
10. A client that receives an object-ID cache-invalidation notice marks the corresponding cached entry as dirty.
11. A periodic client task keeps communicating with the monitor node (or management node); on learning of a whole-cache failure notice, the local client marks all local cache entries as dirty.
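Items 9 to 11 describe the failure path: retried invalidation notices followed by a whole-cache failure report. A sketch using the retry count of 3 from item 9; the `send_notice` callable and the monitor interface are invented, and the asynchronous timeout is abstracted into the callable's return value:

```python
def invalidate_with_retries(send_notice, client_id, object_id, monitor, retries=3):
    """Retry the cache-invalidation notice up to `retries` times (item 9,
    typically 3). `send_notice` returns True when the asynchronous result
    arrives in time; if every attempt fails, the monitor node records a
    whole-cache failure for that client (items 10-11 then run client-side)."""
    for _ in range(retries):
        if send_notice(client_id, object_id):
            return True                          # asynchronous result arrived
    monitor.mark_cache_failed(client_id)         # client-wide cache failure
    return False
```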
The read request flow comprises:
1. When the client's "cache synchronization module" receives a data read request, it first checks whether the local cache holds the data and its number.
2. If it does, and there is no "transient" mark, the local data are returned directly to the requester and the read request ends. This greatly accelerates the read path.
3. If not, a read request is sent to the back-end server.
4. On receiving the request, the server obtains the object data from its own storage and adds or updates the object ID/client ID record in the "caching record module".
5. The server then returns the data to the client.
6. The client's "cache synchronization module" judges whether the buffer is nearly full (for example about 90% in use); if so, the oldest cached entry is removed by a conventional algorithm. The read data are then cached, and finally the object content is returned to the requester; the read request ends.
In summary, the invention provides a method for synchronizing a multi-client read cache with the service side. The method improves the hit rate of the local client cache while guaranteeing data consistency, and finally improves read performance in the distributed storage system.
The foregoing are merely embodiments of the invention and are not intended to limit it. All equivalent changes made within the principles of the invention shall fall within its scope of protection. Matters not elaborated in this description belong to the prior art known to those skilled in the art.

Claims (6)

1. A distributed storage system, characterized by comprising several clients, a metadata node, a monitor node, and several storage nodes connected through a network switch, wherein a cache synchronization module is provided in each of said clients and a caching record module is provided in each of said storage nodes;
storage node: for holding the users' data and directly serving read and write requests from clients;
monitor node: for recording the state of all service-side nodes and clients.
2. A cache synchronization method for the distributed storage system of claim 1, characterized by the following steps: a cache synchronization module is set up in each client and a caching record module on each storage node of the distributed storage service side; read and write operations on data are intercepted and cache processing is inserted in between; data are finally read and written at the client.
3. The cache synchronization method of a distributed storage system according to claim 2, characterized in that the cache processing for a write operation proceeds as follows:
Step 1: the file writing module receives the write request, splits the file into smaller objects, and assigns each object a unique number (ID);
Step 2: the cache synchronization module starts and sets up a buffer of a certain size, then saves the object data content and object number of each write request; it then judges whether the buffer is full: if full, the oldest cached object is evicted; if not, Step 3 is performed, saving the object (number and content) to the local cache and marking it "transient";
Step 3: after the local cache has saved the data, the client sends the object content, the object ID, and its own client ID to the server, keeping the cached entry marked "transient";
Step 4: on receiving the data, the service side saves the object content to permanent storage, then updates the object ID and client ID in the service-side cache records;
Step 5: the cache records are checked for other entries with this object ID; if none exist, the client is notified that the write has completed, the "transient" mark is removed, and the write ends; otherwise Step 6 is performed;
Step 6: the records for the object ID are traversed, the other clients holding it are notified that their cached copies are invalid, and the results are awaited asynchronously; if a result arrives, it is returned and the write ends; otherwise Step 7 is performed;
Step 7: the monitor node records a whole-cache failure for each client that did not return a result; the client's periodic communication task then triggers its cache synchronization module to discard all of that client's cached entries.
4. The cache synchronization method of a distributed storage system according to claim 2, characterized in that the cache processing for a read operation proceeds as follows:
Step 1: the file reading module receives the read request and checks whether the local cache contains the object data; if so, the data are returned to the requester and the read ends; if not, Step 2 is performed;
Step 2: the server receives the read request, fetches the object data from service-side storage, updates the object ID and client ID in its cache records, and returns the data object to the client; the client then checks whether its local cache is full: if full, the oldest cached object is evicted before Step 3; otherwise Step 3 is entered directly;
Step 3: the object data (number and content) are saved to the local cache and the read data are returned; the read ends.
5. The cache synchronization method of a distributed storage system according to claim 3, characterized in that, in Step 2 of the cache processing for a write operation, the buffer is judged full when the space in use exceeds 90 percent, and not full otherwise.
6. The cache synchronization method of a distributed storage system according to claim 4, characterized in that, in Step 2 of the cache processing for a read operation, the cache is judged full when the space in use exceeds 90 percent, and not full otherwise.
CN201710190374.8A 2017-03-28 2017-03-28 A kind of distributed memory system and its synchronization caching method Pending CN106850856A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710190374.8A CN106850856A (en) 2017-03-28 2017-03-28 A kind of distributed memory system and its synchronization caching method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710190374.8A CN106850856A (en) 2017-03-28 2017-03-28 A kind of distributed memory system and its synchronization caching method

Publications (1)

Publication Number Publication Date
CN106850856A true CN106850856A (en) 2017-06-13

Family

ID=59130719

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710190374.8A Pending CN106850856A (en) 2017-03-28 2017-03-28 A kind of distributed memory system and its synchronization caching method

Country Status (1)

Country Link
CN (1) CN106850856A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102136003A (en) * 2011-03-25 2011-07-27 上海交通大学 Large-scale distributed storage system
CN102541983A (en) * 2011-10-25 2012-07-04 无锡城市云计算中心有限公司 Method for synchronously caching by multiple clients in distributed file system
CN104156327A (en) * 2014-08-25 2014-11-19 曙光信息产业股份有限公司 Method for recognizing object power failure in write back mode in distributed file system
US20150026417A1 (en) * 2012-12-28 2015-01-22 Huawei Technologies Co., Ltd. Caching Method for Distributed Storage System, a Lock Server Node, and a Lock Client Node
CN104580437A (en) * 2014-12-30 2015-04-29 创新科存储技术(深圳)有限公司 Cloud storage client and high-efficiency data access method thereof
CN105549905A (en) * 2015-12-09 2016-05-04 上海理工大学 Method for multiple virtual machines to access distributed object storage system

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107608627A (en) * 2017-08-21 2018-01-19 云宏信息科技股份有限公司 A kind of remote data classification storage method, electronic equipment and storage medium
CN107608627B (en) * 2017-08-21 2020-10-02 云宏信息科技股份有限公司 Remote data hierarchical storage method, electronic equipment and storage medium
US11010295B2 (en) 2017-09-05 2021-05-18 International Business Machines Corporation Asynchronous update of metadata tracks in response to a cache hit generated via an i/o operation over a bus interface
WO2019048969A1 (en) * 2017-09-05 2019-03-14 International Business Machines Corporation Asynchronous update of metadata tracks in response to a cache hit generated via an i/o operation over a bus interface
US10565109B2 (en) 2017-09-05 2020-02-18 International Business Machines Corporation Asynchronous update of metadata tracks in response to a cache hit generated via an I/O operation over a bus interface
GB2579754A (en) * 2017-09-05 2020-07-01 Ibm Asynchronous update of metadata tracks in response to a cache hit generated via an I/O operation over a bus interface
GB2579754B (en) * 2017-09-05 2020-12-02 Ibm Asynchronous update of metadata tracks in response to a cache hit generated via an I/O operation over a bus interface
CN107643988A (en) * 2017-09-15 2018-01-30 郑州云海信息技术有限公司 Storage system and storage method with a failover mechanism
CN108762673A (en) * 2018-05-24 2018-11-06 浪潮电子信息产业股份有限公司 Remote data access processing system
CN110187825A (en) * 2018-06-26 2019-08-30 西安奥卡云数据科技有限公司 Hyper-converged multi-copy accelerated storage system
CN109691065A (en) * 2018-08-23 2019-04-26 袁振南 Distributed storage system and data read-write method thereof, storage terminal and storage medium
CN109691065B (en) * 2018-08-23 2021-11-09 袁振南 Distributed storage system and data read-write method thereof, storage terminal and storage medium
WO2021102673A1 (en) * 2019-11-26 2021-06-03 Citrix Systems, Inc. Document storage and management
US11580148B2 (en) 2019-11-26 2023-02-14 Citrix Systems, Inc. Document storage and management
CN112764690A (en) * 2021-02-03 2021-05-07 北京同有飞骥科技股份有限公司 Distributed storage system
CN114051056A (en) * 2022-01-13 2022-02-15 阿里云计算有限公司 Data caching and reading method and data access system

Similar Documents

Publication Publication Date Title
CN106850856A (en) A distributed storage system and its synchronous caching method
US10469577B2 (en) Caching method and system based on cache cluster
CN103885895B (en) Write performance in fault-tolerant cluster storage system
US10268719B2 (en) Granular buffering of metadata changes for journaling file systems
AU2002335503B2 (en) Disk writes in a distributed shared disk system
CN103942252B (en) A kind of method and system for recovering data
US20060136472A1 (en) Achieving cache consistency while allowing concurrent changes to metadata
CN103186554B (en) Distributed data mirror method and storage back end
CN105549905A (en) Method for multiple virtual machines to access distributed object storage system
US20020032671A1 (en) File system and file caching method in the same
CN105635196B (en) A kind of method, system and application server obtaining file data
WO2018137327A1 (en) Data transmission method for host and standby devices, control node, and database system
CN105224255B (en) A kind of storage file management method and device
CN106527974B (en) A kind of method that writing data, equipment and system
CN105701219B (en) A kind of implementation method of distributed caching
JPH0827755B2 (en) How to access data units at high speed
CN103530388A (en) Data processing method for improving performance in a cloud storage system
CN101286127A (en) Multi-branch log storage continuous data protection and restoration method
CN110399348A (en) File deletes method, apparatus, system and computer readable storage medium again
CN111782612A (en) File data edge caching method in cross-domain virtual data space
US10387308B2 (en) Method and apparatus for online reducing caching devices
CN107046575A (en) Cloud storage system and its high-density storage method
CN107329859A (en) Data protection method and storage device
CN100394404C (en) System and method for management of metadata
AU2002248570B2 (en) Managing checkpoint queues in a multiple node system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170613