WO2018153202A1 - Data caching method and apparatus
- Publication number
- WO2018153202A1 (PCT/CN2018/073965)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- cache
- cache device
- target access
- preset
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/0811—Multiuser, multiprocessor or multiprocessing cache systems with multilevel cache hierarchies
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
- G06F12/0238—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
- G06F12/0868—Data transfer between cache memory and other subsystems, e.g. storage devices or host systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0891—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches using clearing, invalidating or resetting means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/12—Replacement control
- G06F12/121—Replacement control using replacement algorithms
- G06F12/126—Replacement control using replacement algorithms with special data handling, e.g. priority of data or instructions, handling errors or pinning
- G06F12/127—Replacement control using replacement algorithms with special data handling, e.g. priority of data or instructions, handling errors or pinning using additional replacement algorithms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0659—Command handling arrangements, e.g. command buffers, queues, command scheduling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
- H04L67/5682—Policies or rules for updating, deleting or replacing the stored data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1016—Performance improvement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/15—Use in a specific computing environment
- G06F2212/154—Networked environment
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/22—Employing cache memory using specific memory technology
- G06F2212/222—Non-volatile memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/26—Using a specific storage system architecture
- G06F2212/261—Storage comprising a plurality of storage devices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/28—Using a specific disk cache architecture
- G06F2212/283—Plural cache memories
- G06F2212/284—Plural cache memories being distributed
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/31—Providing disk cache in a specific location of a storage system
- G06F2212/314—In storage network, e.g. network attached cache
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
- H04L69/40—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
Description
- The present invention relates to the field of data storage technologies, and in particular to a data caching method and a data caching device.
- A distributed storage system is composed of a plurality of storage devices, a cache device, and an input/output (I/O) bus. The storage devices exchange data over the I/O bus, and the decentralized layout of data across the storage devices enables efficient and inexpensive storage. Owing to their strong scalability, distributed storage systems are widely used in data-intensive computing and cloud computing.
- The conventional data caching method applied to a distributed storage system mainly adopts an on-demand loading policy: if the data required by the user is not found in the cache device, the data stored in the storage device is loaded into the cache device to serve the user terminal. However, because the capacity of the cache device is limited, the data loaded into the cache device during this response replaces other data already in the cache. A further problem arises when the newly loaded data is no longer accessed, or is accessed only a few times, while the replaced data still needs to be accessed many times: the newly loaded data then occupies storage resources in the cache device that cannot be fully utilized. In addition, because the cache granularity in the distributed storage mode is large, cache operations on data blocks require a large amount of network bandwidth and storage read/write overhead. Therefore, data caching methods applied to distributed storage systems suffer from low storage resource utilization.
- The present invention provides a data caching method and a data caching device.
- According to an embodiment of the present invention, a data caching method is provided, comprising the steps of: receiving a data request message sent by a user terminal; if it is detected that the cache device does not include the target access data requested by the data request message, sending the target access data in the storage device to the user terminal; extracting parameter information related to the target access data in the storage device and determining whether the parameter information matches a preset parameter condition; and, if the parameter information matches the preset parameter condition, transmitting the target access data to the cache device.
- In some embodiments, the parameter information includes an access count, and the step of transmitting the target access data to the cache device if the parameter information matches the preset parameter condition comprises: transmitting the target access data to the cache device if the access count is greater than or equal to a preset first threshold.
- In some embodiments, the parameter information includes an access count and an access time, and the step of transmitting the target access data to the cache device if the parameter information matches the preset parameter condition comprises: transmitting the target access data to the cache device if the access count is greater than or equal to a preset second threshold and the access time is within a preset period.
- In some embodiments, after the step of transmitting the target access data to the cache device, the method further comprises: detecting a cache occupancy rate of the cache device; and, if the cache occupancy rate is greater than or equal to a preset third threshold, clearing data in the cache device whose access count is less than or equal to a preset fourth threshold and/or transmitting the modified data in the cache device to the storage device.
- In some embodiments, the method further comprises: redundantly backing up the cache information in the memory of the cache device to a persistent storage device of the cache device; and, if a node failure or a system crash is detected in the cache device, restoring the persisted cache information to the cache device.
- In some embodiments, the method further comprises: if it is detected that the cache device includes the target access data requested by the data request message, sending the target access data in the cache device to the user terminal.
- According to another embodiment of the present invention, a computer-readable medium is provided, storing program instructions that cause a computer to perform the data caching method described above.
- According to another embodiment of the present invention, a data caching device is provided, including: a receiving module configured to receive a data request message sent by the user terminal; a sending module configured to send the target access data in the storage device to the user terminal if it is detected that the cache device does not include the target access data requested by the data request message; an extracting module configured to extract parameter information related to the target access data in the storage device and to determine whether the parameter information matches a preset parameter condition; and a transmitting module configured to transmit the target access data to the cache device if the parameter information matches the preset parameter condition.
- In some embodiments, the parameter information includes an access count, and the preset parameter condition is that the access count is greater than or equal to a preset second threshold.
- In some embodiments, the parameter information includes an access count and an access time, and the preset parameter condition is that the access count is greater than or equal to a preset second threshold and the access time is within a preset period.
- In some embodiments, the data caching device further includes: a detecting module configured to detect a cache occupancy rate of the cache device; and a processing module configured to, if the cache occupancy rate is greater than or equal to a preset third threshold, clear data in the cache device whose access count is less than or equal to a preset fourth threshold and/or transmit the modified data in the cache device to the storage device.
- In some embodiments, the data caching device further includes: a backup module configured to redundantly back up cache information in the memory of the cache device to a persistent storage device of the cache device; and a recovery module configured to restore the persisted cache information to the cache device if a node failure or a system crash is detected in the cache device.
- In some embodiments, the sending module is further configured to send the target access data in the cache device to the user terminal if it is detected that the cache device includes the target access data requested by the data request message.
- FIG. 1 is a flow chart of a data caching method in accordance with an embodiment of the present invention.
- FIG. 2 is a schematic diagram of a data caching method according to an embodiment of the present invention.
- FIG. 3 is a schematic diagram of step S102 in a data caching method according to an embodiment of the present invention.
- FIG. 4 is a schematic diagram of step S104 in a data caching method according to an embodiment of the present invention.
- FIG. 5 is a schematic diagram of a data caching device according to an embodiment of the present invention.
- FIG. 6 is a schematic diagram of a data caching device in accordance with another embodiment of the present invention.
- FIG. 7 is a schematic diagram of a data caching device in accordance with another embodiment of the present invention.
- In the following, exemplary embodiments of the present invention are described in detail with reference to the accompanying drawings. As shown in FIG. 1, a data caching method according to an embodiment of the present invention includes the following steps.
- Step S101: Receive a data request message sent by the user terminal. In this step, the user terminal and the server use the data request message to access data. The user terminal may be a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (PDA), a mobile Internet device (MID), a wearable device, or the like. The server uses distributed storage and includes a cache device and a storage device.
- As shown in FIG. 2, the cache device uses devices with faster read/write speeds, such as solid-state drives (SSDs), while the storage device uses devices with slower read/write speeds, such as hard disk drives (HDDs).
- In one example of this embodiment, the data caching method separates cold data (data with a small access count) from hot data (data with a large access count): hot data is stored on the SSD and cold data on the HDD, and a hot spot detection (HOSD) module on the server controls data transmission between the cache device and the storage device. To ensure the stability of data requests between the user terminal and the server, data may be transmitted between the user terminal and the server over a public network, while data is transferred between the cache device and the storage device inside the server over a cluster network, thereby implementing data flow between the cache device and the storage device.
- Step S102: If it is detected that the cache device does not include the target access data requested by the data request message, the target access data in the storage device is sent to the user terminal. In this step, as shown in FIG. 3, the server receives the data request message sent by the Objecter module in the user terminal, determines the requested target access data based on the message, and compares the target access data with the data in the cache device. If the cache device does not include the target access data, the target access data in the storage device is sent to the user terminal, thereby reducing the impact of cache operations on I/O latency. The cache device may include a Filter module, a Promotion module, and an Agent module. The Promotion module is responsible for transmitting the target access data from the storage device to the cache device, and the Agent module is responsible for transmitting dirty data (i.e., modified data) in the cache device to the storage device or clearing cold data (i.e., data with a small access count) from the cache device, thereby improving storage resource utilization and the data access hit rate of the user terminal.
- In one example of this embodiment, step S102 further includes: if it is detected that the cache device includes the target access data requested by the data request message, sending the target access data in the cache device to the user terminal.
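To make the read path of steps S101 and S102 concrete, the following minimal Python sketch (not part of the patent text) serves a request from the cache device on a hit and from the storage device on a miss, while only recording the access for later promotion. The `ReadPath` class, its constructor arguments, and the `get`/`record` methods are hypothetical names, not an API defined by the patent.

```python
# Hypothetical sketch of the read path in steps S101-S102: serve a request
# from the cache device on a hit, otherwise from the storage device, and
# record the access so the promotion logic (steps S103-S104) can run later.

class ReadPath:
    def __init__(self, cache_device, storage_device, access_log):
        self.cache = cache_device        # fast tier, e.g. SSD-backed
        self.storage = storage_device    # slow tier, e.g. HDD-backed
        self.access_log = access_log     # per-object access count / last access time

    def handle_request(self, object_id):
        """Return the requested data to the user terminal."""
        data = self.cache.get(object_id)
        if data is not None:
            return data                      # cache hit: answer directly from the cache device
        data = self.storage.get(object_id)
        self.access_log.record(object_id)    # bookkeeping only; no synchronous promotion
        return data                          # cache miss: answer from the storage device
```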
- Step S103: Extract the parameter information of the target access data in the storage device and determine whether the parameter information matches the preset parameter condition. In this step, the parameter information may include the access count, or both the access count and the access time. If the parameter information is the access count, it is determined whether the access count is greater than or equal to a preset first threshold, where the preset first threshold may be 3, 5, 6, or the like.
- In addition, if the parameter information includes the access count and the access time, it is determined whether the access count of the target access data is greater than or equal to a preset second threshold and whether the access time of the target access data is within a preset period. Here, the preset second threshold may be 3, 5, 6, or the like. In one example of this embodiment, the access time being within the preset period means that the interval between the time the target access data was last accessed and the current time falls within the preset period; in another example, it means that the time the target access data was last accessed falls within a preset time period. The preset period may be, for example, one hour, one day, one week, or the like. By additionally checking the access time of the data, the efficiency of the target access data can be ensured.
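As an illustration of the check in step S103, the sketch below tests whether the parameter information of a piece of target access data matches the preset parameter condition. The concrete threshold values, the `use_access_time` switch, and the function name are assumptions made for this example only.

```python
import time

# Illustrative check of the preset parameter condition in step S103.
# Thresholds and field names are assumptions, not values fixed by the patent.
ACCESS_COUNT_THRESHOLD = 3          # preset first/second threshold, e.g. 3, 5 or 6 accesses
RECENCY_WINDOW_SECONDS = 24 * 3600  # preset period, e.g. one hour, one day or one week

def matches_preset_condition(access_count, last_access_ts, use_access_time=True):
    """Return True if the target access data should be promoted to the cache device."""
    if access_count < ACCESS_COUNT_THRESHOLD:
        return False
    if not use_access_time:
        return True                  # count-only variant (first threshold)
    # count-and-recency variant (second threshold plus preset period)
    return (time.time() - last_access_ts) <= RECENCY_WINDOW_SECONDS
```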
- Step S104: If the parameter information of the target access data matches the preset parameter condition, the target access data is transmitted to the cache device. In this step, the process of determining whether the parameter information matches the preset parameter condition is described in step S103. By transmitting only the target access data that meets the preset parameter condition to the cache device, the data stored in the cache device has high hit potential, which effectively alleviates cache pollution and improves cache utilization.
- In one example of this embodiment, step S104 includes transmitting the target access data stored in the storage device to the cache device if the parameter information of the target access data matches the preset parameter condition.
- In one example of this embodiment, steps S103 and S104 are performed asynchronously with steps S101 and S102. For example, steps S103 and S104 may be executed on a predetermined cycle. Running the cache operation asynchronously avoids its impact on I/O latency.
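A possible way to run this asynchronous, periodic promotion pass is sketched below, reusing the `matches_preset_condition` helper from the previous sketch; the 60-second cycle, the `access_log.snapshot()` bookkeeping interface, and the `get`/`put` methods are hypothetical.

```python
import threading

# Sketch of running the promotion check (steps S103-S104) asynchronously,
# decoupled from the request path of steps S101-S102. The 60-second cycle
# is an illustrative assumption, not a value given in the patent.
def start_periodic_promotion(access_log, storage_device, cache_device, interval_s=60):
    def promote_once():
        for object_id, stats in access_log.snapshot().items():
            if matches_preset_condition(stats.access_count, stats.last_access_ts):
                cache_device.put(object_id, storage_device.get(object_id))

    def loop():
        promote_once()
        threading.Timer(interval_s, loop).start()   # re-arm the timer for the next cycle

    loop()
```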
- In one example of this embodiment, the parameter information includes the access count. If the parameter information of the target access data matches the preset parameter condition, the step of transmitting the target access data to the cache device includes: transmitting the target access data to the cache device if the access count of the target access data is greater than or equal to a preset first threshold, where the first threshold may be 3, 5, 6, or the like. In this example, frequently accessed data is transmitted to the cache device so that the user terminal accesses the cache device directly, while infrequently accessed data remains in the storage device, thereby reducing the overhead of data flow between the cache device and the storage device. Because the data stored in the cache device is all frequently accessed data, cache pollution can be reduced.
- In another example of this embodiment, the parameter information includes both the access count and the access time. If the parameter information of the target access data matches the preset parameter condition, the step of transmitting the target access data to the cache device includes: transmitting the target access data to the cache device if the access count of the target access data is greater than or equal to a preset second threshold and the access time of the target access data is within the preset period. By additionally checking the access time of the data, the efficiency of the target access data can be further ensured.
- In some embodiments, the data caching method further includes: after the step of transmitting the target access data to the cache device, detecting the cache occupancy rate of the cache device; and, if the cache occupancy rate is greater than or equal to a preset third threshold, clearing the data in the cache device whose access count is less than or equal to a preset fourth threshold and/or transmitting the modified data in the cache device to the storage device, where the third threshold may be 80%, 85%, 90%, or the like. Through these steps the cache device keeps spare storage space available for other data accesses. The data whose access count is less than or equal to the preset fourth threshold is cold data, and the modified data in the cache device is dirty data. When replacing data, both the access count and the access time can be considered together to ensure that the data blocks evicted from the cache device have low hit potential (i.e., are unlikely to be accessed again).
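The occupancy-driven cleanup described above might take the following shape. This is a hedged sketch rather than the patent's implementation; the threshold values, the `occupancy()`, `items()`, `put()` and `evict()` methods, and the `dirty`/`access_count` fields are assumed names.

```python
# Illustrative cleanup pass for the cache device: when occupancy reaches the
# third threshold, flush dirty (modified) objects back to the storage device
# and drop cold objects whose access count is at or below the fourth threshold.
OCCUPANCY_THRESHOLD = 0.85   # preset third threshold, e.g. 80%, 85% or 90%
COLD_ACCESS_THRESHOLD = 1    # preset fourth threshold (assumed value)

def cleanup_cache(cache_device, storage_device):
    if cache_device.occupancy() < OCCUPANCY_THRESHOLD:
        return
    for object_id, entry in list(cache_device.items()):
        if entry.dirty:
            storage_device.put(object_id, entry.data)   # write modified data back
            entry.dirty = False
        if entry.access_count <= COLD_ACCESS_THRESHOLD:
            cache_device.evict(object_id)               # free space held by cold data
```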
- In one example, the implementation, shown in FIG. 4, is built mainly from four linked lists: MRU, MFU, MRUG, and MFUG. One end of the MRU linked list is the MRU end and the other end is the LRU end; one end of the MFU linked list is the MFU end and the other end is the LFU end. When data first enters the cache device, it is placed in the MRU queue, a finite sequence ordered by the access time of the data blocks. When new data enters a full MRU queue, the data block at the LRU end (the block that has gone unaccessed the longest) is replaced. If a data block in the MRU queue is accessed a second time before being replaced, it is moved to the MFU end of the MFU queue. The MFU linked list is likewise a finite sequence ordered by access time; the difference is that on every second hit the corresponding data is moved to the MFU head (the MFU end). If new data needs to enter the cache and the number of data blocks in the cache has already reached the previously set threshold, elements are removed from the LRU and LFU ends and the corresponding metadata information is sent to the MRUG queue and the MFUG queue, respectively. The MFUG and MRUG lists do not store data blocks, only the access records of the data blocks: a data block evicted from the MFU linked list is sent to the MFUG linked list and the storage space it occupied is released, while a data block evicted from the MRU linked list is deleted from the MRU linked list and sent to the MRUG linked list. Both the MFUG and MRUG linked lists are first-in, first-out (FIFO) lists with a length threshold x; when a list's length reaches x, its oldest access record is deleted. When a data block that appears in the MRUG or MFUG list is accessed again, it is read from the storage pool and reinserted into the MRU or MFU list. The HOSD module can dynamically adjust the number of elements that the MRU and MFU linked lists should contain according to how many pseudo hits occur in the MRUG or MFUG list.
- The adjustment works as follows: when a pseudo hit occurs in the MRUG list, the MRU list length is increased by 1 and the MFU list length is decreased by 1; when a pseudo hit occurs in the MFUG list, the MFU list length is increased by 1 and the MRU list length is decreased by 1. This keeps the total length of the MRU and MFU lists in the cache constant.
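The four-list structure and the pseudo-hit balancing rule can be sketched as follows, under the simplifying assumptions that data blocks are unit-sized and that the ghost lists are bounded by the cache capacity; class and method names are illustrative, and this is not the patent's reference implementation.

```python
from collections import OrderedDict

# Simplified sketch of the four-list structure (MRU, MFU, MRUG, MFUG) and of the
# adaptive balancing rule: a pseudo hit in MRUG grows the MRU target by 1 and
# shrinks the MFU target by 1, and vice versa for MFUG, so that the combined
# MRU+MFU capacity stays constant. Data blocks are assumed to be unit-sized.
class HosdLists:
    def __init__(self, capacity):
        self.capacity = capacity
        self.mru_target = capacity // 2                      # desired MRU length (MFU gets the rest)
        self.mru, self.mfu = OrderedDict(), OrderedDict()    # hold data blocks
        self.mrug, self.mfug = OrderedDict(), OrderedDict()  # hold access records only

    def _trim_ghost(self, ghost):
        while len(ghost) > self.capacity:         # FIFO list bounded by threshold x
            ghost.popitem(last=False)             # drop the oldest access record

    def on_access(self, block_id, data):
        if block_id in self.mru:                  # second hit: promote to the MFU end
            self.mru.pop(block_id)
            self.mfu[block_id] = data
        elif block_id in self.mfu:                # repeated hit: move to the MFU head
            self.mfu.move_to_end(block_id)
        elif block_id in self.mrug:               # pseudo hit: favour recency
            self.mru_target = min(self.capacity, self.mru_target + 1)
            self.mrug.pop(block_id)
            self.mru[block_id] = data
        elif block_id in self.mfug:               # pseudo hit: favour frequency
            self.mru_target = max(0, self.mru_target - 1)
            self.mfug.pop(block_id)
            self.mfu[block_id] = data
        else:                                     # brand-new block enters at the MRU end
            self.mru[block_id] = data
        self._evict_if_full()

    def _evict_if_full(self):
        while len(self.mru) + len(self.mfu) > self.capacity:
            if len(self.mru) > self.mru_target and self.mru:
                victim, _ = self.mru.popitem(last=False)   # LRU end of the MRU list
                self.mrug[victim] = None
            else:
                victim, _ = self.mfu.popitem(last=False)   # LFU end of the MFU list
                self.mfug[victim] = None
        self._trim_ghost(self.mrug)
        self._trim_ghost(self.mfug)
```

The property this sketch preserves is that every pseudo hit shifts one unit of capacity between the MRU and MFU lists, so their combined length never changes.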
- In some embodiments, the method further comprises: after the step of transmitting the target access data to the cache device, redundantly backing up the cache information in the memory of the cache device to a persistent storage device of the cache device; and, if a node failure or a system crash is detected in the cache device, restoring the persisted cache information to the cache device. In these embodiments, the cache metadata in the memory of the cache device is packaged into an object and backed up at regular intervals. The backup data is written to the persistent storage device of the cache device as a checkpoint by the write logic of the storage device; the checkpoint runs only periodically and does not load the system. When a node failure or system crash in the cache device causes the cache metadata to be lost, the data backed up on the persistent storage device of the cache device is restored to the cache device, so that the system keeps working in the event of a node failure or system crash, ensuring fault tolerance.
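One possible shape of this periodic metadata checkpoint and crash recovery is sketched below; serialization with `pickle`, the checkpoint path, the five-minute interval, and the `snapshot()`/`load()` methods are assumptions for illustration, not details given in the patent.

```python
import pickle
import threading

# Sketch of periodically checkpointing the in-memory cache metadata to the
# cache device's persistent storage, and restoring it after a crash.
# The serialization format and file path are illustrative assumptions.
CHECKPOINT_PATH = "/var/lib/cache/metadata.ckpt"

def checkpoint_metadata(cache_metadata, interval_s=300):
    """Write the cache metadata as one object every interval_s seconds."""
    with open(CHECKPOINT_PATH, "wb") as f:
        pickle.dump(cache_metadata.snapshot(), f)   # package the metadata into one object
    threading.Timer(interval_s, checkpoint_metadata,
                    args=(cache_metadata, interval_s)).start()

def restore_metadata(cache_metadata):
    """Reload the last checkpoint after a node failure or system crash."""
    try:
        with open(CHECKPOINT_PATH, "rb") as f:
            cache_metadata.load(pickle.load(f))
    except FileNotFoundError:
        pass   # no checkpoint yet; start with empty metadata
```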
- With the data caching method according to the embodiments of the present invention, the cache device stores only data that satisfies the preset parameter condition (for example, data with a large access count) and does not let data with a small access count occupy the storage space of the cache device, thereby improving storage resource utilization and the data access hit rate of the user terminal.
- As shown in FIG. 5, the data caching device 500 according to an embodiment of the present invention includes: a receiving module 501 for receiving a data request message sent by a user terminal; a sending module 502 that detects whether the cache device includes the target access data requested by the data request message and, if the cache device does not include it, sends the target access data in the storage device to the user terminal; an extraction module 503 for extracting the parameter information of the target access data in the storage device and determining whether it matches the preset parameter condition; and a transmission module 504 that transmits the target access data to the cache device if the parameter information of the target access data matches the preset parameter condition.
- In some embodiments, the parameter information includes the access count. If the access count of the target access data is greater than or equal to the preset first threshold, the transmission module 504 transmits the target access data to the cache device.
- In some embodiments, the parameter information includes the access count and the access time. If the access count of the target access data is greater than or equal to the preset second threshold and the access time of the target access data is within the preset period, the transmission module 504 transmits the target access data to the cache device.
- In some embodiments, the sending module 502 is further configured to send the target access data in the cache device to the user terminal if it is detected that the cache device includes the target access data requested by the data request message.
- In some embodiments, the data caching device 500 further includes: a detecting module 505 for detecting the cache occupancy rate of the cache device; and a processing module 506 that, if the cache occupancy rate is greater than or equal to the preset third threshold, clears the data in the cache device whose access count is less than or equal to the preset fourth threshold and/or transmits the modified data in the cache device to the storage device.
- In some embodiments, the data caching device 500 further includes: a backup module 507 for redundantly backing up the cache information in the memory of the cache device to the persistent storage device of the cache device; and a recovery module 508 that restores the persisted cache information to the cache device if a node failure or system crash occurs in the cache device.
- In some embodiments, the data caching device 500 is included in the cache device; in that case the cache device may include hardware that performs the functions of the respective modules. In other embodiments, the data caching device 500 is a device that is separate from the cache device and the storage device.
- The data caching device 500 can be used to implement the steps of the data caching method according to the present invention, improving storage resource utilization and the data access hit rate of the user terminal.
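Purely for illustration, the module decomposition of the data caching device 500 could be tied together as in the sketch below, which reuses the `ReadPath` and `matches_preset_condition` sketches given earlier; the class name and the mapping of methods to modules 501-504 are hypothetical.

```python
# Hypothetical outline of the module decomposition of data caching device 500;
# it simply combines the read path and promotion check sketched earlier.
class DataCachingDevice500:
    def __init__(self, cache_device, storage_device, access_log):
        self.read_path = ReadPath(cache_device, storage_device, access_log)
        self.cache, self.storage, self.access_log = cache_device, storage_device, access_log

    def serve(self, object_id):                    # receiving + sending modules (501, 502)
        return self.read_path.handle_request(object_id)

    def promote_candidates(self):                  # extraction + transmission modules (503, 504)
        for object_id, stats in self.access_log.snapshot().items():
            if matches_preset_condition(stats.access_count, stats.last_access_ts):
                self.cache.put(object_id, self.storage.get(object_id))
```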
- According to another embodiment of the present invention, a computer-readable medium storing program instructions is provided; the program instructions cause a computer to perform a data caching method in accordance with the present invention. The computer-readable medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Disclosed in the present invention are a data caching method and apparatus. The data caching method comprises the following steps: receiving a data request message sent by a user terminal; if it is detected that a cache apparatus does not comprise the target access data requested by the data request message, sending the target access data in a storage apparatus to the user terminal; extracting parameter information of the target access data in the storage apparatus and determining whether the parameter information matches a preset parameter condition; and, if the parameter information matches the preset parameter condition, transmitting the target access data to the cache apparatus.
Description
本发明涉及数据存储技术领域,特别涉及数据缓存方法及数据缓存装置。The present invention relates to the field of data storage technologies, and in particular, to a data caching method and a data caching device.
分布式存储系统由多个存储装置、缓存装置和输入/输出(I/O,input/output)总线组成,各个存储装置之间通过I/O总线进行数据传输,并且基于存储装置之间的数据分散布局可以实现高效低廉的数据存储,分布式存储系统因其强大的扩展能力而被广泛应用于密集型计算和云计算领域。A distributed storage system is composed of a plurality of storage devices, a cache device, and an input/output (I/O) bus, and each storage device performs data transmission through an I/O bus, and is based on data between the storage devices. Decentralized layout enables efficient and inexpensive data storage, and distributed storage systems are widely used in intensive computing and cloud computing due to their powerful scalability.
常规的应用于分布式存储系统的数据缓存方法主要是采用按需调入的策略,即,如果检测到缓存装置中没有用户所需要的数据时,则会将存储在存储装置中的数据调入到缓存装置中以响应用户终端的需求。然而,由于缓存装置的容量有限,因此在上述响应过程中调入缓存装置中的数据会替换缓存装置的其他数据,并且还存在如下问题:由于在后续过程中不再访问新调入缓存装置中的数据或者访问次数较少但需要多次访问已被替换的数据,因此新调入的数据会占用缓存装置中的存储资源以致不能充分利用这些存储资源。另外,由于分布式存储模式下的缓存粒度较大,对数据块的缓存操作需要大量的网络带宽和存储读写开销。因此,应用于分布式存储系统的数据缓存方法存在存储资源利用率低的问题。The conventional data caching method applied to a distributed storage system mainly adopts an on-demand policy, that is, if it is detected that there is no data required by the user in the cache device, the data stored in the storage device is loaded. Go to the cache device in response to the needs of the user terminal. However, due to the limited capacity of the cache device, the data transferred into the cache device during the above response process replaces other data of the cache device, and there is also a problem that since the new call-in buffer device is no longer accessed in the subsequent process. The data or access times are small but multiple accesses to the replaced data are required, so the newly loaded data will occupy the storage resources in the cache device so that the storage resources cannot be fully utilized. In addition, due to the large cache size in the distributed storage mode, the buffer operation on the data block requires a large amount of network bandwidth and storage read and write overhead. Therefore, the data caching method applied to the distributed storage system has a problem that the storage resource utilization rate is low.
发明内容Summary of the invention
本发明提供了一种数据缓存方法及数据缓存装置。The invention provides a data buffering method and a data buffering device.
根据本发明的一个实施例,提供一种数据缓存方法,该方法包括如下步骤:接收用户终端发送的数据请求消息;如果检测到缓存 装置不包括所述数据请求消息所请求的目标访问数据,则向所述用户终端发送存储装置中的所述目标访问数据;提取与所述存储装置中的所述目标访问数据相关的参数信息,并判断所述参数信息是否与预设的参数条件匹配;以及如果所述参数信息与预设的参数条件匹配,则向所述缓存装置传输所述目标访问数据。According to an embodiment of the present invention, a data caching method is provided, the method comprising the steps of: receiving a data request message sent by a user terminal; if detecting that the caching device does not include the target access data requested by the data request message, Transmitting, to the user terminal, the target access data in the storage device; extracting parameter information related to the target access data in the storage device, and determining whether the parameter information matches a preset parameter condition; And if the parameter information matches a preset parameter condition, transmitting the target access data to the cache device.
在一些实施方式中,所述参数信息包括访问次数;并且,如果所述参数信息与所述预设的参数条件匹配,则向所述缓存装置传输所述目标访问数据的步骤包括:如果所述访问次数大于或等于预设的第一阈值,则向所述缓存装置传输所述目标访问数据。In some embodiments, the parameter information includes an access number; and, if the parameter information matches the preset parameter condition, transmitting the target access data to the cache device comprises: if The number of accesses is greater than or equal to a preset first threshold, and the target access data is transmitted to the cache device.
在一些实施方式中,所述参数信息包括访问次数和访问时间,并且如果所述参数信息与所述预设的参数条件匹配,则向所述缓存装置传输所述目标访问数据的步骤包括:如果所述访问次数大于或等于预设的第二阈值且所述访问时间在预设周期内,则向所述缓存装置传输所述目标访问数据。In some embodiments, the parameter information includes an access number and an access time, and if the parameter information matches the preset parameter condition, the step of transmitting the target access data to the cache device comprises: if The target access data is transmitted to the cache device when the number of accesses is greater than or equal to a preset second threshold and the access time is within a preset period.
在一些实施方式中,在向所述缓存装置传输所述目标访问数据的步骤之后,所述方法还包括:检测所述缓存装置的缓存占用率;以及如果所述缓存占用率大于或等于预设的第三阈值,则清除所述缓存装置中的访问次数小于或等于预设的第四阈值的数据和/或向所述存储装置传输所述缓存装置中的已修改的数据。In some embodiments, after the step of transmitting the target access data to the cache device, the method further comprises: detecting a cache occupancy rate of the cache device; and if the cache occupancy rate is greater than or equal to a preset And a third threshold, the data of the access times in the cache device being less than or equal to a preset fourth threshold is cleared and/or the modified data in the cache device is transmitted to the storage device.
在一些实施方式中,在接收用户终端发送的数据请求消息的步骤之后,所述方法还包括:将所述缓存装置的内存中的缓存信息冗余备份至所述缓存装置的持久化存储设备;以及如果检测到在所述缓存装置发生节点故障或系统崩溃,则将所述缓存装置中的持久化的缓存信息恢复至所述缓存装置。In some embodiments, after the step of receiving the data request message sent by the user terminal, the method further includes: redundantly backing up the cache information in the memory of the cache device to the persistent storage device of the cache device; And if it detects that a node failure or a system crash occurs in the cache device, restoring cached information in the cache device to the cache device.
在一些实施方式中,在向所述缓存装置传输所述目标访问数据的步骤之后,所述方法还包括:如果检测到缓存装置包括所述数据请求消息所请求的目标访问数据,则向所述用户终端发送所述缓存装置中的所述目标访问数据。In some embodiments, after the step of transmitting the target access data to the cache device, the method further comprising: if detecting that the cache device includes the target access data requested by the data request message, The user terminal transmits the target access data in the cache device.
根据本发明的另一个实施例,提供一种存储有程序指令的计算机可读取介质,所述程序指令使计算机执行上述数据缓存方法。In accordance with another embodiment of the present invention, a computer readable medium storing program instructions for causing a computer to perform the data caching method described above is provided.
根据本发明的另一个实施例,提供一种数据缓存设备,所述数据缓存设备包括:接收模块,配置为接收用户终端发送的数据请求消息;发送模块,配置为如果检测到缓存装置不包括所述数据请求消息所请求的目标访问数据,则向所述用户终端发送存储装置中的所述目标访问数据;提取模块,配置为提取与所述存储装置中的所述目标访问数据相关的参数信息,并判断所述参数信息是否与预设的参数条件匹配;以及传输模块,配置为如果所述参数信息与所述预设的参数条件匹配,则向所述缓存装置传输所述目标访问数据。According to another embodiment of the present invention, a data cache device is provided. The data cache device includes: a receiving module configured to receive a data request message sent by the user terminal; and a sending module configured to detect that the cache device does not include the Transmitting, by the data request message, the target access data, the target access data in the storage device to the user terminal; and the extracting module configured to extract parameter information related to the target access data in the storage device And determining whether the parameter information matches a preset parameter condition; and the transmitting module is configured to transmit the target access data to the cache device if the parameter information matches the preset parameter condition.
在一些实施方式中,所述参数信息包括访问次数;并且所述预设的参数条件为:所述访问次数大于或等于预设的第二阈值。In some embodiments, the parameter information includes a number of accesses; and the preset parameter condition is that the number of accesses is greater than or equal to a preset second threshold.
在一些实施方式中,所述参数信息包括访问次数和访问时间;并且所述预设的参数条件为:所述访问次数大于或等于预设的第二阈值且所述访问时间在预设周期内。In some implementations, the parameter information includes an access number and an access time; and the preset parameter condition is: the access times are greater than or equal to a preset second threshold, and the access time is within a preset period. .
在一些实施方式中,所述数据缓存设备还包括:检测模块,配置为检测所述缓存装置的缓存占有率;以及处理模块,配置为如果所述缓存占用率大于或等于预设的第三阈值,则清除所述缓存装置中的访问次数小于或等于预设的第四阈值的数据和/或向所述存储装置传输所述缓存装置中的已修改的数据。In some embodiments, the data caching device further includes: a detecting module configured to detect a cache occupancy rate of the caching device; and a processing module configured to: if the buffer occupancy rate is greater than or equal to a preset third threshold And clearing data in the cache device that is less than or equal to a preset fourth threshold and/or transmitting the modified data in the cache device to the storage device.
在一些实施方式中,所述数据缓存设备还包括:备份模块,配置为将所述缓存装置的内存中的缓存信息冗余备份至所述缓存装置的持久化存储设备;以及恢复模块,配置为如果检测到在所述缓存装置中发生节点故障或系统崩溃,则将所述缓存装置中的持久化的缓存信息恢复至所述缓存装置。In some embodiments, the data caching device further includes: a backup module configured to redundantly back up cache information in the memory of the cache device to the persistent storage device of the cache device; and a recovery module configured to If it is detected that a node failure or a system crash occurs in the cache device, the cached information in the cache device is restored to the cache device.
在一些实施方式中,所述发送模块还配置为:如果检测到所述缓存装置包括所述数据请求消息所请求的所述目标访问数据,则向所述用户终端发送所述缓存装置中的所述目标访问数据。In some embodiments, the sending module is further configured to: if the cache device is detected to include the target access data requested by the data request message, send the location in the cache device to the user terminal The target access data.
图1为根据本发明的实施例的数据缓存方法的流程图;1 is a flow chart of a data caching method in accordance with an embodiment of the present invention;
图2为根据本发明的实施例的数据缓存方法的示意图;2 is a schematic diagram of a data caching method according to an embodiment of the present invention;
图3为根据本发明的实施例的数据缓存方法中的步骤S102的示意图;FIG. 3 is a schematic diagram of step S102 in a data caching method according to an embodiment of the present invention; FIG.
图4为根据本发明的实施例的数据缓存方法中的步骤S104的示意图;4 is a schematic diagram of step S104 in a data caching method according to an embodiment of the present invention;
图5为根据本发明的实施例的数据缓存设备的示意图;FIG. 5 is a schematic diagram of a data caching device according to an embodiment of the present invention; FIG.
图6为根据本发明的另一个实施例的数据缓存设备的示意图;6 is a schematic diagram of a data caching device in accordance with another embodiment of the present invention;
以及as well as
图7为根据本发明的另一个实施例的数据缓存设备的示意图。7 is a schematic diagram of a data caching device in accordance with another embodiment of the present invention.
在下文中,将参考附图对本发明的示例性实施例进行详细描述。如图1所示,根据本发明的实施例的数据缓存方法包括以下步骤。步骤S101:接收用户终端发送的数据请求消息。该步骤中,用户终端与服务器之间利用数据请求消息来进行数据的访问,其中,用户终端可以是手机、平板电脑(Tablet Personal Computer)、膝上型电脑(Laptop Computer)、个人数字助理(Personal Digital Assistant,简称PDA)、移动上网装置(Mobile Internet Device,MID)或可穿戴式设备(Wearable Device)等。服务器的存储方式为分布式存储,并包括缓存装置和存储装置。Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings. As shown in FIG. 1, a data caching method according to an embodiment of the present invention includes the following steps. Step S101: Receive a data request message sent by the user terminal. In this step, the user terminal and the server use the data request message to access the data, wherein the user terminal may be a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (Personal) Digital Assistant (PDA), Mobile Internet Device (MID) or Wearable Device. The storage mode of the server is distributed storage, and includes a cache device and a storage device.
如图2所示,缓存装置使用读写速度较快的设备,例如固态驱动器(Solid State Drives,SSD),而存储装置使用读写速度较慢的设备,例如硬盘驱动器(Hard Disk Drive,HDD)。As shown in Figure 2, the cache device uses devices with faster read and write speeds, such as Solid State Drives (SSD), while the storage device uses devices with slower read and write speeds, such as Hard Disk Drive (HDD). .
根据本实施例的一个实例的数据缓存方法是将冷数据(访问次数少的数据)和热数据(访问次数多的数据)进行分离,即,将热数据存储在SSD中而将冷数据存储在HDD中,并基于服务器的热点检测(Hot Spot Detection,HOSD)模块进行缓存装置与存储装置之间的数据传输的控制。需要说明的是,为了确保用户终端与服务器之间的数据请求的稳定性,可以通过外部网(public network)进行用户终端与服务器之间的数据传输,并通过集群网(cluster network)进行服务器内部的缓存装置与存储装置之间的数据传输, 以实现缓存装置和存储装置之间的数据流动。The data caching method according to an example of the present embodiment separates cold data (data having a small number of accesses) and hot data (data having a large number of accesses), that is, storing the hot data in the SSD and storing the cold data in the In the HDD, based on a server's Hot Spot Detection (HOSD) module, control of data transmission between the cache device and the storage device is performed. It should be noted that, in order to ensure the stability of the data request between the user terminal and the server, data transmission between the user terminal and the server may be performed through a public network, and the server is internally configured through a cluster network. The data transfer between the cache device and the storage device to implement data flow between the cache device and the storage device.
步骤S102:如果检测到缓存装置不包括数据请求消息所请求的目标访问数据,则向用户终端发送存储装置中的目标访问数据。在该步骤中,如图3所示,服务器(未示出)接收用户终端中的Objecter模块发送的数据请求消息,并基于该数据请求消息确定所请求的目标访问数据,然后将该目标访问数据与缓存装置中的数据进行对比。如果缓存装置不包括目标访问数据,则向用户终端发送存储装置中的目标访问数据,从而减少缓存操作对I/O延时的影响。缓存装置中可包括Filter模块、Promotion模块和Agent模块,Promotion模块负责将目标访问数据从存储装置中传输至缓存装置,Agent模块负责将缓存装置中的脏数据(即被修改的数据)传输至存储装置或清除缓存装置中的冷数据(即访问次数少的数据),从而提高存储资源的利用率和用户终端的数据访问命中率。Step S102: If it is detected that the cache device does not include the target access data requested by the data request message, the target access data in the storage device is sent to the user terminal. In this step, as shown in FIG. 3, the server (not shown) receives the data request message sent by the Objecter module in the user terminal, and determines the requested target access data based on the data request message, and then accesses the target access data. Compare with the data in the cache device. If the cache device does not include the target access data, the target access data in the storage device is transmitted to the user terminal, thereby reducing the impact of the cache operation on the I/O delay. The cache device may include a Filter module, a Promotion module, and an Agent module. The Promotion module is responsible for transmitting the target access data from the storage device to the cache device, and the Agent module is responsible for transmitting the dirty data (ie, the modified data) in the cache device to the storage device. The device or the cold data in the cache device (ie, the data with less access times) is used, thereby improving the utilization of the storage resource and the data access hit rate of the user terminal.
在本实施例的一个实例中,步骤S102还包括:如果检测到缓存装置包括所述数据请求消息所请求的目标访问数据,则向所述用户终端发送所述缓存装置中的所述目标访问数据。In an example of the embodiment, step S102 further includes: if it is detected that the cache device includes the target access data requested by the data request message, sending the target access data in the cache device to the user terminal. .
步骤S103:提取存储装置中的目标访问数据的参数信息,并判断目标访问数据的参数信息是否与预设的参数条件匹配。在该步骤中,参数信息可以包括访问次数,也可以包括访问次数和访问时间两者。如果参数信息为访问次数,则判断访问次数是否大于或等于预设的第一阈值,其中,预设的第一阈值可以是3次、5次、6次等。Step S103: Extract parameter information of the target access data in the storage device, and determine whether the parameter information of the target access data matches the preset parameter condition. In this step, the parameter information may include the number of accesses, and may also include both the number of accesses and the access time. If the parameter information is the number of accesses, it is determined whether the number of accesses is greater than or equal to a preset first threshold, wherein the preset first threshold may be 3, 5, 6, or the like.
此外,如果参数信息包括访问次数和访问时间,则判断目标访问数据的访问次数是否大于或等于预设的第二阈值且目标访问数据的访问时间是否在预设周期内。这里,预设的第二阈值可以是3次、5次、6次等。在本实施例的一个实例中,目标访问数据的访问时间在预设周期内,表示目标访问数据最近一次被访问的时间与当前时间之间的时间间隔在预设周期内。在本实施例的另一个实例中,目标访问数据的访问时间在预设时间周期内,可表示目标访问数据最近一次被访问的时间落在预设时间周期内。所述预设周期例如可以是一个小时、一天、一周等。通过增加对数据的访问时间的判断, 可以确保目标访问数据的高效性。In addition, if the parameter information includes the number of accesses and the access time, it is determined whether the number of accesses of the target access data is greater than or equal to a preset second threshold and whether the access time of the target access data is within a preset period. Here, the preset second threshold may be 3 times, 5 times, 6 times, or the like. In an example of the embodiment, the access time of the target access data is within a preset period, and the time interval between the time when the target access data was last accessed and the current time is within a preset period. In another example of the embodiment, the access time of the target access data is within a preset time period, and may indicate that the time when the target access data was last accessed falls within a preset time period. The preset period may be, for example, one hour, one day, one week, or the like. By increasing the judgment of the access time of the data, it is possible to ensure the efficiency of the target access data.
步骤S104:如果目标访问数据的参数信息与预设的参数条件匹配,则向缓存装置传输目标访问数据。该步骤中,目标访问数据的参数信息与预设的参数条件匹配的判断过程参见步骤S103。通过将符合预设的参数条件的目标访问数据传输至缓存装置以使存储在缓存装置中的数据为高命中潜力的数据,有效地解决了缓存装置的缓存污染的问题,并提高了缓存的利用率。Step S104: If the parameter information of the target access data matches the preset parameter condition, the target access data is transmitted to the cache device. In this step, the process of determining the parameter information of the target access data and the preset parameter condition is referred to step S103. By transmitting the target access data conforming to the preset parameter condition to the cache device to make the data stored in the cache device high-potential potential data, the problem of cache pollution of the cache device is effectively solved, and the use of the cache is improved. rate.
在本实施例的一个实例中,步骤S104包括:如果目标访问数据的参数信息与预设的参数条件匹配,则将存储在存储装置中的目标访问数据传输至缓存装置。In an example of the embodiment, step S104 includes: if the parameter information of the target access data matches the preset parameter condition, transmitting the target access data stored in the storage device to the cache device.
在本实施例的一个实例中,步骤S103和S104是与步骤S101和S102异步进行的。例如,可以按预定的周期执行步骤S103和步骤S104。这样让缓存操作异步进行,避免了缓存操作对I/O延迟的影响。In an example of the embodiment, steps S103 and S104 are performed asynchronously with steps S101 and S102. For example, step S103 and step S104 can be performed in a predetermined cycle. This allows the cache operation to occur asynchronously, avoiding the impact of cache operations on I/O latency.
在本实施例的一个实例中,参数信息包括访问次数。如果目标访问数据的参数信息与预设的参数条件匹配,则向缓存装置传输目标访问数据的步骤包括:如果目标访问数据的访问次数大于或者等于预设的第一阈值,则向缓存装置传输目标访问数据,其中,第一阈值可以是3次、5次、6次等。在该实例中,通过上述步骤可以把访问频繁的数据传输至缓存装置以便于用户终端直接访问缓存装置,而让访问不频繁的数据继续留在存储装置中,从而可以减少缓存装置与存储装置之间的数据流动的开销。由于存储在缓存装置中的数据都是访问频繁的数据,因此可以减少缓存污染。In an example of this embodiment, the parameter information includes the number of accesses. If the parameter information of the target access data matches the preset parameter condition, the step of transmitting the target access data to the cache device includes: if the number of accesses of the target access data is greater than or equal to a preset first threshold, transmitting the target to the cache device The data is accessed, wherein the first threshold may be 3, 5, 6 or the like. In this example, through the above steps, the frequently accessed data can be transmitted to the cache device so that the user terminal directly accesses the cache device, and the infrequently accessed data remains in the storage device, thereby reducing the cache device and the storage device. The overhead of data flow between. Since the data stored in the cache device is frequently accessed data, the cache pollution can be reduced.
在本实施例的另一个实例中,参数信息包括访问次数和访问时间两者。如果目标访问数据的参数信息与预设的参数条件匹配,则向缓存装置传输目标访问数据的步骤包括:如果目标访问数据的访问次数大于或者等于预设的第二阈值且目标访问数据的访问时间在预设周期内,则向缓存装置传输目标访问数据。通过增加对数据的访问时间的判断,可以进一步确保目标访问数据的高效性。In another example of this embodiment, the parameter information includes both the number of accesses and the access time. If the parameter information of the target access data matches the preset parameter condition, the step of transmitting the target access data to the cache device includes: if the number of accesses of the target access data is greater than or equal to a preset second threshold and the access time of the target access data During the preset period, the target access data is transmitted to the cache device. By increasing the judgment of the access time of the data, it is possible to further ensure the efficiency of the target access data.
在一些实施例中,数据缓存方法还包括:在向缓存装置传输目 标访问数据的步骤之后,检测缓存装置的缓存占用率;以及如果缓存占用率大于或等于预设的第三阈值,则清除缓存装置中的访问次数小于或等于预设的第四阈值的数据和/或向存储装置传输缓存装置中的已修改的数据,其中,第三阈值可以是80%、85%、90%等。在该实施例中,通过上述步骤可以使缓存装置具有多余的存储空间以供其他数据访问。访问次数小于或等于预设的第四阈值的数据为冷数据,而缓存装置中的已修改的数据为脏数据。在进行数据替换时,可以综合考虑数据的访问次数和访问时间,以保证从缓存装置中替换出去的数据块都具有较低的命中潜力(即,不太可能被再次访问)。In some embodiments, the data caching method further includes: detecting a cache occupancy rate of the cache device after the step of transmitting the target access data to the cache device; and clearing the cache if the cache occupancy rate is greater than or equal to a preset third threshold value The number of accesses in the device is less than or equal to a predetermined fourth threshold and/or the modified data in the cache device is transmitted to the storage device, wherein the third threshold may be 80%, 85%, 90%, or the like. In this embodiment, the cache device can have redundant storage space for other data access through the above steps. The data whose access count is less than or equal to the preset fourth threshold is cold data, and the modified data in the cache device is dirty data. When data replacement is performed, the number of accesses and access times of the data can be comprehensively considered to ensure that the data blocks replaced from the cache device have a low hit potential (ie, are unlikely to be accessed again).
其实现过程如图4所示,主要由四个链表构成,分别是MRU,MFU,MRUG和MFUG。其中,MRU链表的队列一端为MRU端,另一端为LRU端,MFU链表的队列一端为MFU端,另一端为LFU端。当数据刚进入缓存装置时,先将数据放入到MRU队列中,MRU队列是依据数据块的访问时间进行排序的一个有限序列。当一个新的数据再进入MRU队列时,MRU队列的LRU端的数据块(即,时间最久未被访问的数据块)将被替换出去。如果MRU队列中的某个数据块在被替换之前被二次访问,那么将该数据放入到MFU队列中的MFU端。The implementation process is shown in Figure 4. It is mainly composed of four linked lists, namely MRU, MFU, MRUG and MFUG. One end of the MRU linked list is the MRU end, and the other end is the LRU end. The MFU end of the MFU linked list is the MFU end and the other end is the LFU end. When the data just enters the cache device, the data is first put into the MRU queue, and the MRU queue is a finite sequence sorted according to the access time of the data block. When a new data enters the MRU queue again, the data block at the LRU end of the MRU queue (ie, the block of data that has not been accessed for the longest time) will be replaced. If a data block in the MRU queue is accessed twice before being replaced, the data is placed into the MFU end of the MFU queue.
MFU链表也是根据数据的访问时间进行排序的一个有限序列。所不同的是,每发生一次二次命中,都是把MFU链表中的对应数据放到MFU头部(MFU端)。如果有数据需要进入缓存中,而此时缓存中的数据块数目已经到了之前设定的阈值,则会从LRU和LFU端删除元素,并将对应元数据信息分别送入MFUG队列和MRUG队列。The MFU linked list is also a finite sequence that is sorted according to the access time of the data. The difference is that each time a second hit occurs, the corresponding data in the MFU linked list is placed in the MFU header (MFU end). If there is data to enter the cache, and the number of data blocks in the cache has reached the previously set threshold, the elements are deleted from the LRU and LFU, and the corresponding metadata information is sent to the MFUG queue and the MRUG queue respectively.
MFUG和MRUG并不存储数据块,只存储数据块的访问记录。将MFU链表中的数据块送入MFUG链表,同时释放该数据块所占用的存储空间。如果要释放的数据块在MRU链表中,则将该数据块从MRU链表中删除,并送入MRUG链表。The MFUG and MRUG do not store data blocks, only the access records of the data blocks. The data block in the MFU linked list is sent to the MFUG linked list, and the storage space occupied by the data block is released. If the data block to be released is in the MRU linked list, the data block is deleted from the MRU linked list and sent to the MRUG linked list.
MFUG和MRUG链表均为先进先出(FIFO)的链表,其长度为阈值x。当链表长度增大到等于x时,将链表中最久的访问记录删除。 当再次访问该数据块的时,如果数据块在MRUG或MFUG链表中,则从存储池读取该数据库,并重新将该数据块插入到MRU或MFU。HOSD模块可以根据在MRUG或MFUG链表中发生伪命中次数的多少来动态地调整MRU和MFU这两个链表应包含的元素的个数。调整方法如下:当在MRUG链表中发生1次伪命中时,则将MRU链表长度增加1,并将MFU链表长度减小1。当在MFUG链表中发生1次伪命中,则将MFU链表长度增加1,并将MRU链表长度减小1。这样能够确保缓存中的MRU和MFU链表的总长度保持恒定。Both the MFUG and MRUG linked lists are first in, first out (FIFO) linked lists with a threshold of x. When the length of the linked list increases to equal x, the oldest access record in the linked list is deleted. When the data block is accessed again, if the data block is in the MRUG or MFUG linked list, the database is read from the storage pool and the data block is reinserted into the MRU or MFU. The HOSD module can dynamically adjust the number of elements that should be included in the MRU and MFU linked lists based on how many false hits occur in the MRUG or MFUG list. The adjustment method is as follows: When a false hit occurs in the MRUG linked list, the length of the MRU linked list is increased by 1, and the length of the MFU linked list is decreased by 1. When a pseudo hit occurs in the MFUG list, the MFU list length is increased by one, and the MRU list length is decreased by one. This ensures that the total length of the MRU and MFU linked lists in the cache remains constant.
In some embodiments, the method further includes the following steps: after the step of transmitting the target access data to the cache device, redundantly backing up the cache information in the memory of the cache device to the persistent storage device of the cache device; and, if a node failure or system crash is detected in the cache device, restoring the persisted cache information to the cache device. In this embodiment, the cache metadata in the memory of the cache device is periodically packaged into an object and backed up. The backup data is written to the persistent storage device of the cache device as a checkpoint by the write logic of the storage device; the checkpoint runs only periodically and imposes no load on the system. When the loss of cache metadata caused by a node failure or system crash in the cache device is detected, the data backed up on the persistent storage device of the cache device is restored to the cache device, so that the system continues to operate normally after a node failure or system crash, thereby ensuring fault tolerance.
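A rough sketch of such periodic checkpointing and recovery is shown below; the serialization format, file path, and interval are assumptions for illustration, not details of the disclosed write logic.

```python
import os
import pickle
import threading

CHECKPOINT_PATH = "/var/cache/hosd_metadata.ckpt"   # assumed location on the persistent storage device
CHECKPOINT_INTERVAL_S = 60                          # assumed interval; checkpoints run only periodically

def checkpoint_metadata(cache_metadata):
    """Package the in-memory cache metadata into one object and persist it atomically."""
    tmp = CHECKPOINT_PATH + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(cache_metadata, f)
    os.replace(tmp, CHECKPOINT_PATH)                # atomic rename: either the old or the new checkpoint survives

def start_checkpointing(get_metadata):
    """Schedule checkpoints at a fixed interval without blocking the caller."""
    def run():
        checkpoint_metadata(get_metadata())
        threading.Timer(CHECKPOINT_INTERVAL_S, run).start()
    run()

def recover_metadata():
    """After a node failure or system crash, reload the persisted metadata for the cache device."""
    with open(CHECKPOINT_PATH, "rb") as f:
        return pickle.load(f)
```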
With the data caching method according to embodiments of the present invention, the cache device stores only data that satisfies the preset parameter condition (for example, frequently accessed data), and rarely accessed data is not allowed to occupy the storage space of the cache device, which improves the utilization of storage resources and the data access hit rate of the user terminal.
As shown in FIG. 5, a data caching device 500 according to an embodiment of the present invention includes: a receiving module 501 configured to receive a data request message sent by a user terminal; a sending module 502 that detects whether the cache device includes the target access data requested by the data request message and, if the cache device does not include the target access data, sends the target access data in the storage device to the user terminal; an extraction module 503 configured to extract parameter information of the target access data in the storage device and determine whether the parameter information matches a preset parameter condition; and a transmission module 504 that transmits the target access data to the cache device if the parameter information of the target access data matches the preset parameter condition.
In some embodiments, the parameter information includes an access count. If the access count of the target access data is greater than or equal to a preset first threshold, the transmission module 504 transmits the target access data to the cache device.
In some embodiments, the parameter information includes an access count and an access time. If the access count of the target access data is greater than or equal to a preset second threshold and the access time of the target access data falls within a preset period, the transmission module 504 transmits the target access data to the cache device.
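A minimal sketch of the two promotion rules just described; the function names and the use of UNIX timestamps for the access time are assumptions.

```python
import time

def matches_first_rule(access_count, first_threshold):
    # First embodiment: promote once the access count reaches the preset first threshold.
    return access_count >= first_threshold

def matches_second_rule(access_count, last_access_ts, second_threshold, period_s):
    # Second embodiment: enough accesses AND the last access falls within the preset period.
    return access_count >= second_threshold and (time.time() - last_access_ts) <= period_s
```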
In some embodiments, the sending module 502 is further configured to: if it is detected that the cache device includes the target access data requested by the data request message, send the target access data in the cache device to the user terminal.
In some embodiments, as shown in FIG. 6, the data caching device 500 further includes: a detection module 505 configured to detect the cache occupancy rate of the cache device; and a processing module 506 that, if the cache occupancy rate is greater than or equal to a preset third threshold, clears data in the cache device whose access count is less than or equal to a preset fourth threshold and/or transmits modified data in the cache device to the storage device.
In some embodiments, as shown in FIG. 7, the data caching device 500 further includes: a backup module 507 configured to redundantly back up the cache information in the memory of the cache device to the persistent storage device of the cache device; and a recovery module 508 that restores the persisted cache information to the cache device if a node failure or system crash is detected in the cache device.
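Taken together, the modules of FIGS. 5 to 7 can be pictured as one small composition; the sketch below is only illustrative, the collaborator objects (cache, storage, params, request) are assumed, and each module is reduced to a single method.

```python
class DataCachingDevice:
    """Sketch of device 500 with modules 501-508 reduced to methods (assumed interfaces)."""

    def __init__(self, cache, storage, params):
        self.cache, self.storage, self.params = cache, storage, params

    def handle_request(self, request):
        key = request.key
        if self.cache.contains(key):                            # sending module 502, cache hit
            return self.cache.read(key)
        data = self.storage.read(key)                           # sending module 502, cache miss
        info = self.storage.parameter_info(key)                 # extraction module 503
        if info.access_count >= self.params.first_threshold:    # transmission module 504
            self.cache.write(key, data)
        return data

    def maintain(self):
        # detection module 505 and processing module 506
        if self.cache.occupancy() >= self.params.third_threshold:
            self.cache.evict_cold(self.params.fourth_threshold)
            self.storage.write_back(self.cache.drain_dirty())

    def checkpoint(self):
        # backup module 507; the recovery module 508 reloads this state after a crash
        self.cache.persist_metadata()
```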
In some embodiments, the data caching device 500 is included in the cache device. In this case, the cache device may include hardware that performs the functions of the respective modules.
In some embodiments, the data caching device 500 is a device independent of the cache device and the storage device.
It should be noted that the data caching device 500 can be used to implement the steps of the data caching method according to the present invention, and it improves the utilization of storage resources and the data access hit rate of the user terminal.
Those of ordinary skill in the art will appreciate that all or part of the steps and/or modules of the above embodiments may be implemented by hardware associated with program instructions, and that the program instructions may be stored in a computer-readable medium. The program instructions cause a computer to perform the data caching method according to the present invention. The computer-readable medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Various changes and modifications may be made to the present invention by those of ordinary skill in the art without departing from the broader spirit and scope of the invention. Accordingly, the specification and drawings are to be regarded as illustrative rather than restrictive.
Claims (13)
1. A data caching method, comprising the following steps:
receiving a data request message sent by a user terminal;
if it is detected that a cache device does not include target access data requested by the data request message, sending the target access data in a storage device to the user terminal;
extracting parameter information related to the target access data in the storage device, and determining whether the parameter information matches a preset parameter condition; and
if the parameter information matches the preset parameter condition, transmitting the target access data to the cache device.
2. The method according to claim 1, wherein
the parameter information comprises an access count, and
the step of transmitting the target access data to the cache device if the parameter information matches the preset parameter condition comprises: if the access count is greater than or equal to a preset first threshold, transmitting the target access data to the cache device.
3. The method according to claim 1, wherein
the parameter information comprises an access count and an access time, and
the step of transmitting the target access data to the cache device if the parameter information matches the preset parameter condition comprises: if the access count is greater than or equal to a preset second threshold and the access time is within a preset period, transmitting the target access data to the cache device.
4. The method according to any one of claims 1 to 3, wherein, after the step of transmitting the target access data to the cache device, the method further comprises:
detecting a cache occupancy rate of the cache device; and
if the cache occupancy rate is greater than or equal to a preset third threshold, clearing data in the cache device whose access count is less than or equal to a preset fourth threshold and/or transmitting modified data in the cache device to the storage device.
5. The method according to any one of claims 1 to 3, wherein, after the step of receiving the data request message sent by the user terminal, the method further comprises:
redundantly backing up cache information in a memory of the cache device to a persistent storage device of the cache device; and
if a node failure or system crash is detected in the cache device, restoring the persisted cache information to the cache device.
6. The method according to any one of claims 1 to 3, wherein, after the step of transmitting the target access data to the cache device, the method further comprises:
if it is detected that the cache device includes the target access data requested by the data request message, sending the target access data in the cache device to the user terminal.
7. A data caching device, comprising:
a receiving module configured to receive a data request message sent by a user terminal;
a sending module configured to, if it is detected that a cache device does not include target access data requested by the data request message, send the target access data in a storage device to the user terminal;
an extraction module configured to extract parameter information related to the target access data in the storage device and determine whether the parameter information matches a preset parameter condition; and
a transmission module configured to transmit the target access data to the cache device if the parameter information matches the preset parameter condition.
8. The data caching device according to claim 7, wherein
the parameter information comprises an access count; and
the preset parameter condition is that the access count is greater than or equal to a preset second threshold.
9. The data caching device according to claim 7, wherein
the parameter information comprises an access count and an access time; and
the preset parameter condition is that the access count is greater than or equal to a preset second threshold and the access time is within a preset period.
10. The data caching device according to any one of claims 7 to 9, further comprising:
a detection module configured to detect a cache occupancy rate of the cache device; and
a processing module configured to, if the cache occupancy rate is greater than or equal to a preset third threshold, clear data in the cache device whose access count is less than or equal to a preset fourth threshold and/or transmit modified data in the cache device to the storage device.
11. The data caching device according to any one of claims 7 to 9, further comprising:
a backup module configured to redundantly back up cache information in a memory of the cache device to a persistent storage device of the cache device; and
a recovery module configured to restore the persisted cache information to the cache device if a node failure or system crash is detected in the cache device.
12. The data caching device according to any one of claims 7 to 9, wherein the sending module is further configured to: if it is detected that the cache device includes the target access data requested by the data request message, send the target access data in the cache device to the user terminal.
13. A computer-readable medium storing program instructions, the program instructions causing a computer to perform the method according to any one of claims 1 to 6.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/487,817 US11226898B2 (en) | 2017-02-21 | 2018-01-24 | Data caching method and apparatus |
EP18757536.0A EP3588913B1 (en) | 2017-02-21 | 2018-01-24 | Data caching method, apparatus and computer readable medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710091829.0A CN108459821B (en) | 2017-02-21 | 2017-02-21 | Data caching method and device |
CN201710091829.0 | 2017-02-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018153202A1 (en) | 2018-08-30 |
Family
ID=63228886
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/073965 WO2018153202A1 (en) | 2017-02-21 | 2018-01-24 | Data caching method and apparatus |
Country Status (4)
Country | Link |
---|---|
US (1) | US11226898B2 (en) |
EP (1) | EP3588913B1 (en) |
CN (1) | CN108459821B (en) |
WO (1) | WO2018153202A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110968562B (en) * | 2019-11-28 | 2023-05-12 | 国网上海市电力公司 | Cache self-adaptive adjustment method and equipment based on ZFS file system |
CN111752902A (en) * | 2020-06-05 | 2020-10-09 | 江苏任务网络科技有限公司 | Dynamic hot data caching method |
CN113010455B (en) * | 2021-03-18 | 2024-09-03 | 北京金山云网络技术有限公司 | Data processing method and device and electronic equipment |
CN114422807B (en) * | 2022-03-28 | 2022-10-21 | 麒麟软件有限公司 | Transmission optimization method based on Spice protocol |
CN115334158A (en) * | 2022-07-29 | 2022-11-11 | 重庆蚂蚁消费金融有限公司 | Cache management method and device, storage medium and electronic equipment |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2004104838A1 (en) * | 2003-05-21 | 2004-12-02 | Fujitsu Limited | Data access response system, storage system, client device, cache device, and data access response system access method |
WO2009121413A1 (en) * | 2008-04-03 | 2009-10-08 | Telefonaktiebolaget Lm Ericsson (Publ) | Apparatus and method for providing access to internet resources in a wireless communications network |
US8447948B1 (en) * | 2008-04-25 | 2013-05-21 | Amazon Technologies, Inc | Dynamic selective cache compression |
CN101562543B (en) * | 2009-05-25 | 2013-07-31 | 阿里巴巴集团控股有限公司 | Cache data processing method and processing system and device thereof |
US20110113200A1 (en) * | 2009-11-10 | 2011-05-12 | Jaideep Moses | Methods and apparatuses for controlling cache occupancy rates |
US9003104B2 (en) * | 2011-02-15 | 2015-04-07 | Intelligent Intellectual Property Holdings 2 Llc | Systems and methods for a file-level cache |
US9779029B2 (en) * | 2012-11-06 | 2017-10-03 | Facebook, Inc. | Cache replacement policy for data with strong temporal locality |
CN104580437A (en) * | 2014-12-30 | 2015-04-29 | 创新科存储技术(深圳)有限公司 | Cloud storage client and high-efficiency data access method thereof |
US10678578B2 (en) * | 2016-06-30 | 2020-06-09 | Microsoft Technology Licensing, Llc | Systems and methods for live migration of a virtual machine based on heat map and access pattern |
2017
- 2017-02-21 CN CN201710091829.0A patent/CN108459821B/en active Active
2018
- 2018-01-24 EP EP18757536.0A patent/EP3588913B1/en active Active
- 2018-01-24 WO PCT/CN2018/073965 patent/WO2018153202A1/en unknown
- 2018-01-24 US US16/487,817 patent/US11226898B2/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1655130A (en) * | 2004-02-13 | 2005-08-17 | 联想(北京)有限公司 | Method for acquisition of data in hard disk |
CN104298560A (en) * | 2013-07-15 | 2015-01-21 | 中兴通讯股份有限公司 | Load sharing system and load sharing method |
CN104539727A (en) * | 2015-01-15 | 2015-04-22 | 北京国创富盛通信股份有限公司 | Cache method and system based on AP platform |
Non-Patent Citations (1)
Title |
---|
See also references of EP3588913A4 * |
Also Published As
Publication number | Publication date |
---|---|
CN108459821B (en) | 2022-11-18 |
US11226898B2 (en) | 2022-01-18 |
CN108459821A (en) | 2018-08-28 |
EP3588913A4 (en) | 2020-09-23 |
EP3588913A1 (en) | 2020-01-01 |
US20210133103A1 (en) | 2021-05-06 |
EP3588913B1 (en) | 2023-05-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018153202A1 (en) | Data caching method and apparatus | |
EP3229142B1 (en) | Read cache management method and device based on solid state drive | |
US8341115B1 (en) | Dynamically switching between synchronous and asynchronous replication | |
CN109582223B (en) | Memory data migration method and device | |
US9298633B1 (en) | Adaptive prefecth for predicted write requests | |
US8495304B1 (en) | Multi source wire deduplication | |
US20180095996A1 (en) | Database system utilizing forced memory aligned access | |
US20170242822A1 (en) | Dram appliance for data persistence | |
CN103329111B (en) | Data processing method, device and system based on block storage | |
WO2019127104A1 (en) | Method for resource adjustment in cache, data access method and device | |
CN104935654A (en) | Caching method, write point client and read client in server cluster system | |
EP3316150A1 (en) | Method and apparatus for file compaction in key-value storage system | |
CN107329708A (en) | A kind of distributed memory system realizes data cached method and system | |
CN107852349B (en) | System, method, and storage medium for transaction management for multi-node cluster | |
CN107422989B (en) | Server SAN system multi-copy reading method and storage system | |
US20240231646A1 (en) | Storage System and Method Using Persistent Memory | |
US9298397B2 (en) | Nonvolatile storage thresholding for ultra-SSD, SSD, and HDD drive intermix | |
US9323671B1 (en) | Managing enhanced write caching | |
WO2019109209A1 (en) | Data replacement method for memory, server node, and data storage system | |
US9684598B1 (en) | Method and apparatus for fast distributed cache re-sync after node disconnection | |
US10686906B2 (en) | Methods for managing multi-level flash storage and devices thereof | |
US20240248607A1 (en) | Log Memory Compression System and Method | |
US20230025570A1 (en) | Adaptive throttling of metadata requests | |
CN112363674B (en) | Data writing method and device | |
EP3133496A1 (en) | Cache-aware background storage processes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18757536; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| ENP | Entry into the national phase | Ref document number: 2018757536; Country of ref document: EP; Effective date: 20190923 |