WO2018153202A1 - 数据缓存方法及装置 (Data caching method and apparatus) - Google Patents

数据缓存方法及装置 (Data caching method and apparatus)

Info

Publication number
WO2018153202A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
cache
cache device
target access
preset
Prior art date
Application number
PCT/CN2018/073965
Other languages
English (en)
French (fr)
Inventor
张广艳
杨洪章
吴桂勇
罗圣美
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Priority to US16/487,817 (granted as US11226898B2)
Priority to EP18757536.0A (granted as EP3588913B1)
Publication of WO2018153202A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1097Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061Improving I/O performance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806Multiuser, multiprocessor or multiprocessing cache systems
    • G06F12/0811Multiuser, multiprocessor or multiprocessing cache systems with multilevel cache hierarchies
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • G06F12/0238Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F12/0868Data transfer between cache memory and other subsystems, e.g. storage devices or host systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0891Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches using clearing, invalidating or resetting means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/12Replacement control
    • G06F12/121Replacement control using replacement algorithms
    • G06F12/126Replacement control using replacement algorithms with special data handling, e.g. priority of data or instructions, handling errors or pinning
    • G06F12/127Replacement control using replacement algorithms with special data handling, e.g. priority of data or instructions, handling errors or pinning using additional replacement algorithms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0659Command handling arrangements, e.g. command buffers, queues, command scheduling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/568Storing data temporarily at an intermediate stage, e.g. caching
    • H04L67/5682Policies or rules for updating, deleting or replacing the stored data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10Providing a specific technical effect
    • G06F2212/1016Performance improvement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/15Use in a specific computing environment
    • G06F2212/154Networked environment
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/22Employing cache memory using specific memory technology
    • G06F2212/222Non-volatile memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/26Using a specific storage system architecture
    • G06F2212/261Storage comprising a plurality of storage devices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/28Using a specific disk cache architecture
    • G06F2212/283Plural cache memories
    • G06F2212/284Plural cache memories being distributed
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/31Providing disk cache in a specific location of a storage system
    • G06F2212/314In storage network, e.g. network attached cache
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/40Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection

Definitions

  • the present invention relates to the field of data storage technologies, and in particular, to a data caching method and a data caching device.
  • a distributed storage system is composed of a plurality of storage devices, a cache device, and an input/output (I/O) bus; the storage devices transmit data to one another over the I/O bus, and the decentralized layout of data across the storage devices enables efficient and inexpensive data storage. Distributed storage systems are widely used in intensive computing and cloud computing because of their powerful scalability.
  • the conventional data caching method applied to a distributed storage system mainly adopts an on-demand loading policy, that is, if it is detected that the cache device does not contain the data required by the user, the data stored in the storage device is loaded into the cache device to serve the user terminal's request.
  • because the capacity of the cache device is limited, the data loaded into the cache device during the above response replaces other data in the cache device, and a further problem arises: the newly loaded data may no longer be accessed (or only rarely accessed) in the subsequent process while the replaced data may still need to be accessed many times, so the newly loaded data occupies storage resources in the cache device and those resources cannot be fully utilized.
  • in addition, because the caching granularity in a distributed storage system is large, the caching operation on a data block requires a large amount of network bandwidth and storage read/write overhead. Therefore, the data caching method applied to the distributed storage system suffers from low storage resource utilization.
  • the present invention provides a data caching method and a data caching device.
  • a data caching method comprising the steps of: receiving a data request message sent by a user terminal; if it is detected that the cache device does not include the target access data requested by the data request message, sending the target access data in the storage device to the user terminal; extracting parameter information related to the target access data in the storage device, and determining whether the parameter information matches a preset parameter condition; and, if the parameter information matches the preset parameter condition, transmitting the target access data to the cache device.
  • the parameter information includes a number of accesses, and the step of transmitting the target access data to the cache device if the parameter information matches the preset parameter condition comprises: if the number of accesses is greater than or equal to a preset first threshold, transmitting the target access data to the cache device.
  • the parameter information includes a number of accesses and an access time, and the step of transmitting the target access data to the cache device if the parameter information matches the preset parameter condition comprises: if the number of accesses is greater than or equal to a preset second threshold and the access time is within a preset period, transmitting the target access data to the cache device.
  • the method further comprises: detecting a cache occupancy rate of the cache device; and, if the cache occupancy rate is greater than or equal to a preset third threshold, clearing data in the cache device whose number of accesses is less than or equal to a preset fourth threshold and/or transmitting the modified data in the cache device to the storage device.
  • the method further includes: redundantly backing up the cache information in the memory of the cache device to the persistent storage device of the cache device; and, if it is detected that a node failure or a system crash occurs in the cache device, restoring the persisted cache information to the cache device.
  • the method further comprises: if it is detected that the cache device includes the target access data requested by the data request message, sending the target access data in the cache device to the user terminal.
  • a computer readable medium storing program instructions for causing a computer to perform the data caching method described above is provided.
  • a data cache device includes: a receiving module configured to receive a data request message sent by the user terminal; a sending module configured to send the target access data in the storage device to the user terminal if it is detected that the cache device does not include the target access data requested by the data request message; an extracting module configured to extract parameter information related to the target access data in the storage device and to determine whether the parameter information matches a preset parameter condition; and a transmitting module configured to transmit the target access data to the cache device if the parameter information matches the preset parameter condition.
  • the parameter information includes a number of accesses; and the preset parameter condition is that the number of accesses is greater than or equal to a preset second threshold.
  • the parameter information includes a number of accesses and an access time; and the preset parameter condition is: the number of accesses is greater than or equal to a preset second threshold, and the access time is within a preset period.
  • the data caching device further includes: a detecting module configured to detect a cache occupancy rate of the caching device; and a processing module configured to, if the cache occupancy rate is greater than or equal to a preset third threshold, clear data in the cache device whose number of accesses is less than or equal to a preset fourth threshold and/or transmit the modified data in the cache device to the storage device.
  • the data caching device further includes: a backup module configured to redundantly back up cache information in the memory of the cache device to the persistent storage device of the cache device; and a recovery module configured to restore the persisted cache information to the cache device if it is detected that a node failure or a system crash occurs in the cache device.
  • the sending module is further configured to: if it is detected that the cache device includes the target access data requested by the data request message, send the target access data in the cache device to the user terminal.
  • FIG. 1 is a flow chart of a data caching method in accordance with an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of a data caching method according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of step S102 in a data caching method according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of step S104 in a data caching method according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a data caching device according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of a data caching device in accordance with another embodiment of the present invention.
  • FIG. 7 is a schematic diagram of a data caching device in accordance with another embodiment of the present invention.
  • a data caching method according to an embodiment of the present invention includes the following steps. Step S101: Receive a data request message sent by a user terminal.
  • in this step, the user terminal and the server use the data request message to access data, where the user terminal may be a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (PDA), a mobile Internet device (MID), a wearable device, or the like.
  • the storage mode of the server is distributed storage, and the server includes a cache device and a storage device.
  • the cache device uses devices with faster read and write speeds, such as solid state drives (SSDs), while the storage device uses devices with slower read and write speeds, such as hard disk drives (HDDs).
  • the data caching method separates cold data (data with a small number of accesses) from hot data (data with a large number of accesses), that is, the hot data is stored in the SSD and the cold data is stored in the HDD, and a Hot Spot Detection (HOSD) module in the server controls the data transmission between the cache device and the storage device.
  • to ensure the stability of data requests between the user terminal and the server, data transmission between the user terminal and the server may be performed over a public network, while data transmission between the cache device and the storage device inside the server is performed over a cluster network, so as to implement data flow between the cache device and the storage device.
  • Step S102: If it is detected that the cache device does not include the target access data requested by the data request message, the target access data in the storage device is sent to the user terminal.
  • in this step, the server receives the data request message sent by the Objecter module in the user terminal, determines the requested target access data based on the data request message, and then compares the target access data with the data in the cache device. If the cache device does not include the target access data, the target access data in the storage device is sent to the user terminal, thereby reducing the impact of the cache operation on I/O latency.
  • the cache device may include a Filter module, a Promotion module, and an Agent module.
  • the Promotion module is responsible for transmitting the target access data from the storage device to the cache device.
  • the Agent module is responsible for transmitting the dirty data (i.e., the modified data) in the cache device to the storage device and for clearing the cold data (i.e., the data with few accesses) from the cache device, thereby improving the utilization of the storage resources and the data access hit rate of the user terminal.
  • step S102 further includes: if it is detected that the cache device includes the target access data requested by the data request message, sending the target access data in the cache device to the user terminal.
  • Step S103: Extract the parameter information of the target access data in the storage device, and determine whether the parameter information of the target access data matches the preset parameter condition.
  • the parameter information may include the number of accesses, and may also include both the number of accesses and the access time. If the parameter information is the number of accesses, it is determined whether the number of accesses is greater than or equal to a preset first threshold, wherein the preset first threshold may be 3, 5, 6, or the like.
  • if the parameter information includes the number of accesses and the access time, it is determined whether the number of accesses of the target access data is greater than or equal to a preset second threshold and whether the access time of the target access data is within a preset period; the preset second threshold may be 3, 5, 6, or the like.
  • in one example, the access time of the target access data being within the preset period means that the time interval between the time at which the target access data was last accessed and the current time is within the preset period; in another example, it may mean that the time at which the target access data was last accessed falls within a preset time period.
  • the preset period may be, for example, one hour, one day, one week, or the like.
  • Step S104: If the parameter information of the target access data matches the preset parameter condition, the target access data is transmitted to the cache device.
  • for the process of determining whether the parameter information of the target access data matches the preset parameter condition, refer to step S103. By transmitting to the cache device only the target access data that meets the preset parameter condition, so that the data stored in the cache device has high hit potential, the cache pollution problem of the cache device is effectively resolved and cache utilization is improved.
  • step S104 includes: if the parameter information of the target access data matches the preset parameter condition, transmitting the target access data stored in the storage device to the cache device.
  • steps S103 and S104 are performed asynchronously with respect to steps S101 and S102.
  • for example, step S103 and step S104 can be performed at a predetermined cycle. This allows the cache operation to occur asynchronously, avoiding the impact of cache operations on I/O latency.
  • in one example, the parameter information includes the number of accesses. If the parameter information of the target access data matches the preset parameter condition, the step of transmitting the target access data to the cache device includes: if the number of accesses of the target access data is greater than or equal to a preset first threshold, transmitting the target access data to the cache device, where the first threshold may be 3, 5, 6, or the like.
  • in this way, the frequently accessed data can be transmitted to the cache device so that the user terminal accesses the cache device directly, while the infrequently accessed data remains in the storage device, thereby reducing the overhead of data flow between the cache device and the storage device; since the data stored in the cache device is all frequently accessed data, cache pollution can be reduced.
  • in another example, the parameter information includes both the number of accesses and the access time. If the parameter information of the target access data matches the preset parameter condition, the step of transmitting the target access data to the cache device includes: if the number of accesses of the target access data is greater than or equal to a preset second threshold and the access time of the target access data is within the preset period, transmitting the target access data to the cache device. By additionally checking the access time of the data, the efficiency of the target access data can be further ensured.
  • the data caching method further includes: after the step of transmitting the target access data to the cache device, detecting the cache occupancy rate of the cache device; and, if the cache occupancy rate is greater than or equal to a preset third threshold, clearing data in the cache device whose number of accesses is less than or equal to a preset fourth threshold and/or transmitting the modified data in the cache device to the storage device, where the third threshold may be 80%, 85%, 90%, or the like.
  • through the above steps, the cache device retains spare storage space for other data accesses.
  • the data whose access count is less than or equal to the preset fourth threshold is cold data, and the modified data in the cache device is dirty data.
  • when replacing data, the number of accesses and the access time of the data can be considered together to ensure that the data blocks evicted from the cache device all have a low hit potential (i.e., are unlikely to be accessed again).
  • the implementation process is shown in FIG. 4. It is mainly composed of four linked lists: MRU, MFU, MRUG and MFUG.
  • one end of the MRU linked list is the MRU end and the other end is the LRU end; one end of the MFU linked list is the MFU end and the other end is the LFU end. When data first enters the cache device, the data is placed in the MRU queue, which is a finite sequence ordered by the access time of the data blocks. When a new data block enters the MRU queue, the data block at the LRU end of the MRU queue (i.e., the block that has not been accessed for the longest time) is evicted. If a data block in the MRU queue is accessed a second time before it is evicted, the block is moved to the MFU end of the MFU queue.
  • the MFU linked list is also a finite sequence sorted according to the access time of the data. The difference is that each time a second hit occurs, the corresponding data in the MFU linked list is moved to the MFU header (the MFU end). If new data needs to enter the cache while the number of data blocks in the cache has already reached the previously set threshold, elements are deleted from the LRU and LFU ends, and the corresponding metadata information is sent to the MFUG queue and the MRUG queue, respectively.
  • the MFUG and MRUG do not store data blocks, only the access records of the data blocks.
  • the data block in the MFU linked list is sent to the MFUG linked list, and the storage space occupied by the data block is released. If the data block to be released is in the MRU linked list, the data block is deleted from the MRU linked list and sent to the MRUG linked list.
  • both the MFUG and MRUG linked lists are first-in, first-out (FIFO) linked lists whose length is bounded by a threshold x. When the length of a list grows to x, the oldest access record in the list is deleted. When a data block is accessed again and its record is found in the MRUG or MFUG list, the data block is read from the storage pool and reinserted into the MRU or MFU list.
  • the HOSD module can dynamically adjust the number of elements that should be included in the MRU and MFU linked lists based on how many false hits occur in the MRUG or MFUG list.
  • the adjustment method is as follows: when a false hit occurs in the MRUG linked list, the length of the MRU linked list is increased by 1 and the length of the MFU linked list is decreased by 1; when a false hit occurs in the MFUG linked list, the length of the MFU linked list is increased by 1 and the length of the MRU linked list is decreased by 1. This ensures that the total length of the MRU and MFU linked lists in the cache remains constant.
  • the method further comprises the following steps: after the step of transmitting the target access data to the cache device, redundantly backing up the cache information in the memory of the cache device to the persistent storage device of the cache device; and, if it is detected that a node failure or a system crash occurs in the cache device, restoring the persisted cache information to the cache device.
  • the cache metadata information in the memory of the cache device is packaged into an object and backed up at regular intervals. The backup data is written to the persistent storage device of the cache device in a checkpoint manner by the write logic in the storage device, where the checkpoint is only executed periodically and does not impose load on the system. When it is detected that a node failure or system crash in the cache device has caused the cache metadata information to be lost, the data backed up in the persistent storage device of the cache device is restored to the cache device, so that the system continues to operate normally in the event of a node failure or system crash, thereby ensuring the fault tolerance of the system.
  • the cache device stores only data satisfying the preset parameter condition (for example, data with a large number of accesses) and does not allow data with few accesses to occupy the storage space of the cache device, thereby improving the utilization of storage resources and the data access hit ratio of the user terminal.
  • the data cache device 500 includes: a receiving module 501 for receiving a data request message sent by a user terminal; a sending module 502, which detects whether the cache device includes the target access data requested by the data request message and, if it detects that the cache device does not include the target access data, sends the target access data in the storage device to the user terminal; an extraction module 503 for extracting the parameter information of the target access data in the storage device and determining whether the parameter information of the target access data matches the preset parameter condition; and a transmission module 504, which transmits the target access data to the cache device if the parameter information of the target access data matches the preset parameter condition.
  • the parameter information includes the number of accesses. If the number of accesses of the target access data is greater than or equal to the preset first threshold, the transmission module 504 transmits the target access data to the cache device.
  • the parameter information includes the number of accesses and the access time. If the number of accesses of the target access data is greater than or equal to the preset second threshold and the access time of the target access data is within the preset period, the transmission module 504 transmits the target access data to the cache device.
  • the sending module 502 is further configured to: if it is detected that the cache device includes the target access data requested by the data request message, send the target access data in the cache device to the user terminal.
  • the data cache device 500 further includes: a detecting module 505 for detecting the cache occupancy rate of the cache device; and a processing module 506, which, if the cache occupancy rate is greater than or equal to the preset third threshold, clears the data in the cache device whose number of accesses is less than or equal to the preset fourth threshold and/or transmits the modified data in the cache device to the storage device.
  • the data cache device 500 further includes: a backup module 507 for redundantly backing up the cache information in the memory of the cache device to the persistent storage device of the cache device; and a recovery module 508, which restores the persisted cache information to the cache device if a node failure or system crash is detected in the cache device.
  • data caching device 500 is included in a caching device.
  • the cache device may include hardware that performs the functions of the respective modules.
  • data caching device 500 is a device that is separate from the caching device and the storage device.
  • the data caching device 500 can be used to implement the steps of the data caching method according to the present invention, thereby improving the utilization of storage resources and the data access hit ratio of the user terminal.
  • the program instructions cause a computer to perform a data caching method in accordance with the present invention.
  • the computer readable medium can be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention discloses a data caching method and apparatus. The data caching method includes the following steps: receiving a data request message sent by a user terminal; if it is detected that a cache device does not include the target access data requested by the data request message, sending the target access data in a storage device to the user terminal; extracting parameter information of the target access data in the storage device, and determining whether the parameter information matches a preset parameter condition; and, if the parameter information matches the preset parameter condition, transmitting the target access data to the cache device.

Description

数据缓存方法及装置 (Data caching method and apparatus)
Technical Field
The present invention relates to the field of data storage technologies, and in particular, to a data caching method and a data caching device.
Background
A distributed storage system is composed of a plurality of storage devices, a cache device and an input/output (I/O) bus. The storage devices transmit data to one another over the I/O bus, and the decentralized layout of data across the storage devices enables efficient and inexpensive data storage. Distributed storage systems are widely used in intensive computing and cloud computing because of their powerful scalability.
The conventional data caching method applied to a distributed storage system mainly adopts an on-demand loading policy, that is, if it is detected that the cache device does not contain the data required by the user, the data stored in the storage device is loaded into the cache device to serve the user terminal's request. However, because the capacity of the cache device is limited, the data loaded into the cache device during this response replaces other data in the cache device, and the following problem also exists: the newly loaded data may no longer be accessed (or only rarely accessed) in the subsequent process while the replaced data may still need to be accessed many times, so the newly loaded data occupies storage resources in the cache device and those resources cannot be fully utilized. In addition, because the caching granularity in the distributed storage mode is large, the caching operation on a data block requires a large amount of network bandwidth and storage read/write overhead. Therefore, the data caching method applied to the distributed storage system suffers from low storage resource utilization.
Summary of the Invention
The present invention provides a data caching method and a data caching device.
According to an embodiment of the present invention, a data caching method is provided, the method including the following steps: receiving a data request message sent by a user terminal; if it is detected that a cache device does not include the target access data requested by the data request message, sending the target access data in a storage device to the user terminal; extracting parameter information related to the target access data in the storage device, and determining whether the parameter information matches a preset parameter condition; and, if the parameter information matches the preset parameter condition, transmitting the target access data to the cache device.
In some embodiments, the parameter information includes a number of accesses, and the step of transmitting the target access data to the cache device if the parameter information matches the preset parameter condition includes: if the number of accesses is greater than or equal to a preset first threshold, transmitting the target access data to the cache device.
In some embodiments, the parameter information includes a number of accesses and an access time, and the step of transmitting the target access data to the cache device if the parameter information matches the preset parameter condition includes: if the number of accesses is greater than or equal to a preset second threshold and the access time is within a preset period, transmitting the target access data to the cache device.
In some embodiments, after the step of transmitting the target access data to the cache device, the method further includes: detecting a cache occupancy rate of the cache device; and, if the cache occupancy rate is greater than or equal to a preset third threshold, clearing data in the cache device whose number of accesses is less than or equal to a preset fourth threshold and/or transmitting the modified data in the cache device to the storage device.
In some embodiments, after the step of receiving the data request message sent by the user terminal, the method further includes: redundantly backing up the cache information in the memory of the cache device to a persistent storage device of the cache device; and, if it is detected that a node failure or a system crash occurs in the cache device, restoring the persisted cache information in the cache device to the cache device.
In some embodiments, after the step of transmitting the target access data to the cache device, the method further includes: if it is detected that the cache device includes the target access data requested by the data request message, sending the target access data in the cache device to the user terminal.
According to another embodiment of the present invention, a computer readable medium storing program instructions is provided, the program instructions causing a computer to perform the data caching method described above.
According to another embodiment of the present invention, a data caching device is provided, the data caching device including: a receiving module configured to receive a data request message sent by a user terminal; a sending module configured to send the target access data in a storage device to the user terminal if it is detected that a cache device does not include the target access data requested by the data request message; an extracting module configured to extract parameter information related to the target access data in the storage device and to determine whether the parameter information matches a preset parameter condition; and a transmitting module configured to transmit the target access data to the cache device if the parameter information matches the preset parameter condition.
In some embodiments, the parameter information includes a number of accesses, and the preset parameter condition is that the number of accesses is greater than or equal to a preset second threshold.
In some embodiments, the parameter information includes a number of accesses and an access time, and the preset parameter condition is that the number of accesses is greater than or equal to a preset second threshold and the access time is within a preset period.
In some embodiments, the data caching device further includes: a detecting module configured to detect a cache occupancy rate of the cache device; and a processing module configured to, if the cache occupancy rate is greater than or equal to a preset third threshold, clear data in the cache device whose number of accesses is less than or equal to a preset fourth threshold and/or transmit the modified data in the cache device to the storage device.
In some embodiments, the data caching device further includes: a backup module configured to redundantly back up the cache information in the memory of the cache device to a persistent storage device of the cache device; and a recovery module configured to restore the persisted cache information in the cache device to the cache device if it is detected that a node failure or a system crash occurs in the cache device.
In some embodiments, the sending module is further configured to: if it is detected that the cache device includes the target access data requested by the data request message, send the target access data in the cache device to the user terminal.
Brief Description of the Drawings
FIG. 1 is a flowchart of a data caching method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a data caching method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of step S102 in a data caching method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of step S104 in a data caching method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a data caching device according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a data caching device according to another embodiment of the present invention;
and
FIG. 7 is a schematic diagram of a data caching device according to another embodiment of the present invention.
Detailed Description
Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings. As shown in FIG. 1, a data caching method according to an embodiment of the present invention includes the following steps. Step S101: Receive a data request message sent by a user terminal. In this step, the user terminal and the server use the data request message to access data, where the user terminal may be a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (PDA), a mobile Internet device (MID), a wearable device, or the like. The storage mode of the server is distributed storage, and the server includes a cache device and a storage device.
As shown in FIG. 2, the cache device uses devices with faster read and write speeds, such as solid state drives (SSDs), while the storage device uses devices with slower read and write speeds, such as hard disk drives (HDDs).
The data caching method according to an example of this embodiment separates cold data (data with a small number of accesses) from hot data (data with a large number of accesses), that is, the hot data is stored in the SSD and the cold data is stored in the HDD, and a Hot Spot Detection (HOSD) module in the server controls the data transmission between the cache device and the storage device. It should be noted that, to ensure the stability of data requests between the user terminal and the server, data transmission between the user terminal and the server may be performed over a public network, while data transmission between the cache device and the storage device inside the server is performed over a cluster network, so as to implement data flow between the cache device and the storage device.
Step S102: If it is detected that the cache device does not include the target access data requested by the data request message, the target access data in the storage device is sent to the user terminal. In this step, as shown in FIG. 3, the server (not shown) receives the data request message sent by the Objecter module in the user terminal, determines the requested target access data based on the data request message, and then compares the target access data with the data in the cache device. If the cache device does not include the target access data, the target access data in the storage device is sent to the user terminal, thereby reducing the impact of the cache operation on I/O latency. The cache device may include a Filter module, a Promotion module, and an Agent module. The Promotion module is responsible for transmitting the target access data from the storage device to the cache device, and the Agent module is responsible for transmitting the dirty data (i.e., the modified data) in the cache device to the storage device and for clearing the cold data (i.e., the data with few accesses) from the cache device, thereby improving the utilization of storage resources and the data access hit rate of the user terminal.
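By way of illustration only (this sketch is not part of the original disclosure), the request path of steps S101–S102 could be organized roughly as follows in Python; the class and method names (CacheDevice/StorageDevice with contains() and read(), handle_request, record_access) are hypothetical stand-ins for the SSD cache tier, the HDD storage tier and the HOSD module described above:

```python
class RequestHandler:
    """Illustrative sketch of steps S101-S102: serve a data request."""

    def __init__(self, cache_device, storage_device, hosd):
        self.cache = cache_device      # fast tier (e.g. SSD)
        self.storage = storage_device  # slow tier (e.g. HDD)
        self.hosd = hosd               # hot-spot detection module (records accesses)

    def handle_request(self, key):
        # Record the access so the HOSD module can later decide on promotion.
        self.hosd.record_access(key)
        if self.cache.contains(key):
            # Cache hit: answer the user terminal from the cache device.
            return self.cache.read(key)
        # Cache miss: answer directly from the storage device. Promotion into
        # the cache happens asynchronously (steps S103-S104), so the miss path
        # adds no extra cache write to the I/O latency.
        return self.storage.read(key)
```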
In one example of this embodiment, step S102 further includes: if it is detected that the cache device includes the target access data requested by the data request message, sending the target access data in the cache device to the user terminal.
Step S103: Extract the parameter information of the target access data in the storage device, and determine whether the parameter information of the target access data matches the preset parameter condition. In this step, the parameter information may include the number of accesses, or may include both the number of accesses and the access time. If the parameter information is the number of accesses, it is determined whether the number of accesses is greater than or equal to a preset first threshold, where the preset first threshold may be 3, 5, 6, or the like.
In addition, if the parameter information includes the number of accesses and the access time, it is determined whether the number of accesses of the target access data is greater than or equal to a preset second threshold and whether the access time of the target access data is within a preset period. Here, the preset second threshold may be 3, 5, 6, or the like. In one example of this embodiment, the access time of the target access data being within the preset period means that the time interval between the time at which the target access data was last accessed and the current time is within the preset period. In another example of this embodiment, it may mean that the time at which the target access data was last accessed falls within a preset time period. The preset period may be, for example, one hour, one day, one week, or the like. By additionally checking the access time of the data, the efficiency of the target access data can be ensured.
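The condition check of step S103 might be expressed, purely as a hedged illustration, like this; the concrete threshold values, the AccessStats fields and the helper name are assumptions rather than details taken from the disclosure:

```python
import time

class AccessStats:
    """Hypothetical per-block statistics kept by the HOSD module."""
    def __init__(self):
        self.count = 0          # number of accesses
        self.last_access = 0.0  # timestamp of the most recent access

def matches_preset_condition(stats, *, count_threshold=3, period_seconds=3600.0,
                             use_time=True, now=None):
    """Return True if the block should be promoted to the cache device.

    count_threshold plays the role of the preset first/second threshold
    (e.g. 3, 5 or 6 accesses); period_seconds plays the role of the preset
    period (e.g. one hour, one day or one week).
    """
    now = time.time() if now is None else now
    if stats.count < count_threshold:
        return False
    if use_time and (now - stats.last_access) > period_seconds:
        # The last access falls outside the preset period: treat as cold.
        return False
    return True
```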
Step S104: If the parameter information of the target access data matches the preset parameter condition, the target access data is transmitted to the cache device. In this step, for the process of determining whether the parameter information of the target access data matches the preset parameter condition, refer to step S103. By transmitting to the cache device only the target access data that meets the preset parameter condition, so that the data stored in the cache device has high hit potential, the cache pollution problem of the cache device is effectively resolved and cache utilization is improved.
In one example of this embodiment, step S104 includes: if the parameter information of the target access data matches the preset parameter condition, transmitting the target access data stored in the storage device to the cache device.
In one example of this embodiment, steps S103 and S104 are performed asynchronously with respect to steps S101 and S102. For example, steps S103 and S104 may be performed at a predetermined cycle. This allows the cache operation to occur asynchronously, avoiding the impact of cache operations on I/O latency.
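A minimal sketch of such a periodic, asynchronous promotion cycle is shown below (again an illustration only, not the disclosed implementation); hosd.candidates() and promotion_module.promote() are hypothetical interfaces, and the condition argument is the preset-parameter check, e.g. the matches_preset_condition() helper sketched above:

```python
import threading

def start_promotion_cycle(hosd, promotion_module, condition, interval_seconds=60.0):
    """Run steps S103-S104 on a fixed cycle, decoupled from steps S101-S102."""
    def cycle():
        # hosd.candidates() yields (key, stats) pairs recorded by the
        # hot-spot detection module since the last cycle.
        for key, stats in hosd.candidates():
            if condition(stats):
                # Copy the block from the storage device into the cache device.
                promotion_module.promote(key)
        # Re-arm the timer so promotion keeps running at the predetermined cycle.
        threading.Timer(interval_seconds, cycle).start()

    threading.Timer(interval_seconds, cycle).start()
```

Because the cycle runs on its own timer, a cache miss never waits for a promotion to finish, which is the point of performing steps S103–S104 asynchronously.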
In one example of this embodiment, the parameter information includes the number of accesses. If the parameter information of the target access data matches the preset parameter condition, the step of transmitting the target access data to the cache device includes: if the number of accesses of the target access data is greater than or equal to a preset first threshold, transmitting the target access data to the cache device, where the first threshold may be 3, 5, 6, or the like. In this example, through the above steps, the frequently accessed data can be transmitted to the cache device so that the user terminal accesses the cache device directly, while the infrequently accessed data remains in the storage device, thereby reducing the overhead of data flow between the cache device and the storage device. Since the data stored in the cache device is all frequently accessed data, cache pollution can be reduced.
In another example of this embodiment, the parameter information includes both the number of accesses and the access time. If the parameter information of the target access data matches the preset parameter condition, the step of transmitting the target access data to the cache device includes: if the number of accesses of the target access data is greater than or equal to a preset second threshold and the access time of the target access data is within the preset period, transmitting the target access data to the cache device. By additionally checking the access time of the data, the efficiency of the target access data can be further ensured.
In some embodiments, the data caching method further includes: after the step of transmitting the target access data to the cache device, detecting the cache occupancy rate of the cache device; and, if the cache occupancy rate is greater than or equal to a preset third threshold, clearing data in the cache device whose number of accesses is less than or equal to a preset fourth threshold and/or transmitting the modified data in the cache device to the storage device, where the third threshold may be 80%, 85%, 90%, or the like. In this embodiment, through the above steps, the cache device retains spare storage space for other data accesses. The data whose number of accesses is less than or equal to the preset fourth threshold is cold data, and the modified data in the cache device is dirty data. When replacing data, the number of accesses and the access time of the data can be considered together to ensure that the data blocks evicted from the cache device all have a low hit potential (i.e., are unlikely to be accessed again).
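The occupancy-driven clean-up could look roughly like the following sketch (illustrative only; cache_device.occupancy_rate(), blocks(), evict() and agent.flush_to_storage() are assumed interfaces, and the threshold values are examples):

```python
def enforce_occupancy(cache_device, agent, *, occupancy_threshold=0.85,
                      cold_access_threshold=1):
    """Illustrative Agent-module behaviour after a promotion.

    If the cache occupancy rate reaches the preset third threshold
    (e.g. 80%-90%), dirty blocks are flushed back to the storage device
    and cold blocks (access count at or below the preset fourth
    threshold) are cleared to free space.
    """
    if cache_device.occupancy_rate() < occupancy_threshold:
        return
    for block in cache_device.blocks():
        if block.dirty:
            agent.flush_to_storage(block)   # write modified data back to the HDD tier
        if block.access_count <= cold_access_threshold:
            cache_device.evict(block)       # release space held by cold data
```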
The implementation process is shown in FIG. 4. It is mainly composed of four linked lists: MRU, MFU, MRUG and MFUG. One end of the MRU linked list is the MRU end and the other end is the LRU end; one end of the MFU linked list is the MFU end and the other end is the LFU end. When data first enters the cache device, the data is placed in the MRU queue, which is a finite sequence ordered by the access time of the data blocks. When a new data block enters the MRU queue, the data block at the LRU end of the MRU queue (i.e., the block that has not been accessed for the longest time) is evicted. If a data block in the MRU queue is accessed a second time before it is evicted, the block is moved to the MFU end of the MFU queue.
The MFU linked list is also a finite sequence sorted according to the access time of the data. The difference is that each time a second hit occurs, the corresponding data in the MFU linked list is moved to the MFU header (the MFU end). If new data needs to enter the cache while the number of data blocks in the cache has already reached the previously set threshold, elements are deleted from the LRU and LFU ends, and the corresponding metadata information is sent to the MFUG queue and the MRUG queue, respectively.
The MFUG and MRUG lists do not store data blocks; they only store the access records of the data blocks. A data block released from the MFU linked list is sent to the MFUG linked list, and the storage space occupied by the data block is released. If the data block to be released is in the MRU linked list, the data block is deleted from the MRU linked list and sent to the MRUG linked list.
Both the MFUG and MRUG linked lists are first-in, first-out (FIFO) linked lists whose length is bounded by a threshold x. When the length of a list grows to x, the oldest access record in the list is deleted. When a data block is accessed again and it is found in the MRUG or MFUG list, the data block is read from the storage pool and reinserted into the MRU or MFU list. The HOSD module can dynamically adjust the number of elements that the MRU and MFU linked lists should contain according to how many false hits occur in the MRUG or MFUG list. The adjustment method is as follows: when a false hit occurs in the MRUG linked list, the length of the MRU linked list is increased by 1 and the length of the MFU linked list is decreased by 1; when a false hit occurs in the MFUG linked list, the length of the MFU linked list is increased by 1 and the length of the MRU linked list is decreased by 1. This ensures that the total length of the MRU and MFU linked lists in the cache remains constant.
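A rough Python sketch of this four-list bookkeeping is given below. It is an illustration under stated assumptions, not the disclosed implementation: the class name, the use of OrderedDict/deque in place of linked lists, and the load_from_storage callback are all hypothetical, and ghost entries are evicted automatically by the FIFO length limit x:

```python
from collections import OrderedDict, deque

class FourListCache:
    """Sketch of the MRU/MFU/MRUG/MFUG structure described above.

    MRU and MFU hold cached blocks ordered by access time; MRUG and MFUG are
    FIFO ghost lists that keep only access records. The split between the MRU
    and MFU lengths (self.p) is adjusted by one on each false hit.
    """

    def __init__(self, capacity, ghost_limit):
        self.capacity = capacity                 # total blocks allowed in MRU + MFU
        self.p = capacity // 2                   # current target length of MRU
        self.mru = OrderedDict()                 # key -> data; oldest entry = LRU end
        self.mfu = OrderedDict()                 # key -> data; oldest entry = LFU end
        self.mrug = deque(maxlen=ghost_limit)    # ghost records, FIFO of length x
        self.mfug = deque(maxlen=ghost_limit)

    def access(self, key, load_from_storage):
        if key in self.mru:                      # second hit: move block to the MFU end
            data = self.mru.pop(key)
            self.mfu[key] = data
            return data
        if key in self.mfu:                      # repeated hit: refresh position at MFU end
            self.mfu.move_to_end(key)
            return self.mfu[key]
        if key in self.mrug:                     # false hit in MRUG: grow MRU by 1
            self.mrug.remove(key)
            self.p = min(self.capacity, self.p + 1)
            return self._reload(key, self.mru, load_from_storage)
        if key in self.mfug:                     # false hit in MFUG: grow MFU by 1
            self.mfug.remove(key)
            self.p = max(0, self.p - 1)
            return self._reload(key, self.mfu, load_from_storage)
        return self._reload(key, self.mru, load_from_storage)  # brand-new block enters MRU

    def _reload(self, key, target_list, load_from_storage):
        data = load_from_storage(key)            # read the block from the storage pool
        self._make_room()
        target_list[key] = data
        return data

    def _make_room(self):
        # Evict until one more block fits, keeping only the access record of
        # each evicted block in the corresponding ghost list.
        while len(self.mru) + len(self.mfu) >= self.capacity:
            if len(self.mru) > self.p and self.mru:
                old_key, _ = self.mru.popitem(last=False)   # evict from the LRU end
                self.mrug.append(old_key)
            elif self.mfu:
                old_key, _ = self.mfu.popitem(last=False)   # evict from the LFU end
                self.mfug.append(old_key)
            else:
                break
```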
In some embodiments, the method further includes the following steps: after the step of transmitting the target access data to the cache device, redundantly backing up the cache information in the memory of the cache device to the persistent storage device of the cache device; and, if it is detected that a node failure or system crash occurs in the cache device, restoring the persisted cache information in the cache device to the cache device. In this embodiment, the cache metadata information in the memory of the cache device is packaged into an object and backed up at regular intervals. The backup data is written to the persistent storage device of the cache device in a checkpoint manner by the write logic in the storage device, where the checkpoint is only executed periodically and does not impose load on the system. When it is detected that a node failure or system crash in the cache device has caused the cache metadata information to be lost, the data backed up in the persistent storage device of the cache device is restored to the cache device, so that the system continues to operate normally in the event of a node failure or system crash, thereby ensuring the fault tolerance of the system.
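A small sketch of such periodic metadata checkpointing and crash recovery follows (illustrative only; the use of pickle, the file path and the interval are assumptions, not details from the disclosure):

```python
import pickle
import threading

def start_metadata_checkpoints(cache_metadata, persistent_path,
                               interval_seconds=300.0):
    """Periodically package the in-memory cache metadata into one object and
    write it to the cache device's persistent storage as a checkpoint."""
    def checkpoint():
        blob = pickle.dumps(dict(cache_metadata))   # package the metadata as one object
        with open(persistent_path, "wb") as f:
            f.write(blob)                            # durable checkpoint on the cache device
        threading.Timer(interval_seconds, checkpoint).start()

    threading.Timer(interval_seconds, checkpoint).start()

def restore_metadata(persistent_path):
    """After a node failure or system crash, reload the last checkpoint so the
    cache device can resume with its previous metadata."""
    with open(persistent_path, "rb") as f:
        return pickle.loads(f.read())
```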
With the data caching method according to the embodiments of the present invention, the cache device stores only data satisfying the preset parameter condition (for example, data with a large number of accesses) and does not allow data with few accesses to occupy the storage space of the cache device, thereby improving the utilization of storage resources and the data access hit ratio of the user terminal.
As shown in FIG. 5, a data caching device 500 according to an embodiment of the present invention includes: a receiving module 501 for receiving a data request message sent by a user terminal; a sending module 502, which detects whether the cache device includes the target access data requested by the data request message and, if it detects that the cache device does not include the target access data requested by the data request message, sends the target access data in the storage device to the user terminal; an extraction module 503 for extracting the parameter information of the target access data in the storage device and determining whether the parameter information of the target access data matches the preset parameter condition; and a transmission module 504, which transmits the target access data to the cache device if the parameter information of the target access data matches the preset parameter condition.
In some embodiments, the parameter information includes the number of accesses. If the number of accesses of the target access data is greater than or equal to the preset first threshold, the transmission module 504 transmits the target access data to the cache device.
In some embodiments, the parameter information includes the number of accesses and the access time. If the number of accesses of the target access data is greater than or equal to the preset second threshold and the access time of the target access data is within the preset period, the transmission module 504 transmits the target access data to the cache device.
In some embodiments, the sending module 502 is further configured to: if it is detected that the cache device includes the target access data requested by the data request message, send the target access data in the cache device to the user terminal.
In some embodiments, as shown in FIG. 6, the data caching device 500 further includes: a detecting module 505 for detecting the cache occupancy rate of the cache device; and a processing module 506, which, if the cache occupancy rate is greater than or equal to the preset third threshold, clears the data in the cache device whose number of accesses is less than or equal to the preset fourth threshold and/or transmits the modified data in the cache device to the storage device.
In some embodiments, as shown in FIG. 7, the data caching device 500 further includes: a backup module 507 for redundantly backing up the cache information in the memory of the cache device to the persistent storage device of the cache device; and a recovery module 508, which restores the persisted cache information in the cache device to the cache device if a node failure or system crash is detected in the cache device.
In some embodiments, the data caching device 500 is included in the cache device. In this case, the cache device may include hardware that performs the functions of the respective modules.
In some embodiments, the data caching device 500 is a device that is independent of the cache device and the storage device.
It should be noted that the data caching device 500 can be used to implement the steps of the data caching method according to the present invention, thereby improving the utilization of storage resources and the data access hit ratio of the user terminal.
Those of ordinary skill in the art will understand that all or some of the steps and/or modules of the above embodiments can be implemented by hardware associated with program instructions, and the program instructions can be stored in a computer readable medium. The program instructions cause a computer to perform the data caching method according to the present invention. The computer readable medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It will be apparent to those of ordinary skill in the art that various modifications and variations can be made to the present invention without departing from the broader spirit and scope of the invention. Accordingly, the specification and drawings should be regarded as illustrative rather than restrictive.

Claims (13)

  1. A data caching method, comprising the following steps:
    receiving a data request message sent by a user terminal;
    if it is detected that a cache device does not include the target access data requested by the data request message, sending the target access data in a storage device to the user terminal;
    extracting parameter information related to the target access data in the storage device, and determining whether the parameter information matches a preset parameter condition; and
    if the parameter information matches the preset parameter condition, transmitting the target access data to the cache device.
  2. The method according to claim 1, wherein
    the parameter information includes a number of accesses, and
    the step of transmitting the target access data to the cache device if the parameter information matches the preset parameter condition comprises: if the number of accesses is greater than or equal to a preset first threshold, transmitting the target access data to the cache device.
  3. The method according to claim 1, wherein
    the parameter information includes a number of accesses and an access time, and
    the step of transmitting the target access data to the cache device if the parameter information matches the preset parameter condition comprises: if the number of accesses is greater than or equal to a preset second threshold and the access time is within a preset period, transmitting the target access data to the cache device.
  4. The method according to any one of claims 1 to 3, wherein, after the step of transmitting the target access data to the cache device, the method further comprises:
    detecting a cache occupancy rate of the cache device; and
    if the cache occupancy rate is greater than or equal to a preset third threshold, clearing data in the cache device whose number of accesses is less than or equal to a preset fourth threshold and/or transmitting the modified data in the cache device to the storage device.
  5. The method according to any one of claims 1 to 3, wherein, after the step of receiving the data request message sent by the user terminal, the method further comprises:
    redundantly backing up cache information in a memory of the cache device to a persistent storage device of the cache device; and
    if it is detected that a node failure or a system crash occurs in the cache device, restoring the persisted cache information in the cache device to the cache device.
  6. The method according to any one of claims 1 to 3, wherein, after the step of transmitting the target access data to the cache device, the method further comprises:
    if it is detected that the cache device includes the target access data requested by the data request message, sending the target access data in the cache device to the user terminal.
  7. A data caching device, comprising:
    a receiving module, configured to receive a data request message sent by a user terminal;
    a sending module, configured to send the target access data in a storage device to the user terminal if it is detected that a cache device does not include the target access data requested by the data request message;
    an extracting module, configured to extract parameter information related to the target access data in the storage device and to determine whether the parameter information matches a preset parameter condition; and
    a transmitting module, configured to transmit the target access data to the cache device if the parameter information matches the preset parameter condition.
  8. The data caching device according to claim 7, wherein
    the parameter information includes a number of accesses; and
    the preset parameter condition is that the number of accesses is greater than or equal to a preset second threshold.
  9. The data caching device according to claim 7, wherein
    the parameter information includes a number of accesses and an access time; and
    the preset parameter condition is that the number of accesses is greater than or equal to a preset second threshold and the access time is within a preset period.
  10. The data caching device according to any one of claims 7 to 9, further comprising:
    a detecting module, configured to detect a cache occupancy rate of the cache device; and
    a processing module, configured to, if the cache occupancy rate is greater than or equal to a preset third threshold, clear data in the cache device whose number of accesses is less than or equal to a preset fourth threshold and/or transmit the modified data in the cache device to the storage device.
  11. The data caching device according to any one of claims 7 to 9, further comprising:
    a backup module, configured to redundantly back up cache information in a memory of the cache device to a persistent storage device of the cache device; and
    a recovery module, configured to restore the persisted cache information in the cache device to the cache device if it is detected that a node failure or a system crash occurs in the cache device.
  12. The data caching device according to any one of claims 7 to 9, wherein the sending module is further configured to: if it is detected that the cache device includes the target access data requested by the data request message, send the target access data in the cache device to the user terminal.
  13. A computer readable medium storing program instructions, the program instructions causing a computer to perform the method according to any one of claims 1 to 6.
PCT/CN2018/073965 2017-02-21 2018-01-24 Data caching method and apparatus WO2018153202A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/487,817 US11226898B2 (en) 2017-02-21 2018-01-24 Data caching method and apparatus
EP18757536.0A EP3588913B1 (en) 2017-02-21 2018-01-24 Data caching method, apparatus and computer readable medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710091829.0A CN108459821B (zh) 2017-02-21 2017-02-21 一种数据缓存的方法及装置
CN201710091829.0 2017-02-21

Publications (1)

Publication Number Publication Date
WO2018153202A1 true WO2018153202A1 (zh) 2018-08-30

Family

ID=63228886

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/073965 WO2018153202A1 (zh) 2017-02-21 2018-01-24 数据缓存方法及装置

Country Status (4)

Country Link
US (1) US11226898B2 (zh)
EP (1) EP3588913B1 (zh)
CN (1) CN108459821B (zh)
WO (1) WO2018153202A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110968562B (zh) * 2019-11-28 2023-05-12 国网上海市电力公司 Cache adaptive adjustment method and device based on the ZFS file system
CN111752902A (zh) * 2020-06-05 2020-10-09 江苏任务网络科技有限公司 Dynamic hot data caching method
CN114422807B (zh) * 2022-03-28 2022-10-21 麒麟软件有限公司 Transmission optimization method based on the Spice protocol
CN115334158A (zh) * 2022-07-29 2022-11-11 重庆蚂蚁消费金融有限公司 Cache management method and apparatus, storage medium, and electronic device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1655130A (zh) * 2004-02-13 2005-08-17 联想(北京)有限公司 Method for acquiring data from a hard disk
CN104298560A (zh) * 2013-07-15 2015-01-21 中兴通讯股份有限公司 Load sharing system and method
CN104539727A (zh) * 2015-01-15 2015-04-22 北京国创富盛通信股份有限公司 Caching method and system based on an AP platform

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4303688B2 (ja) * 2003-05-21 2009-07-29 富士通株式会社 Data access response system and method of accessing the data access response system
WO2009121413A1 (en) * 2008-04-03 2009-10-08 Telefonaktiebolaget Lm Ericsson (Publ) Apparatus and method for providing access to internet resources in a wireless communications network
US8447948B1 (en) * 2008-04-25 2013-05-21 Amazon Technologies, Inc Dynamic selective cache compression
CN101562543B (zh) * 2009-05-25 2013-07-31 阿里巴巴集团控股有限公司 Cache data processing method, processing system and apparatus
US20110113200A1 (en) * 2009-11-10 2011-05-12 Jaideep Moses Methods and apparatuses for controlling cache occupancy rates
US9003104B2 (en) * 2011-02-15 2015-04-07 Intelligent Intellectual Property Holdings 2 Llc Systems and methods for a file-level cache
US9779029B2 (en) * 2012-11-06 2017-10-03 Facebook, Inc. Cache replacement policy for data with strong temporal locality
CN104580437A (zh) * 2014-12-30 2015-04-29 创新科存储技术(深圳)有限公司 Cloud storage client and efficient data access method thereof
US10678578B2 (en) * 2016-06-30 2020-06-09 Microsoft Technology Licensing, Llc Systems and methods for live migration of a virtual machine based on heat map and access pattern

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1655130A (zh) * 2004-02-13 2005-08-17 联想(北京)有限公司 Method for acquiring data from a hard disk
CN104298560A (zh) * 2013-07-15 2015-01-21 中兴通讯股份有限公司 Load sharing system and method
CN104539727A (zh) * 2015-01-15 2015-04-22 北京国创富盛通信股份有限公司 Caching method and system based on an AP platform

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3588913A4 *

Also Published As

Publication number Publication date
EP3588913A1 (en) 2020-01-01
US11226898B2 (en) 2022-01-18
EP3588913B1 (en) 2023-05-10
CN108459821A (zh) 2018-08-28
CN108459821B (zh) 2022-11-18
US20210133103A1 (en) 2021-05-06
EP3588913A4 (en) 2020-09-23

Similar Documents

Publication Publication Date Title
WO2018153202A1 (zh) Data caching method and apparatus
EP3229142B1 (en) Read cache management method and device based on solid state drive
US8341115B1 (en) Dynamically switching between synchronous and asynchronous replication
US9298633B1 (en) Adaptive prefecth for predicted write requests
US8495304B1 (en) Multi source wire deduplication
US20180095996A1 (en) Database system utilizing forced memory aligned access
CN109582223B (zh) Method and apparatus for memory data migration
US20170242822A1 (en) Dram appliance for data persistence
CN104077380B (zh) Data deduplication method, apparatus and system
CN104935654A (zh) Caching method in a server cluster system, write point client and read client
WO2019127104A1 (zh) Resource adjustment method in a cache, and data access method and apparatus
EP3316150A1 (en) Method and apparatus for file compaction in key-value storage system
CN107329708A (zh) Method and system for implementing data caching in a distributed storage system
CN107422989B (zh) Multi-replica read method for a Server SAN system, and storage system
CN107852349B (zh) System, method and storage medium for transaction management of a multi-node cluster
US9298397B2 (en) Nonvolatile storage thresholding for ultra-SSD, SSD, and HDD drive intermix
US9323671B1 (en) Managing enhanced write caching
CN110121874B (zh) Memory data replacement method, server node and data storage system
US20160274793A1 (en) Storage apparatus, storage control method, and computer-readable recording medium for recording storage control program
US11941253B2 (en) Storage system and method using persistent memory
US9684598B1 (en) Method and apparatus for fast distributed cache re-sync after node disconnection
US10686906B2 (en) Methods for managing multi-level flash storage and devices thereof
US20150088826A1 (en) Enhanced Performance for Data Duplication
US11256439B2 (en) System and method for parallel journaling in a storage cluster
CN113268395A (zh) Service data processing method, processing apparatus and terminal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18757536

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018757536

Country of ref document: EP

Effective date: 20190923