WO2015100653A1 - Method, device and system for data caching - Google Patents

Method, device and system for data caching

Info

Publication number
WO2015100653A1
Authority
WO
WIPO (PCT)
Prior art keywords
cache
memory
data
level cache
storage space
Prior art date
Application number
PCT/CN2013/091194
Other languages
English (en)
Chinese (zh)
Inventor
林宇
王宇
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to CN201380002567.6A priority Critical patent/CN103858112A/zh
Priority to PCT/CN2013/091194 priority patent/WO2015100653A1/fr
Publication of WO2015100653A1 publication Critical patent/WO2015100653A1/fr

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806 Multiuser, multiprocessor or multiprocessing cache systems
    • G06F12/0811 Multiuser, multiprocessor or multiprocessing cache systems with multilevel cache hierarchies

Definitions

  • The present invention relates to the field of information technology, and in particular to storage technology. Background art
  • A cache is used to temporarily store data. When the host reads data, the data is first read from the disk into the controller's cache and then sent to the host; when the host writes data to the disk, it first sends the data to the cache, and the data is then written from the cache to the disk.
  • A common storage medium used as a cache is Synchronous Dynamic Random Access Memory (SDRAM). Some manufacturers use multi-level cache technology: the original SDRAM serves as the level 1 cache, and a relatively inexpensive flash memory medium, such as a Solid State Disk (SSD), is added outside the SDRAM as a second-level cache, called the SSD Cache. The SSD Cache is located between the SDRAM and the disk: when the SDRAM space is insufficient, data in the SDRAM is transferred to the SSD Cache, and the SSD Cache then writes the data to the disk.
  • the invention provides a data caching technology, which can improve the hit rate of the second level cache.
  • In one aspect, the present invention provides a data caching method applied to a controller, where the controller is connected to a storage device, the controller includes a level 1 cache, and the storage device includes a level 2 cache and a memory. The level 2 cache is used to store data that the level 1 cache sends to the memory. The method includes: querying the hit rate of the level 2 cache for read requests; determining whether the hit rate is lower than a capacity expansion threshold and, if it is lower than the expansion threshold, obtaining storage space from the memory for use by the level 2 cache, where the storage space obtained from the memory is newly added storage space; and using the newly added storage space in the level 2 cache to buffer the data sent by the level 1 cache to the memory.
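  • The following is a minimal illustrative sketch of this expansion logic; it is not part of the original disclosure, and the class names, threshold value and chunk size are assumptions chosen only for illustration:

        # Toy model of the claimed method: when the level 2 cache hit rate falls
        # below the capacity expansion threshold, borrow storage space from the
        # memory and add it to the level 2 cache.
        class SecondLevelCache:
            def __init__(self, capacity_bytes):
                self.capacity_bytes = capacity_bytes
                self.hits = 0
                self.misses = 0

            def hit_rate(self):
                total = self.hits + self.misses
                return self.hits / total if total else 1.0

        class Memory:
            def __init__(self, free_bytes):
                self.free_bytes = free_bytes

            def lend(self, size):
                """Hand over up to `size` bytes of free space to the cache."""
                granted = min(size, self.free_bytes)
                self.free_bytes -= granted
                return granted

        EXPANSION_THRESHOLD = 0.80            # assumed example value

        def check_and_expand(cache, memory, chunk=100 * 2**20):
            if cache.hit_rate() < EXPANSION_THRESHOLD:      # query hit rate, compare to threshold
                cache.capacity_bytes += memory.lend(chunk)  # newly added storage space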
  • In another aspect, the present invention provides a data caching device for managing the storage space of a storage device, where the storage device includes a level 2 cache and a memory. The device includes: a hit rate query module, configured to query the hit rate of the level 2 cache for read requests, where the level 2 cache is used to cache data sent by the level 1 cache; an expansion module, configured to determine whether the hit rate is lower than a capacity expansion threshold and, if it is lower than the capacity expansion threshold, to obtain storage space from the memory for use by the level 2 cache, where the storage space obtained from the memory is newly added storage space; and a cache module, configured to use the newly added storage space to buffer the data that the level 1 cache sends to the memory.
  • the invention uses the storage space of the memory to expand the secondary cache, and improves the hit rate of the secondary cache.
  • Figure 1 is a structural view of an embodiment of the present invention
  • FIG. 2 is a flow chart of an embodiment of a data caching method of the present invention.
  • FIG. 3 is a structural diagram of an embodiment of the data caching device of the present invention. Detailed description
  • the storage system of the present invention is composed of a controller and a storage device.
  • the storage device provides storage space
  • the controller provides management of the storage device, and provides read/write access to the host.
  • The storage device itself may be invisible to the host.
  • The storage device and the controller can be physically separate devices, or they can be integrated into one device, in which case the combined device can be called a storage server.
  • The controller includes a processor and a cache; in the embodiment of the present invention, the cache in the controller is referred to as the level 1 cache.
  • the storage device is composed of a cache and a plurality of memories.
  • the cache provided by the storage device is referred to as a secondary cache.
  • Data to be written to the storage device is first stored in the level 1 cache. Based on how hot or cold the data is, the level 1 cache can periodically transfer relatively cold data to the level 2 cache, and the level 2 cache in turn periodically transfers relatively cold data to the memory. The process of transferring relatively cold data from a higher-level storage device to a lower-level storage device is also referred to as elimination (eviction). After the data is written into the level 1 cache, the controller may send a write completion response message to the host to inform it that the write operation has been completed; alternatively, the controller may send the write completion response message only after the data has been written into the memory.
  • the level 1 cache and the level 2 cache may be the same storage medium or different.
  • The level 1 cache can have a faster read/write speed and a higher cost than the level 2 cache; for example, the level 1 cache is an SDRAM and the level 2 cache is an SSD. The storage medium used by the level 1 cache and the level 2 cache may be a random access memory (RAM) or nonvolatile storage, but from the controller's perspective both are used as volatile storage.
  • The data stored in the level 1 cache and the level 2 cache can be eliminated by an aging algorithm, for example by identifying hot and cold data, so as to improve utilization.
  • The level 1 cache can be subdivided into a plurality of smaller levels, with a different storage medium used for each level; these sub-level storage media are connected in series, and data is passed between them by the elimination algorithm.
  • When receiving a read request from the host, the controller preferentially obtains the requested data from the level 1 cache and the level 2 cache; if it is not found there, the controller obtains the requested data from the memory. Finding the requested data in a given medium is called a hit; over a certain number of read requests, the proportion of requests that hit a medium is called the hit rate of that medium. The hit rate records the number or proportion of successful reads when reading pending data from the level 2 cache.
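  • As a simple illustration of how such a per-medium hit rate could be tracked (the counters and names below are assumptions, not taken from the original text):

        # Per-medium counters: the hit rate of a medium is the proportion of the
        # read requests that consulted it whose data was found there.
        from collections import Counter

        lookups = Counter()   # read requests that consulted a medium
        hits = Counter()      # read requests satisfied by that medium

        def record_lookup(medium, found):
            lookups[medium] += 1
            if found:
                hits[medium] += 1

        def hit_rate(medium):
            return hits[medium] / lookups[medium] if lookups[medium] else 0.0

        # e.g. 80 of 100 lookups in the level 2 cache found the data -> 80%
        for i in range(100):
            record_lookup("l2_cache", found=(i < 80))
        print(hit_rate("l2_cache"))   # 0.8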
  • If the level 1 cache is hit, the level 1 cache sends the requested data to the host; if the level 2 cache is hit, the requested data is sent from the level 2 cache to the level 1 cache and then from the level 1 cache to the host; if the memory is hit, the requested data is sent from the memory to the level 1 cache and then from the level 1 cache to the host.
  • The storage space of the level 2 cache (an SSD) is limited, so the level 2 cache hit rate is often reduced. Although this problem can be partially solved by increasing the total capacity of the level 2 cache, such an increase causes the overall utilization of the level 2 cache to drop, which runs contrary to the goal of keeping the cost of the level 2 cache down.
  • When the storage device leaves the factory, components such as the level 2 cache and the memory are already installed. If a user wants to add an extra cache, there may not be enough space inside the storage device to install it; even if there is space, installing an additional cache is complex and difficult for an ordinary user to carry out.
  • The embodiment of the invention can temporarily increase the storage space of the level 2 cache to meet sudden demand; it is convenient to implement, and the cost increase is not significant.
  • FIG. 1 is a structural diagram of an embodiment of the present invention.
  • the host 1 accesses the storage device 3 through the controller 2, and the processor 21 in the controller 2 communicates with the level 1 cache 22.
  • The level 1 cache 22 communicates with the level 2 cache 31, the level 2 cache 31 communicates with the memories 32, 33 and 34, and the level 1 cache 22 also communicates with the host 1.
  • The controller 2 can exchange data with the memories in the storage device 3 through the level 2 cache 31; optionally, the level 2 cache 31 can be bypassed, with the level 1 cache 22 accessing the memories through a direct connection channel.
  • A write can be divided into the following steps: the host sends a write request to the level 1 cache 22 (which can be an SDRAM); the level 1 cache 22 then caches the data to the level 2 cache 31; and the level 2 cache 31 then writes the data into the memories of the storage device 3.
  • The storage device 3 includes a plurality of types of memories: for example, the memory 32 is an SSD, the memory 33 is a SAS (Serial Attached SCSI) disk, and the memory 34 is a SATA (Serial ATA) disk, and the read and write speeds of these three memories decrease in that order.
  • the storage device 3 stores data in storage media of different speeds according to different degrees of data heat and cold.
  • the hot data is stored in the SSD
  • the normal data is stored in the SAS disk
  • the cold data is stored in the SATA disk.
  • the degree of heat and cold of data can be described as the frequency with which data is accessed. The more frequently the data is accessed, the hotter it is, and vice versa.
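  • A minimal sketch of choosing a storage tier from access frequency follows; the numeric thresholds are invented for illustration, since the original text does not specify any values:

        # Hypothetical tiering rule: more frequently accessed (hotter) data is
        # placed on faster media, colder data on slower media.
        def select_tier(accesses_per_hour):
            if accesses_per_hour >= 100:      # hot data
                return "SSD"
            if accesses_per_hour >= 10:       # normal data
                return "SAS disk"
            return "SATA disk"                # cold data

        print(select_tier(500))   # SSD
        print(select_tier(3))     # SATA disk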
  • The storage space 321 in the memory 32 is used as storage space 311 in the level 2 cache 31, thereby increasing the storage space of the level 2 cache 31.
  • When the level 2 cache hit rate becomes sufficiently high, this moved space can be returned to the memory 32.
  • Storage space originally belonging to the level 2 cache 31 may even be transferred to the memory 32, for example when the SSD Cache hit rate is higher than a preset value or when the storage space of the memory 32 is insufficient. If storage space of the memory 33 is moved to the level 2 cache 31, the level 2 cache 31 will be composed of different storage media.
  • Table 1 is an example of a policy.
  • The controller 2 periodically queries the hit rate of the level 2 cache 31 (specifically, the query can be executed by the processor 21).
  • The lower the hit rate, the more storage space needs to be obtained from the SSD memory 32.
  • Space can also be allocated by capacity: for example, for every 5% drop in hit rate, 1 GB (gigabyte) of storage space is allocated from the SSD memory 32 to the level 2 cache 31. It is also possible to allocate space to the level 2 cache 31 in integer multiples of a fixed size, checking the change in hit rate after each allocation until the condition is satisfied; for example, storage space in the SSD memory 32 is allocated to the level 2 cache 31 in multiples of 100 MB (megabytes) until the hit rate reaches 95%, at which point allocation stops.
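  • The stepwise 100 MB allocation described above might look like the following sketch; it reuses the toy SecondLevelCache and Memory classes from the earlier sketch, the 95% target comes from the example, and everything else is assumed:

        # Allocate space to the level 2 cache in fixed 100 MB steps, re-checking
        # the hit rate after every step, until the 95% target is reached or the
        # memory has no shared space left to lend.
        STEP = 100 * 2**20          # 100 MB
        TARGET_HIT_RATE = 0.95

        def expand_until_target(cache, memory, measure_hit_rate, max_steps=50):
            for _ in range(max_steps):
                if measure_hit_rate(cache) >= TARGET_HIT_RATE:
                    break                       # condition satisfied, stop allocating
                granted = memory.lend(STEP)     # take one more 100 MB extent
                if granted == 0:
                    break                       # no shared space remaining
                cache.capacity_bytes += granted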
  • Figure 2 provides a flow chart of an embodiment of a data caching method.
  • the method embodiments of the present invention may be implemented by the controller 2 executing the method steps, and in particular may be implemented by the processor 21 of the controller 2 of Fig. 1.
  • The level 1 cache 22 stores computer instructions, and the processor 21 carries out the following method by executing the computer instructions stored in the level 1 cache 22.
  • The storage space storing these computer instructions can also be separated out as a dedicated running memory.
  • Step 11: Query the hit rate of the level 2 cache for read requests.
  • The hit rate describes how much of the requested data is obtained from the level 2 cache when processing read requests; the read request data is the data that a read request asks to read.
  • the data stored in the L2 cache is hotter than in the memory.
  • Read request data over a period of time can, if it cannot be found in the level 1 cache, be found in the level 2 cache.
  • The controller preferentially searches the level 1 cache. If the requested data is not found in the level 1 cache, the level 2 cache is searched next; if it is not found in the level 2 cache either, the memory is searched. If the storage capacity of the level 2 cache is insufficient and not enough hot data can be stored there, some read request data will be found in neither the level 1 cache nor the level 2 cache, and the controller will have to obtain the requested data from the memory; this means the level 2 cache was not hit. For example, if, within a period of time, 80 of 100 read requests that reach the level 2 cache find their data there and 20 do not, the hit rate is 80%. This step can be performed periodically or after each read/write request.
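  • The search order just described (level 1 cache, then level 2 cache, then memory) can be sketched as follows, with plain dictionaries standing in for the real media; this is an illustration only, not the original implementation:

        # Each tier is modelled as a dict mapping block address -> data.
        def read_block(addr, l1, l2, memory, stats):
            if addr in l1:                      # level 1 cache hit
                stats["l1_hits"] += 1
                return l1[addr]
            if addr in l2:                      # level 2 cache hit
                stats["l2_hits"] += 1
                l1[addr] = l2[addr]             # staged through the L1 cache
                return l1[addr]
            stats["l2_misses"] += 1             # L2 miss: fall back to the memory
            l1[addr] = memory[addr]
            return l1[addr]

        stats = {"l1_hits": 0, "l2_hits": 0, "l2_misses": 0}
        data = read_block(7, l1={}, l2={7: b"hot"}, memory={7: b"hot"}, stats=stats)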
  • Step 12: If the hit rate is lower than the capacity expansion threshold, obtain storage space from the memory for use by the level 2 cache, thereby expanding the storage space of the level 2 cache.
  • The storage device includes storage media of a plurality of specifications, one of which is the same as that of the level 2 cache; for example, both are SSD memories. In this case the storage space of the SSD memory can be transferred to the level 2 cache. The entire SSD disk can be made available, or a part of the SSD memory can be opened up as shared space, with space lent to the level 2 cache only from within that shared space.
  • the controller records and manages the storage space of each memory, and the controller also records and manages the storage space of the caches (e.g., the primary cache and the secondary cache), such as the addresses of various types of storage media in the memory and cache.
  • the storage medium address originally marked as belonging to the memory may be marked by the controller as the storage medium address belonging to the secondary cache.
  • An identifier can also be used to record which memory the storage space comes from; when the level 2 cache has too much free space, the storage space originally belonging to a memory can be preferentially returned to that memory.
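  • One possible way, assumed here rather than specified in the original, to tag borrowed storage space with its origin so that it can be returned preferentially:

        # Each extent used by the level 2 cache records its original owner, so
        # space borrowed from a memory can be handed back to that memory first.
        from dataclasses import dataclass

        @dataclass
        class Extent:
            start: int
            size: int
            owner: str   # "l2_cache" for native space, e.g. "memory_32" if borrowed

        def extents_to_return(extents):
            """Prefer borrowed extents when the cache has too much free space."""
            return [e for e in extents if e.owner != "l2_cache"]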
  • The hit rate describes the extent to which the level 2 cache is utilized, so that the level 2 cache is expanded when its utilization drops. The hit rate can therefore be expressed in a variety of ways besides a percentage; it can also be expressed in terms of storage size. For example, if, when obtaining data from the level 2 cache, an average of 100 MB (megabytes) per second of requested data cannot be found there, the hit rate is considered to have reached the capacity expansion threshold and the level 2 cache can be expanded.
  • The hit rate can also be expressed in terms of the number of read requests: for example, when more than 10 read requests per second to the level 2 cache are not successfully served, the expansion threshold is considered to be met and the storage space of the level 2 cache is expanded.
  • Step 13: Use the newly added storage space in the level 2 cache to cache data.
  • the relatively cold data can be transferred to the newly added storage space of the secondary cache. After the transferred data is cached in the secondary cache, it can be used by subsequent read requests to improve the hit rate of the secondary cache.
  • When "cache" is used as a verb, it means to store temporarily. The storage space occupied by the cached data can be overwritten by new data; after an instruction to cancel caching is received, the storage space occupied by the buffered data can be released.
  • the medium of the second level cache may be either volatile or non-volatile. It can be logically set to be volatile and free up storage space for caching new data.
  • Step 14: Send the data buffered in the level 2 cache to the next-level storage device, which is, for example, the memory.
  • the colder data can be transferred to the memory according to the elimination algorithm, and the relatively hot data is retained in the secondary cache.
  • The storage device is composed of a plurality of memories of different grades, and the data stored in them also differs in how hot or cold it is. Hot data is preferentially stored in the high-speed memories of the storage device, and cold data is preferentially stored in the low-speed memories. It is also possible to analyze the importance of the data, with more important data stored in more reliable memories.
  • The storage space occupied by the data can then be released, so that the level 2 cache can cache new data again. There are two ways to release the data: one is to actively erase the storage space occupied by the data; the other is to mark the storage space occupied by the data as writable again.
  • Step 15: When the hit rate rises to the volume reduction threshold, return the newly added storage space to the memory. Step 15 is an optional step.
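  • Step 15 can be sketched as the mirror image of the expansion step; the reduction threshold and names are assumptions, and the toy classes are those from the earlier sketches:

        REDUCTION_THRESHOLD = 0.95   # assumed; typically above the expansion threshold

        def check_and_shrink(cache, memory, borrowed_extents):
            """Return newly added storage space to the memory once the hit rate
            has recovered to the volume reduction threshold."""
            if cache.hit_rate() >= REDUCTION_THRESHOLD and borrowed_extents:
                extent = borrowed_extents.pop()
                cache.capacity_bytes -= extent.size
                memory.free_bytes += extent.size   # the space goes back to its owner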
  • the storage space originally belonging to the memory is shared for use in the cache.
  • Conversely, the cache's storage space may also be shared for use by the memory; third-party storage space, independent of both the storage device and the cache, may also be set up and used by the memory and the cache.
  • The embodiment of the present invention introduces a technique for sharing storage space between the cache and the memory, in which the cache is divided into two levels. The level 1 cache or the level 2 cache may each consist of only one level or be subdivided into more levels, with different levels having different read and write speeds. In this embodiment, flash memory of the same specification exists in both the cache and the memory; in other embodiments, storage space may also be shared between storage media of different specifications.
  • For example, the memory is composed of disks and the level 2 cache is composed of an SSD, and disk space of the memory is shared with the level 2 cache. As another example, the memory is composed of an SSD and disks while the level 1 cache is composed of SDRAM; the SSD or the disk in the memory shares storage space with the level 1 cache, and the controller uses the storage space shared with it, together with the original SDRAM, as the level 1 cache.
  • The storage space of the level 2 cache provided by the controller may be 0; that is, the controller does not provide a level 2 cache, and storage space of the storage device is directly provided as the level 2 cache, which simplifies the structure of the controller.
  • a direct connection channel may also be set between the level 1 cache and the memory. When the level 2 cache space is insufficient, the level 2 cache is bypassed by the direct connection channel, and the data exchanged between the level 1 cache and the memory is transmitted through the direct connection channel.
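  • The optional direct channel can be modelled as a simple fallback, as in this sketch; the helper names write() and free_bytes() are assumptions made for illustration:

        # If the level 2 cache has no room for the data being destaged from the
        # level 1 cache, bypass it and write straight to the memory over the
        # direct connection channel.
        def destage_from_l1(data, l2_cache, memory):
            if l2_cache.free_bytes() >= len(data):
                l2_cache.write(data)    # normal path: relay through the L2 cache
            else:
                memory.write(data)      # bypass path: direct channel to the memory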
  • The data caching device 4 includes a level 1 cache 41, a hit rate query module 42, an expansion module 43, a cache module 44, and a sending module 45.
  • the data buffer device 4 is connected to the storage device 3, and the storage device 3 includes a secondary cache 31 and a memory 32.
  • The level 1 cache 41 is used to cache data; it can read and write data faster than the level 2 cache 31 and the storage device 3.
  • the secondary cache 31 is connected to the primary cache 41 and the storage device 3.
  • Write data between the level 1 cache and the memory may be relayed: the data is first transferred from the level 1 cache to the level 2 cache, and then transferred from the level 2 cache to the disk. The level 2 cache 31 can read and write data faster than the memories in the storage device 3.
  • the hit rate query module 42 is used to query the hit rate of the second level cache 31.
  • The data carried in a write request passes through the controller and reaches the level 1 cache 41, and the level 1 cache 41 transfers relatively cold data to the level 2 cache 31 through the elimination algorithm.
  • the data in the secondary cache 31 is periodically phased out, and the colder data is transferred to the memory 32.
  • The memories are divided into different grades, and hotter data is stored in a memory with a faster read speed. The data stored in the level 1 cache 41 is the hottest, the data in the level 2 cache 31 comes next, then the data in the memory with a higher read speed, and the memory with the lowest read speed stores the coldest data.
  • Optionally, the level 2 cache 31 can be bypassed and the data requested by the host read directly from the memory 32 into the level 1 cache 41. For example, if, within a certain period of time, an average of 80 out of 100 read requests to the level 2 cache 31 find the requested data there and 20 do not, the hit rate is 80%.
  • the query operation of the hit ratio query module 42 may be performed periodically, or may be performed each time a write request is received.
  • The expansion module 43 is configured to determine whether the hit rate is lower than a preset expansion threshold and, if it is lower than the expansion threshold, to obtain storage space from the memory 32 of the storage device 3 for use by the level 2 cache 31, where the storage space obtained from the memory 32 is called the newly added storage space.
  • The memory 32 includes storage media of a plurality of specifications, one of which is the same as that of the level 2 cache 31, for example a solid state drive (SSD) memory using flash memory as its storage medium. In this case the storage space of the SSD memory can be transferred to the level 2 cache 31 for use. The entire SSD disk can be made available, or a part of the SSD memory can be opened up as shared space, with space lent to the level 2 cache 31 only from within that shared space.
  • The expansion module 43 records and manages the storage space of the memory 32, and also records and manages the storage space of the caches (including the level 1 cache 41 and the level 2 cache 31), for example by recording the addresses of the various types of storage media in the memory 32 and the caches. Data can be written to a storage medium according to the address provided by that storage medium.
  • the address originally marked as belonging to the memory 32 can be marked by the controller as belonging to the secondary cache 31 to effect the transfer of the storage space.
  • An identifier may record that the storage space came from the memory 32; when the level 2 cache 31 has too much free space, the storage space originally belonging to the memory 32 may be preferentially returned to the memory 32.
  • The hit rate describes the utilization of the level 2 cache 31, so that expansion of the level 2 cache 31 is initiated when its utilization drops to the preset expansion threshold.
  • When appropriate, for example when the hit rate rises to the volume reduction threshold, the expansion module 43 may return the newly added storage space to the memory 32.
  • the cache module 44 is configured to cache data from the first level cache 41 by using the newly added storage space in the second level cache 31.
  • the cache module 44 has a scheduling function, which can schedule the storage space in the secondary cache 31, and the cache module 44 itself can have no storage space.
  • the cache module 44 can also instruct the data in the level 1 cache 41 to be directly sent to the memory 32 without passing through the level 2 cache 31.
  • The level 2 cache 31 obtains the write data from the level 1 cache 41 and then caches it.
  • the medium of the second level cache 31 may be either volatile or non-volatile. It can be logically set to be volatile.
  • The device may receive a read request for the stored data, and the cache module 44 may obtain the stored data from the newly added storage space and send it to the level 1 cache.
  • the cache module 44 is further configured to: when the secondary cache fails to hit, obtain data from the memory to the primary cache through a direct connection channel.
  • The cache module 44 is further configured to send the data buffered in the newly added storage space of the level 2 cache 31 to the memory 32. This feature is optional.
  • the hot data can be preferentially stored in the high speed memory 32 in accordance with the degree of heat and cold of the data, and the cold data is preferentially stored in the low speed memory 32. It is also possible to analyze the importance of the data, and the more important data is stored in the memory 32 with higher reliability.
  • The storage space occupied by the data can then be released so that new data can be cached there next time, or it can be left unreleased so that the data remains available for subsequent reads. There are two ways to release the data: one is to actively erase the storage space occupied by the data; the other is to mark the storage space occupied by the data as writable again.
  • The data caching device 4 may alternatively not include the level 1 cache 41; that is, the data caching device 4 has only the functions of the hit rate query module 42, the expansion module 43, the cache module 44, and the sending module 45.
  • the data cache device 4 is software or hardware integrated in the controller, and the controller further includes a level 1 cache 41.
  • the storage space originally belonging to the memory 32 is shared for use in the secondary cache 31.
  • the storage space of the L2 cache 31 can also be shared with the memory 32.
  • Third-party storage space independent of the storage 32 and the L2 cache 31 can also be set.
  • the embodiment of the present invention introduces a technique for sharing storage space between a cache and a memory 32.
  • the cache is divided into two levels, and the memory 32 is also composed of different memories.
  • the controller manages the L2 cache 31 and the memory 32 in a similar manner.
  • The controller manages the level 1 cache 41 in a different manner; therefore, it is easier to transfer storage space of the memory 32 to the level 2 cache 31 than to transfer it to the level 1 cache 41.
  • Aspects of the invention, or possible implementations of various aspects, may be embodied as a system, a method, or a computer program product.
  • a storage medium is provided for storing the program product of the above method embodiment.
  • Aspects of the invention, or possible implementations of various aspects, may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, and so on), or an embodiment combining software and hardware aspects, collectively referred to herein as "circuits," "modules," or "systems."
  • Aspects of the invention, or possible implementations of various aspects, may also take the form of a computer program product, which refers to computer-readable program code stored in a computer-readable medium.
  • the computer readable medium can be a computer readable signal medium or a computer readable storage medium.
  • A computer-readable storage medium includes, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing, such as a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, or a portable read-only memory (CD-ROM).
  • A processor in a computer reads the computer-readable program code stored in the computer-readable medium, so that the processor can perform the functional actions specified in each step, or combination of steps, in the flowchart, and an apparatus implementing the functions specified in each block, or combination of blocks, of the block diagram is produced.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The present invention relates to a data caching method and device used in a controller. The controller is connected to a storage device; the controller includes a first-level cache; the storage device includes a second-level cache and a memory; and the second-level cache is used to relay the data exchanged between the first-level cache and the memory. The method comprises: querying the hit rate of the second-level cache and, if it is lower than a capacity expansion threshold, acquiring storage space from the memory for use by the second-level cache. The present invention can improve the hit rate of the second-level cache.
PCT/CN2013/091194 2013-12-31 2013-12-31 Procédé, dispositif et système de mise en cache de données WO2015100653A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201380002567.6A CN103858112A (zh) 2013-12-31 2013-12-31 一种数据缓存方法、装置及系统
PCT/CN2013/091194 WO2015100653A1 (fr) 2013-12-31 2013-12-31 Procédé, dispositif et système de mise en cache de données

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2013/091194 WO2015100653A1 (fr) 2013-12-31 2013-12-31 Procédé, dispositif et système de mise en cache de données

Publications (1)

Publication Number Publication Date
WO2015100653A1 true WO2015100653A1 (fr) 2015-07-09

Family

ID=50864335

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/091194 WO2015100653A1 (fr) 2013-12-31 2013-12-31 Procédé, dispositif et système de mise en cache de données

Country Status (2)

Country Link
CN (1) CN103858112A (fr)
WO (1) WO2015100653A1 (fr)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104298471A (zh) * 2014-09-16 2015-01-21 青岛海信信芯科技有限公司 一种高速缓存的数据写入方法及装置
CN105630698A (zh) * 2014-10-28 2016-06-01 华为技术有限公司 配置扩展缓存的方法、装置及扩展缓存
CN105849707B (zh) * 2014-11-28 2019-12-17 华为技术有限公司 一种多级缓存的功耗控制方法、装置及设备
KR20170008339A (ko) * 2015-07-13 2017-01-24 에스케이하이닉스 주식회사 메모리 시스템 및 메모리 시스템의 동작 방법
CN105138292A (zh) * 2015-09-07 2015-12-09 四川神琥科技有限公司 磁盘数据读取方法
CN105183394B (zh) * 2015-09-21 2018-09-04 北京奇虎科技有限公司 一种数据存储处理方法和装置
CN106789431B (zh) 2016-12-26 2019-12-06 中国银联股份有限公司 一种超时监控方法及装置
KR20190023433A (ko) * 2017-08-29 2019-03-08 에스케이하이닉스 주식회사 메모리 시스템 및 그것의 동작 방법
CN108287793B (zh) * 2018-01-09 2020-12-25 网宿科技股份有限公司 响应消息的缓冲方法及服务器
CN112860599B (zh) * 2019-11-28 2024-02-02 中国电信股份有限公司 数据缓存处理方法、装置以及存储介质
CN112199383A (zh) * 2020-10-19 2021-01-08 珠海金山网络游戏科技有限公司 数据更新方法及装置

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080028150A1 (en) * 2006-07-28 2008-01-31 Farnaz Toussi Autonomic Mode Switching for L2 Cache Speculative Accesses Based on L1 Cache Hit Rate
CN101196852A (zh) * 2008-01-03 2008-06-11 杭州华三通信技术有限公司 分布式缓存方法及其系统、以及缓存设备和非缓存设备
US7519775B2 (en) * 2006-02-23 2009-04-14 Sun Microsystems, Inc. Enforcing memory-reference ordering requirements at the L2 cache level
CN101493795A (zh) * 2008-01-24 2009-07-29 杭州华三通信技术有限公司 存储系统和存储控制器以及存储系统中的缓存实现方法
CN101510176A (zh) * 2009-03-26 2009-08-19 浙江大学 通用操作系统对cpu二级缓存访问的控制方法
CN102231137A (zh) * 2011-05-26 2011-11-02 浪潮(北京)电子信息产业有限公司 一种数据存储系统及方法
CN103383666A (zh) * 2013-07-16 2013-11-06 中国科学院计算技术研究所 改善缓存预取数据局部性的方法和系统及缓存访问方法

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100505762C (zh) * 2006-04-19 2009-06-24 华中科技大学 适用于对象网络存储的分布式多级缓存系统

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7519775B2 (en) * 2006-02-23 2009-04-14 Sun Microsystems, Inc. Enforcing memory-reference ordering requirements at the L2 cache level
US20080028150A1 (en) * 2006-07-28 2008-01-31 Farnaz Toussi Autonomic Mode Switching for L2 Cache Speculative Accesses Based on L1 Cache Hit Rate
CN101196852A (zh) * 2008-01-03 2008-06-11 杭州华三通信技术有限公司 分布式缓存方法及其系统、以及缓存设备和非缓存设备
CN101493795A (zh) * 2008-01-24 2009-07-29 杭州华三通信技术有限公司 存储系统和存储控制器以及存储系统中的缓存实现方法
CN101510176A (zh) * 2009-03-26 2009-08-19 浙江大学 通用操作系统对cpu二级缓存访问的控制方法
CN102231137A (zh) * 2011-05-26 2011-11-02 浪潮(北京)电子信息产业有限公司 一种数据存储系统及方法
CN103383666A (zh) * 2013-07-16 2013-11-06 中国科学院计算技术研究所 改善缓存预取数据局部性的方法和系统及缓存访问方法

Also Published As

Publication number Publication date
CN103858112A (zh) 2014-06-11

Similar Documents

Publication Publication Date Title
WO2015100653A1 (fr) Procédé, dispositif et système de mise en cache de données
US9229653B2 (en) Write spike performance enhancement in hybrid storage systems
US9612964B2 (en) Multi-tier file storage management using file access and cache profile information
US7979631B2 (en) Method of prefetching data in hard disk drive, recording medium including program to execute the method, and apparatus to perform the method
US8751725B1 (en) Hybrid storage aggregate
US8392670B2 (en) Performance management of access to flash memory in a storage device
US20130080679A1 (en) System and method for optimizing thermal management for a storage controller cache
WO2017025039A1 (fr) Procédé et dispositif d'accès aux données orientés mémoire flash
US9047068B2 (en) Information handling system storage device management information access
WO2017148242A1 (fr) Procédé d'accès à un disque dur d'enregistrement magnétique à pistes superposées (smr), et serveur
WO2014209234A1 (fr) Procédé et appareil de gestion dynamique optimisée de région de données sensibles
WO2021036689A1 (fr) Procédé et dispositif de gestion d'espace de mémoire cache
KR20130112210A (ko) 메모리 시스템 및 그것의 페이지 교체 방법
WO2023035646A1 (fr) Procédé et appareil d'extension de mémoire et dispositif associé
JP2012533781A (ja) 計算機システム及びその負荷均等化制御方法
US20200133543A1 (en) Locality-aware, memory-efficient, time-efficient hot data identification using count-min-sketch for flash or streaming applications
TWI584120B (zh) 用於動態調適快取的方法及系統
US10282291B2 (en) Storage system with data management mechanism and method of operation thereof
KR20210008826A (ko) 논리 블록 어드레싱 범위 충돌 크롤러
US11474750B2 (en) Storage control apparatus and storage medium
JP6721821B2 (ja) ストレージ制御装置、ストレージ制御方法およびストレージ制御プログラム
US9542326B1 (en) Managing tiering in cache-based systems
US11016692B2 (en) Dynamically switching between memory copy and memory mapping to optimize I/O performance
CN115268763A (zh) 一种缓存管理方法、装置及设备
US20060143378A1 (en) Information processing apparatus and control method for this information processing apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13900664

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13900664

Country of ref document: EP

Kind code of ref document: A1