WO2012089144A1 - Cache allocation method and apparatus - Google Patents

Cache allocation method and apparatus

Info

Publication number
WO2012089144A1
Authority
WO
WIPO (PCT)
Prior art keywords
resource pool
virtual sub
capacity
data
cache
Prior art date
Application number
PCT/CN2011/084927
Other languages
English (en)
French (fr)
Inventor
肖飞
林宇
Original Assignee
成都市华为赛门铁克科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 成都市华为赛门铁克科技有限公司 filed Critical 成都市华为赛门铁克科技有限公司
Publication of WO2012089144A1 publication Critical patent/WO2012089144A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches

Definitions

  • the present invention relates to the field of communications, and in particular, to a buffer allocation method and apparatus.
  • a solid state drive (SSD, Solid State Disk or Solid State Drive), also known as an electronic hard drive or solid state electronic disk, has no rotating medium as an ordinary hard drive does; solid state drives are therefore extremely shock resistant and have a wide operating temperature range (-40°C to 85°C), and are currently widely used in military, automotive, industrial control, video surveillance, network monitoring, network terminal, power, medical, aviation and navigation equipment.
  • SSD Cache is a new type of application in which SSDs are used in a storage system as a second-level cache; it mainly exploits the short read/write response time of SSDs, especially the very short read response time: hotspot data is stored on the SSD so that, when the data is accessed, it can be read from the SSD instead of from a traditional disk, which can greatly improve the performance of the system.
  • an SSD Cache resource pool is made up of 1-4 SSD disks, and the SSD disks can generally be used by only one controller of the storage system; when that controller fails, the hotspot data stored in the pool is lost, which affects the overall performance of the system.
  • in the prior art, an SSD Cache resource pool composed of SSD disks can be used by the controllers at both ends of the storage system; even if one controller of the system fails, the other controller takes over its services without affecting the overall performance of the system.
  • the embodiments of the invention provide a cache allocation method and device, which can prevent two or more LUNs from simultaneously accessing data in the cache resource pool, thereby avoiding the complex communication negotiation process between the controllers at both ends and ensuring data security.
  • the cache resource pool is pre-divided into virtual sub-resource pools equal in number to the logical units, each virtual sub-resource pool corresponds to a different logical unit, and the cache resources included in each virtual sub-resource pool store the service data of the corresponding logical unit.
  • a determining unit, configured to determine the logical unit in which the acquired service data needs to be stored; a searching unit, configured to find the virtual sub-resource pool corresponding to the logical unit; a storage unit, configured to store the service data in the cache resources included in the found virtual sub-resource pool; and a dividing unit, by which the cache resource pool is pre-divided into virtual sub-resource pools equal in number to the logical units.
  • the cache resource pool is divided into an equal number of virtual sub-resource pools according to the number of LUNs, each virtual sub-resource pool corresponds to a different logical unit (LUN), and each virtual sub-resource pool is used only by its corresponding LUN to access cached data. Therefore, two or more LUNs from the controllers at both ends are prevented from accessing the same cache data at the same time, which avoids the complex communication negotiation process between the two controllers for accessing the same cache resource data and ensures data security.
  • FIG. 1 is a schematic diagram of an embodiment of a cache allocation method according to an embodiment of the present invention.
  • FIG. 2 is a schematic structural diagram of a cache system in a cache allocation process according to an embodiment of the present invention
  • FIG. 3 is a schematic diagram of another embodiment of a cache allocation method according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of an embodiment of a cache allocating device according to an embodiment of the present invention.
  • the embodiments of the invention provide a cache allocation method and device, which allow the cache resource pool to be used evenly by the controllers at both ends of the storage system; even if one controller of the system fails, the other controller takes over its services, and dual-controller operation improves the overall performance of the system, as described in detail below.
  • referring to FIG. 1, an embodiment of the cache allocation method in the embodiment of the present invention includes: 101. Determine the logical unit in which the obtained service data needs to be stored;
  • each type of service data needs to be stored in a logical unit (LUN) of the system. Each LUN is unique, although the service data types of different LUNs may be the same. In the embodiment, the LUN corresponding to the obtained service data is therefore determined first.
  • 102. Find the virtual sub-resource pool corresponding to the logical unit. The SSD Cache resource is pre-divided into virtual sub-resource pools equal in number to the logical units, and each virtual sub-resource pool corresponds to a different logical unit; that is, the data of each virtual sub-resource pool is accessed by only one LUN, and the cache resources included in each virtual sub-resource pool store the service data of the corresponding logical unit. Each LUN accesses data in its corresponding virtual sub-resource pool independently of the other LUNs, but the data of each virtual sub-resource pool has an opportunity to be accessed by any LUN.
  • the initial capacities of the SSD Cache virtual sub-resource pools may be the same or different, but each virtual sub-resource pool can only use the capacity allocated to it.
  • 103. Store the service data in a cache resource included in the found virtual sub-resource pool. After the virtual sub-resource pool corresponding to the LUN is found in step 102, the service data is correspondingly stored in the cache resources included in the different virtual sub-resource pools belonging to the different LUNs.
  • in the embodiment, the logical unit in which the service data is to be stored is determined, the virtual sub-resource pool corresponding to that logical unit is found, and the service data is stored in it. Because the SSD Cache resource is pre-divided into virtual sub-resource pools equal in number to the logical units and each virtual sub-resource pool corresponds to a different logical unit, multiple LUNs from the controllers at both ends are prevented from simultaneously accessing the same cache data, which avoids the complex communication negotiation process between the two controllers for accessing the same cache resource data.
  • the cache system has two controllers, 201 is the first controller, 202 is the second controller, and 203 is the service layer of the cache system.
  • LUN0, LUN1, and LUN2 are services of the service layer, where LUN0 and LUN1 are controlled by the first controller 201, LUN2 is controlled by the second controller 202, and 204 is the resource layer of the cache system, where 208 is the SSD cache resource layer.
  • according to the number of different LUN services, the SSD Cache resource pool is divided into a corresponding number of SSD Cache virtual sub-resource pools, specifically the first virtual sub-resource pool 205, the second virtual sub-resource pool 206 and the third virtual sub-resource pool 207; each virtual sub-resource pool corresponds to its own LUN service: the first virtual sub-resource pool 205 corresponds to LUN0, the second virtual sub-resource pool 206 corresponds to LUN1, and the third virtual sub-resource pool 207 corresponds to LUN2.
  • for ease of understanding, the cache allocation method in the embodiment of the present invention is described in detail below with another embodiment. Referring to FIG. 3, another embodiment of the cache allocation method in the embodiment of the present invention includes:
  • 301-303. For steps 301 to 303, refer to steps 101 to 103 in the embodiment shown in FIG. 1; details are not repeated here.
  • 304. When the preset duration is reached, obtain the access heat value of the data stored in the divided virtual sub-resource pools. Within the SSD Cache resource pool system, an adjustment thread may be set up with a preset duration; each time the preset duration is reached, the access heat value of the data stored in the virtual sub-resource pools divided in step 101 is obtained. The setting of the duration depends on the actual application, and its specific value is not limited here. The access heat value reflects the access frequency of the data stored in the virtual sub-resource pool and the amount of hotspot data stored in it: the higher the access frequency and the more hotspot data, the higher the access heat value. Both the access frequency and the amount of hotspot data can be counted by counters in the system, which is well known to those skilled in the art and is not described here.
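As a purely illustrative aid, the sketch below shows one way a per-pool access heat value could be computed from such counters. The patent only states that a higher access frequency and more hotspot items mean a higher heat value; the linear combination and the weight used here are assumptions, not the patented formula.

```python
def access_heat_value(access_count, hotspot_item_count, hotspot_weight=10):
    # Assumed placeholder: heat grows with both counters, as the text
    # requires, but the exact combination is not specified by the patent.
    return access_count + hotspot_weight * hotspot_item_count
```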
  • 305. Adjust the capacity of the virtual sub-resource pool to a capacity that matches the current stored-data access heat value. According to the preset matching relationship between access heat values and virtual sub-resource pool capacities, the capacity of the virtual sub-resource pool is adjusted to a capacity that matches the current stored-data access heat value.
  • the access heat value reflects, to a certain extent, how frequently the stored data is accessed. Generally, the more frequently data is accessed, the more cache resources are needed; the scheme can therefore be set up so that a large access heat value corresponds to a large virtual sub-resource pool capacity, and a small access heat value corresponds to a small capacity.
  • the specific setting process is related to the actual application process, which is not limited herein.
  • the correspondence between the access heat value of the data stored in a virtual sub-resource pool and the capacity of that pool may be preset in the system. In practice, the correspondence is generally not set between specific values but between two ranges of values; for example, when the access heat value is 50 to 100, the corresponding virtual sub-resource pool capacity is 30 to 60 megabytes. Then, if a virtual sub-resource pool has a capacity of 40 megabytes and the current stored-data access heat value is 60, the capacity of the virtual sub-resource pool matches the current stored-data access heat value.
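A small sketch of such a range-based matching table is shown below. The 50-100 heat band mapping to 30-60 MB mirrors the example in the text; the other rows, the table structure and the function names are invented for illustration.

```python
# (heat_min, heat_max) -> (capacity_min_mb, capacity_max_mb).
HEAT_TO_CAPACITY = [
    ((0, 50), (10, 30)),      # assumed row
    ((50, 100), (30, 60)),    # row taken from the example in the text
    ((100, 200), (60, 120)),  # assumed row
]

def matching_capacity_range(heat_value):
    for (lo, hi), capacity_range in HEAT_TO_CAPACITY:
        if lo <= heat_value < hi:
            return capacity_range
    return HEAT_TO_CAPACITY[-1][1]

def capacity_matches(capacity_mb, heat_value):
    cap_lo, cap_hi = matching_capacity_range(heat_value)
    return cap_lo <= capacity_mb <= cap_hi

# With heat value 60, a 40 MB sub-pool falls inside the 30-60 MB band.
assert capacity_matches(40, 60)
```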
  • specifically, if, according to the preset matching relationship between access heat values and virtual sub-resource pool capacities, the capacity of a virtual sub-resource pool is higher than the capacity matching the current stored-data access heat value, then from the point of view of the access heat value this indicates that the current access heat value is low, the data is accessed infrequently, and the current virtual sub-resource pool capacity is too large and does not match the stored-data access heat value. To save cache resources, the capacity of that virtual sub-resource pool is reduced, and the capacity freed by the adjustment can be used by other virtual sub-resource pools that need to increase their capacity.
  • conversely, if the capacity of a virtual sub-resource pool is lower than the capacity matching the current stored-data access heat value, this indicates that the current access heat value is high, the data is accessed frequently, and the current virtual sub-resource pool capacity is too small and may not provide enough capacity for the data to be stored; the capacity of that virtual sub-resource pool is therefore increased, drawing on the capacity that was freed when an over-provisioned virtual sub-resource pool was reduced.
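The sketch below illustrates one rebalancing pass under this rule: shrink pools whose capacity is above their matching band, then grow pools below their band, handing out no more than the capacity that was freed. The function and parameter names, and the choice of the band edges as targets, are assumptions.

```python
def rebalance(sub_pools, target_bands):
    """sub_pools: {lun: capacity_mb}; target_bands: {lun: (min_mb, max_mb)},
    the capacity range matching each pool's current access heat value."""
    freed_mb = 0
    # First pass: shrink over-provisioned pools and collect the spare capacity.
    for lun, capacity in sub_pools.items():
        low, high = target_bands[lun]
        if capacity > high:
            freed_mb += capacity - high
            sub_pools[lun] = high
    # Second pass: grow under-provisioned pools, never granting more than
    # was freed (the increase may not exceed the decrease).
    for lun, capacity in sub_pools.items():
        low, high = target_bands[lun]
        if capacity < low and freed_mb > 0:
            grant = min(low - capacity, freed_mb)
            sub_pools[lun] += grant
            freed_mb -= grant
    return sub_pools

# LUN0 is over-provisioned for its heat band, LUN2 is under-provisioned.
pools = {"LUN0": 80, "LUN1": 40, "LUN2": 20}
bands = {"LUN0": (30, 60), "LUN1": (30, 60), "LUN2": (30, 60)}
print(rebalance(pools, bands))  # {'LUN0': 60, 'LUN1': 40, 'LUN2': 30}
```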
  • 306. Delete the non-hotspot data in the virtual sub-resource pool. Since each virtual sub-resource pool can only use the cache resources allocated to it, if the capacity currently allocated to a virtual sub-resource pool is lower than the capacity matching the current stored-data access heat value and the virtual sub-resource pool has no capacity left for storing service data, more free capacity can be obtained by deleting the non-hotspot data in that virtual sub-resource pool.
  • specifically, when a virtual sub-resource pool has no capacity left for storing service data, the data in the virtual sub-resource pool may be sorted by access frequency, either from high to low or from low to high, and the one or more data items at the least-accessed end of the ordering are then deleted; the number of data items actually deleted depends on the practical application and is not specifically limited here.
  • the frequency of accessing the data in the virtual sub-resource pool can be counted by the counters in the system, which is well known to those skilled in the art, and details are not described herein.
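A minimal sketch of this eviction step is given below: when a sub-pool has no room left, sort its entries by access count and delete from the least-accessed end. The dictionary fields and the default of deleting a single item are assumptions.

```python
def evict_non_hotspot(pool, items_to_delete=1):
    """pool is an assumed dict with 'data' {key: value}, 'access_count'
    {key: hits}, 'sizes' {key: mb} and 'used_mb' tracking occupancy."""
    # Order keys from least to most frequently accessed, then delete the
    # least-accessed (non-hotspot) entries to free capacity.
    by_frequency = sorted(pool["data"], key=lambda k: pool["access_count"][k])
    for key in by_frequency[:items_to_delete]:
        pool["used_mb"] -= pool["sizes"].pop(key)
        del pool["data"][key]
        del pool["access_count"][key]
```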
  • in the embodiment, the following initial configuration of the cache resource pool is also required: the SSD Cache resource pool is divided into multiple data blocks according to the type of service. The services corresponding to the LUNs are of multiple types, and different types of service data need different cache capacities: generally, the hotspot data generated by web-request services needs only a small cache capacity, while the hotspot data generated by video requests or data-block services needs a large cache capacity. Accordingly, the cache resource pool is divided, or the capacity of the virtual sub-resource pool corresponding to a logical unit is adjusted, according to the capacity needed by the service data, so that services whose stored data needs a large cache capacity are allocated a large cache capacity, and services whose stored data needs only a small cache capacity are allocated a small one.
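An illustrative initial split by service type is sketched below. The text only states that web-request services need less cache than video or data-block services; the specific weights, type names and function names here are assumptions.

```python
# Assumed relative weights for the cache needed by each service type.
TYPE_WEIGHT = {"web": 1, "video": 4, "block": 4}

def initial_split(total_capacity_mb, lun_types):
    """lun_types: {lun: service_type} -> {lun: capacity_mb}."""
    total_weight = sum(TYPE_WEIGHT[t] for t in lun_types.values())
    return {
        lun: total_capacity_mb * TYPE_WEIGHT[t] // total_weight
        for lun, t in lun_types.items()
    }

# A video or block LUN receives four times the share of a web LUN.
print(initial_split(900, {"LUN0": "web", "LUN1": "video", "LUN2": "block"}))
# -> {'LUN0': 100, 'LUN1': 400, 'LUN2': 400}
```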
  • in the embodiment, when the preset duration is reached, the capacity of the virtual sub-resource pool is adjusted, according to the preset matching relationship between access heat values and virtual sub-resource pool capacities, to a capacity that matches the current stored-data access heat value; by continuously and dynamically adjusting the capacity of each virtual sub-resource pool, the capacity allocation of the virtual sub-resource pools better fits the practical application and the cache resource pool is used more reasonably.
  • referring to FIG. 4, an embodiment of the cache allocation apparatus in the embodiment of the present invention includes: a determining unit 401, configured to determine the logical unit in which the acquired service data needs to be stored, and to determine the service data type corresponding to the logical unit;
  • the searching unit 402 is configured to search for a virtual sub-resource pool corresponding to the logical unit;
  • a storage unit 403, configured to store the service data in a cache resource included in the found virtual sub-resource pool
  • a dividing unit 404, by which the cache resource pool is pre-divided into virtual sub-resource pools equal in number to the logical units.
  • the cache allocation apparatus in this embodiment may further include: an obtaining unit 405, configured to acquire an access heat value of the data stored in the divided virtual sub-resource pool when the preset duration is reached;
  • an adjusting unit 406, configured to adjust the capacity of the virtual sub-resource pool to a capacity that matches the current stored-data access heat value according to the preset matching relationship between access heat values and virtual sub-resource pool capacities, and further configured to adjust the capacity of the virtual sub-resource pool corresponding to the logical unit according to the service data type;
  • a sorting unit 407, configured to sort the data in the virtual sub-resource pool by access frequency when the virtual sub-resource pool has no capacity left for service data storage;
  • the deleting unit 408 is configured to delete the data with the lowest access frequency.
  • the adjusting unit 406 in this embodiment may further include: a first adjusting unit 4061, configured to reduce the capacity of the virtual sub-resource pool if its capacity is higher than the capacity matching the current stored-data access heat value;
  • a second adjusting unit 4062, configured to increase the capacity of the virtual sub-resource pool if its capacity is lower than the capacity matching the current stored-data access heat value, the increased capacity of the virtual sub-resource pool being no greater than the reduced capacity of the virtual sub-resource pool.
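For orientation only, the skeleton below shows how the apparatus's units might map onto code; the responsibilities follow the description above, but every name, the dictionary layout and the composition style are assumptions rather than the patented implementation. The obtaining and adjusting units (405, and 406 with 4061/4062) would correspond to the heat-value and rebalancing sketches shown earlier.

```python
class CacheAllocationApparatus:
    """Illustrative skeleton mirroring units 401-404 and 407-408."""

    def __init__(self, sub_pools):
        # dividing unit (404): sub_pools is the pre-divided {lun: pool} map
        self.sub_pools = sub_pools

    # determining unit (401): which LUN the service data belongs to
    def determine_lun(self, data):
        return data["lun"]

    # searching unit (402): the virtual sub-resource pool for that LUN
    def find_sub_pool(self, lun):
        return self.sub_pools[lun]

    # storage unit (403): place the data in the found sub-pool
    def store(self, data):
        pool = self.find_sub_pool(self.determine_lun(data))
        pool["data"][data["key"]] = data["value"]

    # sorting unit (407) + deleting unit (408): evict least-accessed entries
    def evict(self, lun, items_to_delete=1):
        pool = self.sub_pools[lun]
        for key in sorted(pool["data"], key=pool["hits"].get)[:items_to_delete]:
            del pool["data"][key]
            del pool["hits"][key]
```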
  • in the embodiment, the determining unit 401 determines the service data type corresponding to the logical unit and determines the logical unit (LUN) in which the acquired service data needs to be stored, and the searching unit 402 searches for the virtual sub-resource pool corresponding to that logical unit.
  • the SSD Cache resource is pre-divided by the dividing unit 404 into virtual sub-resource pools equal in number to the logical units, and each virtual sub-resource pool corresponds to a different logical unit; that is, the data of each virtual sub-resource pool is accessed by only one LUN, and the cache resources included in each virtual sub-resource pool store the service data of the corresponding logical unit. Each LUN accesses the data in its corresponding virtual sub-resource pool independently of the other LUNs, but the data of each virtual sub-resource pool has an opportunity to be accessed by any LUN.
  • the storage unit 403 stores the service data in the cache resources included in the found virtual sub-resource pool.
  • For the storage process refer to the related content described in step 103 in the foregoing embodiment of the present invention, and details are not described herein.
  • when the preset duration is reached, the obtaining unit 405 obtains the access heat value of the data stored in the divided virtual sub-resource pools. Within the SSD Cache resource pool system, an adjustment thread may be set up with a preset duration; each time the preset duration is reached, the access heat value of the data stored in the divided virtual sub-resource pools is obtained. The setting of the duration depends on the actual application, and its specific value is not limited here. The access heat value reflects the access frequency of the data stored in the virtual sub-resource pool and the amount of hotspot data stored in it: the higher the access frequency and the more hotspot data, the higher the access heat value. Both the access frequency and the amount of hotspot data can be counted by counters inside the system, which is well known to those skilled in the art and is not described in detail here.
  • according to the preset matching relationship between access heat values and virtual sub-resource pool capacities, the adjusting unit 406 adjusts the capacity of the virtual sub-resource pool to a capacity that matches the current stored-data access heat value. Specifically, if the capacity of the virtual sub-resource pool is higher than the capacity matching the current stored-data access heat value, the first adjusting unit 4061 reduces the capacity of that virtual sub-resource pool; if the capacity of the virtual sub-resource pool is lower than the capacity matching the current stored-data access heat value, the second adjusting unit 4062 increases the capacity of that virtual sub-resource pool, the increased capacity being no greater than the reduced capacity. For the specific adjustment process, refer to step 305 in the embodiment shown in FIG. 3; details are not repeated here.
  • since each virtual sub-resource pool can only use the cache resources allocated to it, if the capacity currently allocated to the virtual sub-resource pool is lower than the capacity matching the current stored-data access heat value and the virtual sub-resource pool has no capacity left for storing service data, the deleting unit 408 deletes the non-hotspot data in that virtual sub-resource pool to obtain more free capacity.
  • specifically, the sorting unit 407 sorts the data in the virtual sub-resource pool by access frequency, either from high to low or from low to high, and the deleting unit 408 then deletes the one or more data items at the least-accessed end of the ordering.
  • the number of the specifically deleted data is related to the actual application process, and is not specifically limited herein.
  • the frequency of accessing the data in the virtual sub-resource pool can be counted by the counter in the system, which is known to those skilled in the art, and details are not described herein.
  • in the embodiment, the SSD Cache resource is pre-divided into virtual sub-resource pools equal in number to the logical units, and each virtual sub-resource pool corresponds to a different logical unit; that is, the data of each virtual sub-resource pool is accessed by only one LUN, and the cache resources included in each virtual sub-resource pool store the service data of the corresponding logical unit. Because each virtual sub-resource pool corresponds to a different logical unit, multiple LUNs from the controllers at both ends are prevented from accessing the same cache data at the same time, which avoids the complicated communication negotiation process between the two controllers for accessing the same cache resource data. In addition, according to the preset matching relationship between access heat values and virtual sub-resource pool capacities, the adjusting unit 406 adjusts the capacity of the virtual sub-resource pool to a capacity that matches the current stored-data access heat value, continuously and dynamically adjusting the capacity of each virtual sub-resource pool so that the capacity allocation of the virtual sub-resource pools better fits the practical application.
  • the cache resources in the foregoing embodiments take SSD Cache resources as an example; it can be understood that the scheme may also be applied to other storage resources of the same type. The specific type of cache resource depends on the actual application and is not limited here.
  • a person of ordinary skill in the art can understand that all or part of the steps of the above embodiments can be implemented by a program instructing the related hardware; the program can be stored in a computer-readable storage medium, and the storage medium can be a read-only memory, a magnetic disk, an optical disc or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Description

Cache Allocation Method and Apparatus. This application claims priority to Chinese Patent Application No. 201010616145.6, filed with the Chinese Patent Office on December 30, 2010 and entitled "Cache Allocation Method and Apparatus", which is incorporated herein by reference in its entirety.

Technical Field

The present invention relates to the field of communications, and in particular to a cache allocation method and apparatus.

Background

A solid state drive (SSD, Solid State Disk or Solid State Drive), also known as an electronic hard drive or solid state electronic disk, has no rotating medium as an ordinary hard drive does; it is therefore extremely shock resistant, and its chips have a wide operating temperature range (-40°C to 85°C). SSDs are currently widely used in military, automotive, industrial control, video surveillance, network monitoring, network terminal, power, medical, aviation, navigation and other equipment. SSD Cache is a new type of application in which SSDs are used in a storage system as a second-level cache. It mainly exploits the short read/write response time of SSDs, especially the very short read response time: hotspot data is stored on the SSD, so that when this data is accessed it can be read from the SSD instead of from a traditional disk, which can greatly improve the performance of the system. An SSD Cache resource pool is made up of 1 to 4 SSD disks, and the SSD disks can generally be used by only one controller of the storage system; when that controller fails, the hotspot data stored in the pool is lost, which affects the overall performance of the system.

In the prior art, an SSD Cache resource pool composed of SSD disks can be used by the controllers at both ends of the storage system; even if one controller of the system fails, the other controller takes over its services without affecting the overall performance of the system.

In the prior art, however, if two or more logical units (LUN, Logical Unit Number) belonging to the two controllers need to access data in the same cache resource at the same time, the two controllers must negotiate over the conflicts caused by storing and reading the data. The negotiation process is rather complex, and an exception may occur and cause serious consequences such as data loss.
Summary of the Invention

The embodiments of the present invention provide a cache allocation method and apparatus, which can prevent two or more LUNs from accessing data in a cache resource pool at the same time, thereby avoiding the complex communication negotiation process between the controllers at both ends and ensuring data security.

A cache allocation method provided by an embodiment of the present invention includes:

determining the logical unit in which obtained service data needs to be stored; finding the virtual sub-resource pool corresponding to the logical unit; and storing the service data in the cache resources included in the found virtual sub-resource pool; where the cache resource pool is pre-divided into virtual sub-resource pools equal in number to the logical units, each virtual sub-resource pool corresponds to a different logical unit, and the cache resources included in each virtual sub-resource pool store the service data of the corresponding logical unit.

A cache allocation apparatus provided by an embodiment of the present invention includes:

a determining unit, configured to determine the logical unit in which obtained service data needs to be stored; a searching unit, configured to find the virtual sub-resource pool corresponding to the logical unit; a storage unit, configured to store the service data in the cache resources included in the found virtual sub-resource pool; and a dividing unit, by which the cache resource pool is pre-divided into virtual sub-resource pools equal in number to the logical units.

It can be seen from the above technical solutions that the embodiments of the present invention have the following advantage: the cache resource pool is divided into virtual sub-resource pools equal in number to the LUNs, each virtual sub-resource pool corresponds to a different logical unit (LUN), and each virtual sub-resource pool is used only by its corresponding LUN to access cached data. This prevents two or more LUNs from the controllers at both ends from accessing the same cache data at the same time, thereby avoiding the complex communication negotiation process between the two controllers for accessing the same cache resource data and ensuring data security.
Brief Description of the Drawings

FIG. 1 is a schematic diagram of an embodiment of a cache allocation method according to an embodiment of the present invention;

FIG. 2 is a schematic structural diagram of a cache system in a cache allocation process according to an embodiment of the present invention;

FIG. 3 is a schematic diagram of another embodiment of a cache allocation method according to an embodiment of the present invention;

FIG. 4 is a schematic diagram of an embodiment of a cache allocation apparatus according to an embodiment of the present invention.
Detailed Description of the Embodiments

The embodiments of the present invention provide a cache allocation method and apparatus, which allow the cache resource pool to be used evenly by the controllers at both ends of the storage system; even if one controller of the system fails, the other controller takes over its services, and the dual-controller operation improves the overall performance of the system. Detailed descriptions are given below.

Referring to FIG. 1, an embodiment of the cache allocation method in the embodiments of the present invention includes: 101. Determine the logical unit in which the obtained service data needs to be stored.

Each type of service data needs to be stored in a logical unit (LUN) of the system. Each LUN is unique, although the service data types of different LUNs may be the same.

In the embodiment of the present invention, the LUN corresponding to the obtained service data is determined first.

102. Find the virtual sub-resource pool corresponding to the logical unit.

In the embodiment of the present invention, the SSD Cache resource is pre-divided into virtual sub-resource pools equal in number to the logical units, and each virtual sub-resource pool corresponds to a different logical unit; that is, the data of each virtual sub-resource pool is accessed by only one LUN, and the cache resources included in each virtual sub-resource pool store the service data of the corresponding logical unit. Each LUN accesses the data in its corresponding virtual sub-resource pool independently of the other LUNs, but the data of each virtual sub-resource pool has an opportunity to be accessed by any LUN.

It should be noted that the initial capacities of the SSD Cache virtual sub-resource pools may be the same or different, but each virtual sub-resource pool can only use the capacity allocated to it.

103. Store the service data in the cache resources included in the found virtual sub-resource pool. After the virtual sub-resource pool corresponding to the LUN is found in step 102, the service data is correspondingly stored in the cache resources included in the different virtual sub-resource pools belonging to the different LUNs.

In the embodiment of the present invention, the logical unit in which the service data is to be stored is determined, the virtual sub-resource pool corresponding to that logical unit is found, and the service data is stored in it. Because the SSD Cache resource is pre-divided into virtual sub-resource pools equal in number to the logical units and each virtual sub-resource pool corresponds to a different logical unit, multiple LUNs from the controllers at both ends are prevented from accessing the same cache data at the same time, which avoids the complex communication negotiation process between the two controllers for accessing the same cache resource data.

FIG. 2 is a schematic structural diagram of the cache system in the cache allocation process in the embodiment of the present invention. The cache system has controllers at both ends: 201 is the first controller and 202 is the second controller. 203 is the service layer of the cache system, and LUN0, LUN1 and LUN2 are all services of the service layer, where LUN0 and LUN1 are controlled by the first controller 201 and LUN2 is controlled by the second controller 202. 204 is the resource layer of the cache system, in which 208 is the SSD cache resource layer composed of solid state drives. According to the number of different LUN services, the SSD Cache resource pool is divided into a corresponding number of SSD Cache virtual sub-resource pools, specifically the first virtual sub-resource pool 205, the second virtual sub-resource pool 206 and the third virtual sub-resource pool 207. Each virtual sub-resource pool corresponds to its own LUN service: as shown in the figure, the first virtual sub-resource pool 205 corresponds to LUN0, the second virtual sub-resource pool 206 corresponds to LUN1, and the third virtual sub-resource pool 207 corresponds to LUN2. For ease of understanding, the cache allocation method in the embodiments of the present invention is described in detail below with another embodiment. Referring to FIG. 3, another embodiment of the cache allocation method in the embodiments of the present invention includes:
301-303. For the content of steps 301 to 303 in this embodiment, refer to the content described in steps 101 to 103 of the embodiment shown in FIG. 1; details are not repeated here.

304. When the preset duration is reached, obtain the access heat value of the data stored in the divided virtual sub-resource pools. Within the SSD Cache resource pool system, an adjustment thread may be set up and a duration preset in advance; each time the preset duration is reached, the access heat value of the data stored in the virtual sub-resource pools divided in step 101 is obtained. The setting of the duration depends on the actual application, and its specific value is not limited here.

The access heat value reflects the access frequency of the data stored in a virtual sub-resource pool and the amount of hotspot data stored in it: the higher the access frequency and the more hotspot data, the higher the access heat value.

The access frequency of the stored data and the amount of hotspot data stored in the virtual sub-resource pool can both be counted by counters inside the system, which is well known to those skilled in the art and is not described in detail here.

305. Adjust the capacity of the virtual sub-resource pool to a capacity that matches the current stored-data access heat value. According to the preset matching relationship between access heat values and virtual sub-resource pool capacities, the capacity of the virtual sub-resource pool is adjusted to a capacity that matches the current stored-data access heat value. The access heat value reflects, to a certain extent, how frequently the stored data is accessed; generally, the more frequently data is accessed, the more cache resources are needed. The scheme can therefore be set up so that a large access heat value corresponds to a large virtual sub-resource pool capacity and a small access heat value corresponds to a small capacity; the specific setting depends on the actual application and is not limited here. Within the system, the correspondence between the access heat value of the data stored in a virtual sub-resource pool and the capacity of that pool may be preset. In practice, the correspondence is generally not set between specific values but between two ranges of values. For example, when the access heat value is 50 to 100, the corresponding virtual sub-resource pool capacity is 30 to 60 megabytes; then, if a virtual sub-resource pool has a capacity of 40 megabytes and the current stored-data access heat value is 60, it can be determined that the capacity of the virtual sub-resource pool matches the current stored-data access heat value.

Specifically, if, according to the preset matching relationship between access heat values and virtual sub-resource pool capacities, the capacity of a virtual sub-resource pool is found to be higher than the capacity matching the current stored-data access heat value, then from the point of view of the access heat value this indicates that the current access heat value is low, the data is accessed infrequently, and the current virtual sub-resource pool capacity is too large and does not match the access heat value. To save cache resources, the capacity of that virtual sub-resource pool is reduced, and the capacity freed by the adjustment can be used by other virtual sub-resource pools that need to increase their capacity.

If, according to the preset matching relationship between access heat values and virtual sub-resource pool capacities, the capacity of a virtual sub-resource pool is found to be lower than the capacity matching the current stored-data access heat value, this indicates that the current access heat value is high, the data is accessed frequently, and the current virtual sub-resource pool capacity is too small and may not provide enough capacity for the data to be stored; the capacity of that virtual sub-resource pool is therefore increased, drawing on the capacity that was reduced from a virtual sub-resource pool whose capacity was too high.

306. Delete the non-hotspot data in the virtual sub-resource pool.

Since each virtual sub-resource pool can only use the cache resources allocated to it, if the capacity currently allocated to a virtual sub-resource pool is lower than the capacity matching the current stored-data access heat value and the virtual sub-resource pool has no capacity left for storing service data, more free capacity can be obtained by deleting the non-hotspot data in that virtual sub-resource pool.

Specifically, when a virtual sub-resource pool has no capacity left for storing service data, the data in the pool may be sorted by access frequency, either from high to low or from low to high, and the one or more data items at the least-accessed end of the ordering are then deleted. The number of data items actually deleted depends on the actual application and is not specifically limited here.

It should be noted that the access frequency of the data in the virtual sub-resource pool can be counted by counters in the system, which is well known to those skilled in the art and is not described in detail here. In the embodiment of the present invention, the following initial configuration of the cache resource pool is also required: the SSD Cache resource pool is divided into multiple data blocks according to the type of service. Since the services corresponding to the LUNs are of multiple types and different types of service data need different cache capacities (generally, the hotspot data generated by web-request services needs a small cache capacity, while the hotspot data generated by video requests or data-block services needs a large cache capacity), the cache resource pool is divided, or the capacity of the virtual sub-resource pool corresponding to a logical unit is adjusted, according to the capacity needed by the service data, so that services whose stored data needs a large cache capacity are allocated a large cache capacity and services whose stored data needs only a small cache capacity are allocated a small one.

In the embodiment of the present invention, each time the preset duration is reached, the capacity of each virtual sub-resource pool is adjusted, according to the preset matching relationship between access heat values and virtual sub-resource pool capacities, to a capacity that matches the current stored-data access heat value. By continuously and dynamically adjusting the capacity of each virtual sub-resource pool, the capacity allocation of the virtual sub-resource pools better fits the practical application and the cache resource pool is used more reasonably.
An embodiment of the present invention further provides a cache allocation apparatus. Referring to FIG. 4, an embodiment of the cache allocation apparatus in the embodiments of the present invention includes: a determining unit 401, configured to determine the logical unit in which obtained service data needs to be stored, and to determine the service data type corresponding to the logical unit;

a searching unit 402, configured to find the virtual sub-resource pool corresponding to the logical unit;

a storage unit 403, configured to store the service data in the cache resources included in the found virtual sub-resource pool; and

a dividing unit 404, by which the cache resource pool is pre-divided into virtual sub-resource pools equal in number to the logical units.

The cache allocation apparatus in this embodiment may further include: an obtaining unit 405, configured to obtain, when the preset duration is reached, the access heat value of the data stored in the divided virtual sub-resource pools;

an adjusting unit 406, configured to adjust the capacity of the virtual sub-resource pool to a capacity that matches the current stored-data access heat value according to the preset matching relationship between access heat values and virtual sub-resource pool capacities, and further configured to adjust the capacity of the virtual sub-resource pool corresponding to a logical unit according to the service data type; a sorting unit 407, configured to sort the data in the virtual sub-resource pool by access frequency when the virtual sub-resource pool has no capacity left for service data storage; and

a deleting unit 408, configured to delete the data with the lowest access frequency.

It should be noted that the adjusting unit 406 in this embodiment may further include: a first adjusting unit 4061, configured to reduce the capacity of the virtual sub-resource pool if its capacity is higher than the capacity matching the current stored-data access heat value; and

a second adjusting unit 4062, configured to increase the capacity of the virtual sub-resource pool if its capacity is lower than the capacity matching the current stored-data access heat value, the increased capacity of the virtual sub-resource pool being no greater than the reduced capacity of the virtual sub-resource pool.
For ease of understanding, the relationships among the units of the cache allocation apparatus in this embodiment are described below with a specific application scenario.

In the embodiment of the present invention, the determining unit 401 determines the service data type corresponding to the logical unit and determines the logical unit (LUN) in which the obtained service data needs to be stored, and the searching unit 402 finds the virtual sub-resource pool corresponding to the logical unit.

It should be noted that the SSD Cache resource is pre-divided by the dividing unit 404 into virtual sub-resource pools equal in number to the logical units, and each virtual sub-resource pool corresponds to a different logical unit; that is, the data of each virtual sub-resource pool is accessed by only one LUN, and the cache resources included in each virtual sub-resource pool store the service data of the corresponding logical unit. Each LUN accesses the data in its corresponding virtual sub-resource pool independently of the other LUNs, but the data of each virtual sub-resource pool has an opportunity to be accessed by any LUN.

The storage unit 403 stores the service data in the cache resources included in the found virtual sub-resource pool. For the storage process, refer to the content described in step 103 of the embodiment shown in FIG. 1; details are not repeated here.

When the preset duration is reached, the obtaining unit 405 obtains the access heat value of the data stored in the divided virtual sub-resource pools. Within the SSD Cache resource pool system, an adjustment thread may be set up and a duration preset in advance; each time the preset duration is reached, the access heat value of the data stored in the divided virtual sub-resource pools is obtained. The setting of the duration depends on the actual application, and its specific value is not limited here.

The access heat value reflects the access frequency of the data stored in a virtual sub-resource pool and the amount of hotspot data stored in it: the higher the access frequency and the more hotspot data, the higher the access heat value. Both can be counted by counters inside the system, which is well known to those skilled in the art and is not described in detail here. According to the preset matching relationship between access heat values and virtual sub-resource pool capacities, the adjusting unit 406 adjusts the capacity of the virtual sub-resource pool to a capacity that matches the current stored-data access heat value. If the capacity of the virtual sub-resource pool is higher than the capacity matching the current stored-data access heat value, the first adjusting unit 4061 reduces the capacity of that virtual sub-resource pool; if the capacity of the virtual sub-resource pool is lower than the capacity matching the current stored-data access heat value, the second adjusting unit 4062 increases the capacity of that virtual sub-resource pool, the increased capacity being no greater than the reduced capacity. For the specific adjustment process, refer to the content described in step 305 of the embodiment shown in FIG. 3; details are not repeated here.

Since each virtual sub-resource pool can only use the cache resources allocated to it, if the capacity currently allocated to a virtual sub-resource pool is lower than the capacity matching the current stored-data access heat value and the virtual sub-resource pool has no capacity left for storing service data, the deleting unit 408 can delete the non-hotspot data in that virtual sub-resource pool to obtain more free capacity. Specifically, the sorting unit 407 sorts the data in the virtual sub-resource pool by access frequency, either from high to low or from low to high, and the deleting unit 408 then deletes the one or more data items at the least-accessed end of the ordering; the number of data items actually deleted depends on the actual application and is not specifically limited here.

It should be noted that the access frequency of the data in the virtual sub-resource pool can be counted by counters in the system, which is well known to those skilled in the art and is not described in detail here.

In the embodiment of the present invention, the SSD Cache resource is pre-divided by the dividing unit 404 into virtual sub-resource pools equal in number to the logical units, and each virtual sub-resource pool corresponds to a different logical unit; that is, the data of each virtual sub-resource pool is accessed by only one LUN, and the cache resources included in each virtual sub-resource pool store the service data of the corresponding logical unit. Because each virtual sub-resource pool corresponds to a different logical unit, multiple LUNs from the controllers at both ends are prevented from accessing the same cache data at the same time, which avoids the complex communication negotiation process between the two controllers for accessing the same cache resource data. In addition, according to the preset matching relationship between access heat values and virtual sub-resource pool capacities, the adjusting unit 406 adjusts the capacity of each virtual sub-resource pool to a capacity that matches the current stored-data access heat value, continuously and dynamically adjusting the capacity of each virtual sub-resource pool so that the capacity allocation better fits the practical application.

The cache resources in the above embodiments take SSD Cache resources as an example; it can be understood that the scheme may also be applied to other storage resources of the same type. The specific type of cache resource depends on the actual application and is not limited here. A person of ordinary skill in the art can understand that all or part of the steps of the methods in the above embodiments can be implemented by a program instructing the related hardware; the program can be stored in a computer-readable storage medium, and the storage medium can be a read-only memory, a magnetic disk, an optical disc or the like.

The cache allocation method and apparatus provided by the present invention have been described in detail above. A person of ordinary skill in the art may make changes to the specific implementations and the application scope according to the ideas of the embodiments of the present invention; in summary, the content of this specification should not be construed as a limitation on the present invention.

Claims

1. A cache allocation method, comprising:

determining the logical unit in which obtained service data needs to be stored;

finding the virtual sub-resource pool corresponding to the logical unit; and

storing the service data in the cache resources included in the found virtual sub-resource pool; wherein the cache resource pool is pre-divided into virtual sub-resource pools equal in number to the logical units, each virtual sub-resource pool corresponds to a different logical unit, and the cache resources included in each virtual sub-resource pool store the service data of the corresponding logical unit.

2. The method according to claim 1, further comprising: obtaining, when a preset duration is reached, the access heat value of the data stored in the divided virtual sub-resource pools; and

adjusting, according to a preset matching relationship between access heat values and virtual sub-resource pool capacities, the capacity of the virtual sub-resource pool to a capacity that matches the current stored-data access heat value.

3. The method according to claim 1, further comprising: sorting the data in the virtual sub-resource pool by access frequency when the virtual sub-resource pool has no capacity left for service data storage; and

deleting the one or more data items at the end of the access frequency ordering.

4. The method according to any one of claims 1 to 3, further comprising: determining the service data type corresponding to the logical unit; and

adjusting the capacity of the virtual sub-resource pool corresponding to the logical unit according to the service data type.

5. The method according to claim 3, wherein the adjusting, according to the preset matching relationship between access heat values and virtual sub-resource pool capacities, the capacity of the virtual sub-resource pool to a capacity that matches the current stored-data access heat value comprises:

reducing the capacity of the virtual sub-resource pool if its capacity is higher than the capacity matching the current stored-data access heat value; and

increasing the capacity of the virtual sub-resource pool if its capacity is lower than the capacity matching the current stored-data access heat value, the increased capacity of the virtual sub-resource pool being no greater than the reduced capacity of the virtual sub-resource pool.

6. A cache allocation apparatus, comprising:

a determining unit, configured to determine the logical unit in which obtained service data needs to be stored; a searching unit, configured to find the virtual sub-resource pool corresponding to the logical unit;

a storage unit, configured to store the service data in the cache resources included in the found virtual sub-resource pool; and

a dividing unit, by which the cache resource pool is pre-divided into virtual sub-resource pools equal in number to the logical units.

7. The apparatus according to claim 6, further comprising: an obtaining unit, configured to obtain, when a preset duration is reached, the access heat value of the data stored in the divided virtual sub-resource pools; and

an adjusting unit, configured to adjust, according to a preset matching relationship between access heat values and virtual sub-resource pool capacities, the capacity of the virtual sub-resource pool to a capacity that matches the current stored-data access heat value.

8. The apparatus according to claim 6, further comprising: a sorting unit, configured to sort the data in the virtual sub-resource pool by access frequency when the virtual sub-resource pool has no capacity left for service data storage; and

a deleting unit, configured to delete the one or more data items at the end of the access frequency ordering.

9. The apparatus according to claim 7, wherein

the determining unit is further configured to determine the service data type corresponding to the logical unit; and

the adjusting unit is further configured to adjust the capacity of the virtual sub-resource pool corresponding to the logical unit according to the service data type.

10. The apparatus according to claim 7 or 9, wherein the adjusting unit comprises:

a first adjusting unit, configured to reduce the capacity of the virtual sub-resource pool if its capacity is higher than the capacity matching the current stored-data access heat value; and

a second adjusting unit, configured to increase the capacity of the virtual sub-resource pool if its capacity is lower than the capacity matching the current stored-data access heat value, the increased capacity of the virtual sub-resource pool being no greater than the reduced capacity of the virtual sub-resource pool.
PCT/CN2011/084927 2010-12-30 2011-12-29 一种缓存分配方法及装置 WO2012089144A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201010616145.6 2010-12-30
CN2010106161456A CN102043732A (zh) 2010-12-30 2010-12-30 一种缓存分配方法及装置

Publications (1)

Publication Number Publication Date
WO2012089144A1 true WO2012089144A1 (zh) 2012-07-05

Family

ID=43909883

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2011/084927 WO2012089144A1 (zh) 2010-12-30 2011-12-29 一种缓存分配方法及装置

Country Status (2)

Country Link
CN (1) CN102043732A (zh)
WO (1) WO2012089144A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103067467A (zh) * 2012-12-21 2013-04-24 深信服网络科技(深圳)有限公司 缓存方法及装置
CN103530240A (zh) * 2013-10-25 2014-01-22 华为技术有限公司 数据块缓存方法和装置

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102043732A (zh) * 2010-12-30 2011-05-04 成都市华为赛门铁克科技有限公司 一种缓存分配方法及装置
CN102207830B (zh) * 2011-05-27 2013-06-12 杭州宏杉科技有限公司 一种缓存动态分配管理方法及装置
CN102262512A (zh) * 2011-07-21 2011-11-30 浪潮(北京)电子信息产业有限公司 一种实现磁盘阵列缓存分区管理的系统、装置及方法
CN103678414A (zh) * 2012-09-25 2014-03-26 腾讯科技(深圳)有限公司 一种存储及查找数据的方法及装置
CN103218179A (zh) * 2013-04-23 2013-07-24 深圳市京华科讯科技有限公司 基于虚拟化的二级系统加速方法
CN104349172B (zh) * 2013-08-02 2017-10-13 杭州海康威视数字技术股份有限公司 网络视频存储设备的集群管理方法及其装置
CN103744614B (zh) * 2013-12-17 2017-07-07 记忆科技(深圳)有限公司 固态硬盘访问的方法及其固态硬盘
CN106502576B (zh) * 2015-09-06 2020-06-23 中兴通讯股份有限公司 迁移策略调整方法及装置
CN106502578B (zh) * 2015-09-06 2019-06-11 中兴通讯股份有限公司 容量变更建议方法及装置
CN106201921A (zh) * 2016-07-18 2016-12-07 浪潮(北京)电子信息产业有限公司 一种缓存分区容量的调整方法及装置
CN107171792A (zh) * 2017-06-05 2017-09-15 北京邮电大学 一种虚拟密钥池及量子密钥资源的虚拟化方法
CN108762976A (zh) * 2018-05-30 2018-11-06 郑州云海信息技术有限公司 一种读取纠删码数据的方法、装置和存储介质
CN110908974A (zh) * 2018-09-14 2020-03-24 阿里巴巴集团控股有限公司 数据库管理方法、装置、设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1604028A (zh) * 2003-09-29 2005-04-06 株式会社日立制作所 存储系统和存储控制装置
CN1798094A (zh) * 2004-12-23 2006-07-05 华为技术有限公司 一种使用缓存区的方法
CN101609432A (zh) * 2009-07-13 2009-12-23 中国科学院计算技术研究所 共享缓存管理系统及方法
TW201039134A (en) * 2009-04-22 2010-11-01 Infortrend Technology Inc Data accessing method and apparatus for performing the same
CN102043732A (zh) * 2010-12-30 2011-05-04 成都市华为赛门铁克科技有限公司 一种缓存分配方法及装置

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100899462B1 (ko) * 2004-07-21 2009-05-27 비치 언리미티드 엘엘씨 블록 맵 캐싱 및 vfs 적층 가능 파일 시스템 모듈들에기초한 분산 저장 아키텍처
US7363454B2 (en) * 2004-12-10 2008-04-22 International Business Machines Corporation Storage pool space allocation across multiple locations
CN101620569A (zh) * 2008-07-03 2010-01-06 英业达股份有限公司 一种逻辑卷存储空间的扩展方法
CN101458613B (zh) * 2008-12-31 2011-04-20 成都市华为赛门铁克科技有限公司 一种混合分级阵列的实现方法、混合分级阵列和存储系统
CN101840308B (zh) * 2009-10-28 2014-06-18 创新科存储技术有限公司 一种分级存储系统及其逻辑卷管理方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1604028A (zh) * 2003-09-29 2005-04-06 株式会社日立制作所 存储系统和存储控制装置
CN1798094A (zh) * 2004-12-23 2006-07-05 华为技术有限公司 一种使用缓存区的方法
TW201039134A (en) * 2009-04-22 2010-11-01 Infortrend Technology Inc Data accessing method and apparatus for performing the same
CN101609432A (zh) * 2009-07-13 2009-12-23 中国科学院计算技术研究所 共享缓存管理系统及方法
CN102043732A (zh) * 2010-12-30 2011-05-04 成都市华为赛门铁克科技有限公司 一种缓存分配方法及装置

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103067467A (zh) * 2012-12-21 2013-04-24 深信服网络科技(深圳)有限公司 缓存方法及装置
CN103067467B (zh) * 2012-12-21 2016-08-03 深圳市深信服电子科技有限公司 缓存方法及装置
CN103530240A (zh) * 2013-10-25 2014-01-22 华为技术有限公司 数据块缓存方法和装置

Also Published As

Publication number Publication date
CN102043732A (zh) 2011-05-04

Similar Documents

Publication Publication Date Title
WO2012089144A1 (zh) 一种缓存分配方法及装置
US10346081B2 (en) Handling data block migration to efficiently utilize higher performance tiers in a multi-tier storage environment
US9766816B2 (en) Compression sampling in tiered storage
US9817765B2 (en) Dynamic hierarchical memory cache awareness within a storage system
US8954671B2 (en) Tiered storage device providing for migration of prioritized application specific data responsive to frequently referenced data
EP3552109B1 (en) Systems and methods for caching data
US10496280B2 (en) Compression sampling in tiered storage
US9274954B1 (en) Caching data using multiple cache devices
US20220382685A1 (en) Method and Apparatus for Accessing Storage System
US9424314B2 (en) Method and apparatus for joining read requests
US9355121B1 (en) Segregating data and metadata in a file system
US9330009B1 (en) Managing data storage
WO2014101108A1 (zh) 分布式存储系统的缓存方法、节点和计算机可读介质
US20150120859A1 (en) Computer system, and arrangement of data control method
US11314454B2 (en) Method and apparatus for managing storage device in storage system
CN110688062B (zh) 一种缓存空间的管理方法及装置
US10853252B2 (en) Performance of read operations by coordinating read cache management and auto-tiering
CN113900972A (zh) 一种数据传输的方法、芯片和设备
US8612674B2 (en) Systems and methods for concurrently accessing a virtual tape library by multiple computing devices
US20210133115A1 (en) Efficient memory usage for snapshots
US10242053B2 (en) Computer and data read method
US11347641B2 (en) Efficient memory usage for snapshots based on past memory usage
US20210311654A1 (en) Distributed Storage System and Computer Program Product
US11943314B2 (en) Cache retrieval based on tiered data
CN115509437A (zh) 存储系统、网卡、处理器、数据访问方法、装置及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11852314

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11852314

Country of ref document: EP

Kind code of ref document: A1