CN102043732A - Cache allocation method and device - Google Patents

Cache allocation method and device

Info

Publication number
CN102043732A
CN102043732A CN2010106161456A CN201010616145A
Authority
CN
China
Prior art keywords
resource pool
capacity
virtual subnet
subnet resource
temperature value
Prior art date
Legal status
Pending
Application number
CN2010106161456A
Other languages
Chinese (zh)
Inventor
肖飞
林宇
Current Assignee
Huawei Digital Technologies Chengdu Co Ltd
Original Assignee
Huawei Symantec Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Symantec Technologies Co Ltd
Priority to CN2010106161456A
Publication of CN102043732A
Priority to PCT/CN2011/084927
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; relocation
    • G06F 12/08: Addressing or allocation; relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches


Abstract

Embodiments of the invention disclose a cache allocation method and device for avoiding the negotiation process otherwise required between a first controller and a second controller under dual-controller operation. In the method, a cache resource pool is divided in advance into virtual sub-resource pools equal in number to the logical units; each virtual sub-resource pool corresponds to a different logical unit; and the cache resources of each virtual sub-resource pool store the service data of the corresponding logical unit. Embodiments of the invention avoid a complicated communication negotiation process between the first controller and the second controller, thereby safeguarding the data.

Description

Cache allocation method and device
Technical field
The present invention relates to the communications field, and in particular to a cache allocation method and device.
Background technology
A solid state disk (SSD, Solid State Disk or Solid State Drive), also called an electronic hard disk or solid-state electronic disk, has no rotating media as a conventional hard disk does, so its shock resistance is excellent and the operating temperature range of its chips is very wide (-40°C to 85°C). SSDs are therefore widely used in military, vehicle-mounted, industrial-control, video-surveillance, network-monitoring, network-terminal, power, medical, aviation and navigation applications. SSD Cache is a new application of SSDs in storage systems and belongs to the second-level cache. It exploits the short read/write response time of an SSD (the read response time in particular is very short) by storing hot spot data in the SSD, so that when the data is accessed it can be read from the SSD rather than from a traditional disk, which greatly improves system performance. One to four SSD disks form an SSD Cache resource pool. In general, an SSD disk can be used by only one controller of the storage system; after that controller fails, the hot spot data stored in the disk is lost, which degrades the overall performance of the system.
In the prior art, the SSD Cache resource pool formed by SSD disks can be used by both controllers of the storage system; even if one controller fails, the other controller can take over its services without affecting the overall performance of the system.
In the above prior art, however, if two or more logical units (LUN, Logical Unit Number) belonging to the two controllers need to access the same data in the cache resources at the same time, the two controllers must communicate and negotiate with each other to resolve the conflicts between storing and reading the data. This negotiation process is complicated, and an error in it may cause serious consequences such as data loss.
Summary of the invention
Embodiments of the invention provide a cache allocation method and device that prevent two or more LUNs from accessing the same data in the cache resource pool simultaneously, thereby avoiding a complicated communication negotiation process between the two controllers and safeguarding the data.
A kind of cache allocation method that the embodiment of the invention provides comprises:
determining the logical unit in which acquired service data needs to be stored; looking up the virtual sub-resource pool corresponding to the logical unit; and storing the service data in the cache resources of the virtual sub-resource pool found; wherein the cache resource pool is divided in advance into virtual sub-resource pools equal in number to the logical units, each virtual sub-resource pool corresponds to a different logical unit, and the cache resources of each virtual sub-resource pool store the service data of the corresponding logical unit.
A kind of buffer memory distributor that the embodiment of the invention provides comprises:
a determining unit, configured to determine the logical unit in which acquired service data needs to be stored; a lookup unit, configured to look up the virtual sub-resource pool corresponding to the logical unit; a storage unit, configured to store the service data in the cache resources of the virtual sub-resource pool found; and a division unit, configured to divide the cache resource pool in advance into virtual sub-resource pools equal in number to the logical units.
As can be seen from the above technical solutions, the embodiments of the invention have the following advantage: the cache resource pool is divided into virtual sub-resource pools equal in number to the LUNs, each virtual sub-resource pool corresponds one-to-one with a different LUN, and each virtual sub-resource pool serves cached data only to its corresponding LUN. Two or more LUNs from the two controllers are therefore prevented from accessing the same cached data simultaneously, a complicated communication negotiation process between the two controllers over access to the same cache resource data is avoided, and the data is safeguarded.
Description of drawings
Fig. 1 is a schematic diagram of an embodiment of the cache allocation method in an embodiment of the invention;
Fig. 2 is a schematic structural diagram of a caching system in the cache allocation process of an embodiment of the invention;
Fig. 3 is a schematic diagram of another embodiment of the cache allocation method in an embodiment of the invention;
Fig. 4 is a schematic diagram of an embodiment of the cache allocation device in an embodiment of the invention.
Embodiment
Embodiments of the invention provide a cache allocation method and device by which the cache resource pool can be used by both controllers of the storage system in a balanced way; even if one controller fails, the other controller can take over its services, and dual-controller operation improves overall system performance. The method and device are described in detail below.
Referring to Fig. 1, an embodiment of the cache allocation method in an embodiment of the invention comprises:
101. Determine the logical unit in which acquired service data is to be stored.
All types of service data need to be stored in logical units (LUNs) of the system. Each LUN is unique, although different LUNs may carry the same type of service data.
In the embodiment of the invention, the LUN corresponding to the acquired service data is determined first.
102. Look up the virtual sub-resource pool corresponding to the logical unit.
In the embodiment of the invention, the SSD Cache resources are divided in advance into virtual sub-resource pools equal in number to the logical units, and each virtual sub-resource pool corresponds to a different logical unit; that is, the data of each virtual sub-resource pool is accessed by only one LUN, and the cache resources of each virtual sub-resource pool store the service data of the corresponding logical unit. Each LUN accesses the data in its own virtual sub-resource pool independently of the other LUNs, rather than the data of every virtual sub-resource pool being open to access by any LUN.
It should be noted that the initial capacities of the SSD Cache virtual sub-resource pools may be identical or different, but each virtual sub-resource pool can use only the capacity allocated to it.
103. Store the service data in the cache resources of the virtual sub-resource pool found.
After the virtual sub-resource pool corresponding to the LUN is found in step 102, the service data belonging to different LUNs is correspondingly stored in the cache resources of the different virtual sub-resource pools.
In the embodiment of the invention, the logical unit in which the service data is to be stored is determined, the virtual sub-resource pool corresponding to the logical unit is looked up, and the service data is stored there. Because the SSD Cache resources are divided in advance into virtual sub-resource pools equal in number to the logical units and each virtual sub-resource pool corresponds to a different logical unit, multiple LUNs from the two controllers are prevented from accessing the same cached data simultaneously, and a complicated communication negotiation process between the two controllers over access to the same cache resource data is avoided.
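Steps 101 to 103 can be sketched in code. The following is a minimal illustration rather than the patent's implementation; the class names CachePool and VirtualSubPool, the dict-based pool layout, and the megabyte capacity accounting are all assumptions made for the sketch:

```python
# Minimal sketch of steps 101-103: each LUN has its own virtual
# sub-resource pool, and service data is cached only in the pool
# that corresponds to its LUN.

class VirtualSubPool:
    def __init__(self, capacity_mb):
        self.capacity_mb = capacity_mb   # capacity allocated to this pool
        self.data = {}                   # key -> size_mb of cached items

    def used_mb(self):
        return sum(self.data.values())

    def store(self, key, size_mb):
        # A pool may use only the capacity allocated to it.
        if self.used_mb() + size_mb > self.capacity_mb:
            return False
        self.data[key] = size_mb
        return True


class CachePool:
    def __init__(self, lun_ids, capacity_per_pool_mb):
        # The cache resource pool is divided in advance into as many
        # virtual sub-resource pools as there are logical units.
        self.pools = {lun: VirtualSubPool(capacity_per_pool_mb)
                      for lun in lun_ids}

    def store(self, lun, key, size_mb):
        # Step 101: the LUN of the service data is known; step 102:
        # look up its virtual sub-resource pool; step 103: store there.
        return self.pools[lun].store(key, size_mb)


cache = CachePool(["LUN0", "LUN1", "LUN2"], capacity_per_pool_mb=40)
print(cache.store("LUN0", "blk-7", 10))   # True: fits in LUN0's pool
print(cache.store("LUN0", "blk-8", 35))   # False: would exceed 40 MB
```

Because each LUN reaches only its own sub-pool, no cross-controller negotiation over shared cache entries arises.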
Referring to Fig. 2, a schematic structural diagram of the caching system in the cache allocation process of an embodiment of the invention: the caching system has two controllers, where 201 is the first controller and 202 is the second controller. 203 is the service layer of the caching system; LUN0, LUN1 and LUN2 are services of the service layer, where LUN0 and LUN1 are controlled by the first controller 201 and LUN2 is controlled by the second controller 202. 204 is the resource layer of the caching system, in which 208 is the solid-state-disk cache resource layer composed of solid state disks. According to the number of LUN services, the SSD Cache resource pool is divided into a corresponding number of SSD Cache virtual sub-resource pools: a first virtual sub-resource pool 205, a second virtual sub-resource pool 206 and a third virtual sub-resource pool 207, each corresponding to its own LUN service. As shown in the figure, the first virtual sub-resource pool 205 corresponds to LUN0, the second virtual sub-resource pool 206 corresponds to LUN1, and the third virtual sub-resource pool 207 corresponds to LUN2.
For ease of understanding, the cache allocation method in the embodiment of the invention is described in detail below with another embodiment. Referring to Fig. 3, another embodiment of the cache allocation method in an embodiment of the invention comprises:
301 to 303. The content of steps 301 to 303 is the same as that of steps 101 to 103 in the embodiment of Fig. 1 described above and is not repeated here.
304. When a preset duration is reached, obtain the access heat values of the data stored in the virtual sub-resource pools.
In the SSD Cache resource pool system, an adjustment thread can be set up and a duration preset in advance; each time the preset duration elapses, the access heat values of the data stored in the virtual sub-resource pools divided in step 101 are obtained. The setting of this duration depends on the actual application, and its specific value is not limited here.
It should be noted that the access heat value reflects the access frequency of the data stored in a virtual sub-resource pool and the amount of hot spot data stored in it: the higher the access frequency and the larger the amount of hot spot data, the higher the access heat value.
It should further be noted that both the access frequency of the stored data and the amount of hot spot data stored in a virtual sub-resource pool can be counted by counters inside the system; this is a technique well known to those skilled in the art and is not described further here.
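The patent leaves the heat-value computation to system-internal counters. A minimal sketch under stated assumptions could look as follows; the description states only that higher access frequency and more hot spot data mean a higher heat value, so the weighted-sum combination and the weights w_freq and w_hot are invented for illustration:

```python
# Sketch of obtaining an access heat value (step 304). Only the
# monotonic relationship is given by the description; the weighted sum
# and the default weights here are assumptions.

def heat_value(access_count, hot_items, interval_s, w_freq=1.0, w_hot=5.0):
    access_frequency = access_count / interval_s   # accesses per second
    return w_freq * access_frequency + w_hot * hot_items

# 1200 accesses and 8 hot items over a 60-second adjustment interval.
print(heat_value(1200, 8, 60))   # 60.0
```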
305. Adjust the capacity of each virtual sub-resource pool to the capacity matching the access heat value of its currently stored data.
According to a preset matching relationship between access heat values and virtual sub-resource pool capacities, the capacity of a virtual sub-resource pool is adjusted to the capacity that matches the access heat value of its currently stored data.
The access heat value reflects, to some extent, how frequently the stored data is accessed, and more frequently accessed data generally needs more cache resources. The matching relationship can therefore be set so that a high access heat value corresponds to a large virtual sub-resource pool capacity and a low access heat value corresponds to a small one; the specific setting depends on the actual application and is not limited here.
A correspondence between the access heat values of the data stored in the virtual sub-resource pools and the capacities of the virtual sub-resource pools can be preset in the system. In practice, this correspondence is generally set not between specific values but between two ranges of values. For example, when the access heat value is 50 to 100, the capacity of the corresponding virtual sub-resource pool is 30 to 60 megabytes; so if the capacity of a virtual sub-resource pool is 40 megabytes and the access heat value of its currently stored data is 60, the capacity of the virtual sub-resource pool matches the current access heat value.
Specifically, if, according to the preset matching relationship between access heat values and virtual sub-resource pool capacities, the capacity of a virtual sub-resource pool is higher than the capacity matching the access heat value of its currently stored data, then the current access heat value is low, the data is accessed infrequently, and the current capacity of the virtual sub-resource pool is larger than needed. To save cache resources, the capacity of this virtual sub-resource pool is reduced; the capacity freed by the adjustment can be used by other virtual sub-resource pools that need more capacity.
If, according to the preset matching relationship, the capacity of a virtual sub-resource pool is lower than the capacity matching the access heat value of its currently stored data, then the current access heat value is high, the data is accessed frequently, and the current capacity of the virtual sub-resource pool is smaller than needed and may not provide enough capacity for the data to be stored. The capacity of this virtual sub-resource pool is therefore increased, and the increase can be drawn from the capacity freed from virtual sub-resource pools whose capacity was reduced.
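The adjustment rule of step 305 can be sketched as follows. The range table extends the single example given in the description (heat 50 to 100 maps to 30 to 60 MB); the other rows, the return convention, and the function names are illustrative assumptions:

```python
# Sketch of step 305: adjust a virtual sub-resource pool's capacity to
# the range matching the access heat value of its stored data.

# (heat_low, heat_high) -> (capacity_low_mb, capacity_high_mb)
MATCH_TABLE = [
    ((0, 49), (10, 29)),
    ((50, 100), (30, 60)),      # the range example from the description
    ((101, 200), (61, 120)),
]

def matching_range(heat):
    for (h_lo, h_hi), caps in MATCH_TABLE:
        if h_lo <= heat <= h_hi:
            return caps
    return MATCH_TABLE[-1][1]          # clamp to the hottest range

def adjust_capacity(current_mb, heat):
    """Return (new_capacity_mb, delta_mb): delta > 0 means capacity was
    freed for other pools; delta < 0 means capacity was drawn from the
    capacity freed by shrunken pools."""
    cap_lo, cap_hi = matching_range(heat)
    if current_mb > cap_hi:            # too much capacity for cold data
        return cap_hi, current_mb - cap_hi
    if current_mb < cap_lo:            # too little capacity for hot data
        return cap_lo, current_mb - cap_lo
    return current_mb, 0               # capacity already matches

# 40 MB with heat 60 lies inside the matching 30-60 MB range: unchanged.
print(adjust_capacity(40, 60))   # (40, 0)
# 80 MB with heat 60 exceeds the range: shrink to 60 MB, freeing 20 MB.
print(adjust_capacity(80, 60))   # (60, 20)
```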
306. Delete non-hot-spot data from the virtual sub-resource pool.
Because each virtual sub-resource pool can use only the cache resources allocated to it, when the capacity currently allocated to a virtual sub-resource pool is lower than the capacity matching the access heat value of its currently stored data and the virtual sub-resource pool has no capacity left for storing service data, more free capacity can be obtained by deleting non-hot-spot data from the virtual sub-resource pool.
Specifically, when a virtual sub-resource pool has no capacity left for storing service data, the data in the virtual sub-resource pool can be sorted by access frequency, either from high to low or from low to high, and then the one or more data items at the low-frequency end of the sorted order are deleted. The specific number of data items to delete depends on the actual application and is not limited here.
It should be noted that the access frequency of the data in a virtual sub-resource pool can be counted by counters inside the system; this is a technique well known to those skilled in the art and is not described further here.
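The eviction of step 306 can be sketched as follows; the data layout (key mapped to a size and an access count) and the function name are assumptions for the sketch:

```python
# Sketch of step 306: when a virtual sub-resource pool has no capacity
# left for new service data, sort its data by access frequency and
# delete items from the low-frequency end until enough space is free.

def evict_cold_data(pool, needed_mb, capacity_mb):
    """pool maps key -> (size_mb, access_count); returns evicted keys."""
    used = sum(size for size, _count in pool.values())
    # Sort ascending by access frequency; either sort direction works,
    # since deletion always targets the low-frequency end.
    by_frequency = sorted(pool, key=lambda k: pool[k][1])
    evicted = []
    for key in by_frequency:
        if capacity_mb - used >= needed_mb:
            break                       # enough free space now
        used -= pool[key][0]
        evicted.append(key)
        del pool[key]
    return evicted

pool = {"a": (20, 500), "b": (15, 3), "c": (5, 40)}   # 40 MB pool, full
print(evict_cold_data(pool, needed_mb=12, capacity_mb=40))   # ['b']
```

Evicting "b" (15 MB, the least-accessed item) frees enough space for the 12 MB of new data, while the hot items "a" and "c" stay cached.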
In the embodiment of the invention, the cache resource pool also needs the following initial configuration:
The SSD Cache resource pool is divided into multiple data blocks according to service type. The services corresponding to the LUNs are of many types, and different types of service data need different cache capacities. In general, hot spot data produced by web-request services needs little cache capacity, while hot spot data produced by video-request or block-data services needs a large cache capacity. Accordingly, the cache resource pool is divided according to the capacity needed by the service data, or the capacity of the virtual sub-resource pool corresponding to a logical unit is adjusted, so that services whose stored data needs a large cache capacity are allocated a large cache capacity and services whose stored data needs a small cache capacity are allocated a small one.
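This type-proportional initial configuration can be sketched as follows; the type names and weights are assumptions, since the description says only that web-request services need little cache capacity while video-request and block-data services need a lot:

```python
# Sketch of the initial configuration: total cache capacity is divided
# in proportion to the cache demand of each LUN's service type.

TYPE_WEIGHT = {"web": 1, "video": 4, "block": 4}   # assumed weights

def initial_capacities(lun_types, total_mb):
    """lun_types maps LUN id -> service type; returns LUN id -> MB."""
    total_weight = sum(TYPE_WEIGHT[t] for t in lun_types.values())
    return {lun: total_mb * TYPE_WEIGHT[t] // total_weight
            for lun, t in lun_types.items()}

caps = initial_capacities({"LUN0": "web", "LUN1": "video", "LUN2": "block"},
                          total_mb=900)
print(caps)   # {'LUN0': 100, 'LUN1': 400, 'LUN2': 400}
```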
In the embodiment of the invention, each time the preset duration is reached, the capacity of each virtual sub-resource pool is adjusted, according to the preset matching relationship between access heat values and virtual sub-resource pool capacities, to the capacity matching the access heat value of its currently stored data. The capacities of the virtual sub-resource pools are thus adjusted dynamically and continuously, so that the capacity allocation of the virtual sub-resource pools better fits the actual application and the cache resource pool is used more reasonably.
An embodiment of the invention also provides a cache allocation device. Referring to Fig. 4, an embodiment of the cache allocation device in an embodiment of the invention comprises:
a determining unit 401, configured to determine the logical unit in which acquired service data needs to be stored, and to determine the service data type corresponding to the logical unit;
a lookup unit 402, configured to look up the virtual sub-resource pool corresponding to the logical unit;
a storage unit 403, configured to store the service data in the cache resources of the virtual sub-resource pool found; and
a division unit 404, configured to divide the cache resource pool in advance into virtual sub-resource pools equal in number to the logical units.
The cache allocation device in this embodiment may further include:
an acquiring unit 405, configured to obtain, when a preset duration is reached, the access heat values of the data stored in the virtual sub-resource pools;
an adjustment unit 406, configured to adjust, according to the preset matching relationship between access heat values and virtual sub-resource pool capacities, the capacity of a virtual sub-resource pool to the capacity matching the access heat value of its currently stored data, and further configured to adjust the capacity of the virtual sub-resource pool corresponding to a logical unit according to the service data type;
a sorting unit 407, configured to sort the data in a virtual sub-resource pool by access frequency when the virtual sub-resource pool has no capacity left for storing service data; and
a deletion unit 408, configured to delete the data with the lowest access frequency.
It should be noted that the adjustment unit 406 in this embodiment may further include:
a first adjustment unit 4061, configured to reduce the capacity of a virtual sub-resource pool if its capacity is higher than the capacity matching the access heat value of its currently stored data; and
a second adjustment unit 4062, configured to increase the capacity of a virtual sub-resource pool if its capacity is lower than the capacity matching the access heat value of its currently stored data, wherein the capacity increase of a virtual sub-resource pool is not greater than the capacity reduction of the virtual sub-resource pools whose capacity was reduced.
For ease of understanding, the interaction between the units of the cache allocation device in this embodiment is described below with a specific application scenario.
In the embodiment of the invention, the determining unit 401 determines the service data type corresponding to a logical unit and determines the logical unit (LUN) in which the acquired service data needs to be stored, and the lookup unit 402 looks up the virtual sub-resource pool corresponding to the logical unit.
It should be noted that the division unit 404 divides the SSD Cache resources in advance into virtual sub-resource pools equal in number to the logical units, and each virtual sub-resource pool corresponds to a different logical unit; that is, the data of each virtual sub-resource pool is accessed by only one LUN, and the cache resources of each virtual sub-resource pool store the service data of the corresponding logical unit. Each LUN accesses the data in its own virtual sub-resource pool independently of the other LUNs, rather than the data of every virtual sub-resource pool being open to access by any LUN.
The storage unit 403 stores the service data in the cache resources of the virtual sub-resource pool found; for the storage process, refer to the related content of step 103 in the embodiment of Fig. 1 described above, which is not repeated here.
When the preset duration is reached, the acquiring unit 405 obtains the access heat values of the data stored in the virtual sub-resource pools. In the SSD Cache resource pool system, an adjustment thread can be set up and a duration preset in advance; each time the preset duration elapses, the access heat values of the data stored in the virtual sub-resource pools are obtained. The setting of this duration depends on the actual application, and its specific value is not limited here.
It should be noted that the access heat value reflects the access frequency of the data stored in a virtual sub-resource pool and the amount of hot spot data stored in it: the higher the access frequency and the larger the amount of hot spot data, the higher the access heat value. Both can be counted by counters inside the system; this is a technique well known to those skilled in the art and is not described further here.
According to the preset matching relationship between access heat values and virtual sub-resource pool capacities, the adjustment unit 406 adjusts the capacity of a virtual sub-resource pool to the capacity matching the access heat value of its currently stored data. Specifically, if the capacity of a virtual sub-resource pool is higher than the matching capacity, the first adjustment unit 4061 reduces its capacity; if the capacity is lower than the matching capacity, the second adjustment unit 4062 increases its capacity, and the capacity increase is not greater than the capacity reduction of the virtual sub-resource pools whose capacity was reduced. For the specific adjustment process, refer to the content of step 305 in the embodiment of Fig. 3 described above, which is not repeated here.
Because each virtual sub-resource pool can use only the cache resources allocated to it, when the capacity currently allocated to a virtual sub-resource pool is lower than the capacity matching the access heat value of its currently stored data and the pool has no capacity left for storing service data, more free capacity can be obtained by having the deletion unit 408 delete non-hot-spot data from the virtual sub-resource pool. Specifically, the sorting unit 407 sorts the data in the virtual sub-resource pool by access frequency, either from high to low or from low to high, and the deletion unit 408 then deletes the one or more data items at the low-frequency end of the sorted order. The specific number of data items to delete depends on the actual application and is not limited here.
It should be noted that the access frequency of the data in a virtual sub-resource pool can be counted by counters inside the system; this is a technique well known to those skilled in the art and is not described further here.
In the embodiment of the invention, the division unit 404 divides the SSD Cache resources in advance into virtual sub-resource pools equal in number to the logical units, each corresponding to a different logical unit, so that the data of each virtual sub-resource pool is accessed by only one LUN and the cache resources of each virtual sub-resource pool store the service data of the corresponding logical unit. Because each virtual sub-resource pool corresponds to a different logical unit, multiple LUNs from the two controllers are prevented from accessing the same cached data simultaneously, and a complicated communication negotiation process between the two controllers over access to the same cache resource data is avoided. Furthermore, according to the preset matching relationship between access heat values and virtual sub-resource pool capacities, the adjustment unit 406 continuously and dynamically adjusts the capacity of each virtual sub-resource pool to the capacity matching the access heat value of its currently stored data, so that the capacity allocation of the virtual sub-resource pools better fits the actual application.
The cache resources in the above embodiments take SSD Cache resources as an example; it can be understood that the embodiments are also applicable to other storage resources of the same type. The specific type of cache resources used depends on the actual application and is not limited here.
Those of ordinary skill in the art will appreciate that all or part of the steps in the methods of the above embodiments can be implemented by a program instructing the relevant hardware, and the program can be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk or an optical disc.
The cache allocation method and device provided by the present invention have been described in detail above. For those of ordinary skill in the art, the specific implementations and the scope of application may vary in accordance with the ideas of the embodiments of the invention. In summary, the content of this description should not be construed as limiting the present invention.

Claims (10)

1. A cache allocation method, characterized by comprising:
determining a logical unit in which acquired service data needs to be stored;
looking up a virtual sub-resource pool corresponding to the logical unit; and
storing the service data in cache resources of the virtual sub-resource pool found;
wherein a cache resource pool is divided in advance into virtual sub-resource pools equal in number to the logical units, each virtual sub-resource pool corresponds to a different logical unit, and the cache resources of each virtual sub-resource pool store the service data of the corresponding logical unit.
2. The method according to claim 1, characterized by further comprising:
when a preset duration is reached, obtaining access heat values of the data stored in the virtual sub-resource pools; and
adjusting, according to a preset matching relationship between access heat values and virtual sub-resource pool capacities, the capacity of a virtual sub-resource pool to the capacity matching the access heat value of its currently stored data.
3. The method according to claim 1, characterized in that the method further comprises:
when a virtual sub-resource pool has no capacity left for storing service data, sorting the data in the virtual sub-resource pool by access frequency; and
deleting one or more data items at the low-frequency end of the sorted order.
4. The method according to any one of claims 1 to 3, characterized by further comprising:
determining a service data type corresponding to the logical unit; and
adjusting the capacity of the virtual sub-resource pool corresponding to the logical unit according to the service data type.
5. The method according to any one of claims 1 to 3, characterized in that adjusting, according to the preset matching relationship between access heat values and virtual sub-resource pool capacities, the capacity of the virtual sub-resource pool to the capacity that matches the access heat value of the currently stored data comprises:
if the capacity of the virtual sub-resource pool is higher than the capacity matching the access heat value of the currently stored data, reducing the capacity of the virtual sub-resource pool; and
if the capacity of the virtual sub-resource pool is lower than the capacity matching the access heat value of the currently stored data, increasing the capacity of the virtual sub-resource pool, wherein the capacity added is not greater than the capacity released by the reduction.
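The constraint in claim 5 — increases may not exceed the capacity freed by decreases — keeps the total allocation bounded by the pool size. A sketch under assumed names (`rebalance`, per-LUN capacity dicts), not the patented implementation:

```python
# Sketch of claim 5: shrink over-provisioned sub-pools first, then grow
# under-provisioned ones only out of the capacity just released, so the
# sum of sub-pool capacities never grows.

def rebalance(current, target):
    """current/target map lun -> capacity. Returns new capacities where
    each increase is drawn from, and never exceeds, the freed capacity."""
    new = dict(current)
    freed = 0
    # First pass: reduce pools whose capacity exceeds the matched capacity.
    for lun in new:
        if new[lun] > target[lun]:
            freed += new[lun] - target[lun]
            new[lun] = target[lun]
    # Second pass: increase pools, but never by more than what was freed.
    for lun in new:
        if new[lun] < target[lun]:
            grant = min(target[lun] - new[lun], freed)
            new[lun] += grant
            freed -= grant
    return new

# LUN 0 releases 20 blocks; LUN 1 wants 40 more but receives only 20.
print(rebalance({0: 40, 1: 10}, {0: 20, 1: 50}))   # {0: 20, 1: 30}
```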
6. A cache allocation apparatus, characterized in that it comprises:
a determining unit, configured to determine the logical unit in which acquired service data needs to be stored;
a searching unit, configured to search for the virtual sub-resource pool corresponding to the logical unit;
a storage unit, configured to store the service data in the cache resources included in the found virtual sub-resource pool; and
a dividing unit, configured to divide the cache resource pool in advance into virtual sub-resource pools equal in number to the logical units.
7. The apparatus according to claim 6, characterized in that it further comprises:
an acquiring unit, configured to obtain, when a preset duration is reached, the access heat value of the data stored in each divided virtual sub-resource pool; and
an adjusting unit, configured to adjust, according to a preset matching relationship between access heat values and virtual sub-resource pool capacities, the capacity of the virtual sub-resource pool to the capacity that matches the access heat value of the currently stored data.
8. The apparatus according to claim 6, characterized in that it further comprises:
a sorting unit, configured to sort the data in a virtual sub-resource pool by access frequency when the virtual sub-resource pool has no capacity available for storing service data; and
a deleting unit, configured to delete one or more data items ranked at the bottom of the access frequency ordering.
9. The apparatus according to claim 7, characterized in that:
the determining unit is further configured to determine the service data type corresponding to the logical unit; and
the adjusting unit is further configured to adjust the capacity of the virtual sub-resource pool corresponding to the logical unit according to the service data type.
10. The apparatus according to any one of claims 6 to 9, characterized in that the adjusting unit comprises:
a first adjusting unit, configured to reduce the capacity of the virtual sub-resource pool if its capacity is higher than the capacity matching the access heat value of the currently stored data; and
a second adjusting unit, configured to increase the capacity of the virtual sub-resource pool if its capacity is lower than the capacity matching the access heat value of the currently stored data, wherein the capacity added is not greater than the capacity released by the reduction.
CN2010106161456A 2010-12-30 2010-12-30 Cache allocation method and device Pending CN102043732A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN2010106161456A CN102043732A (en) 2010-12-30 2010-12-30 Cache allocation method and device
PCT/CN2011/084927 WO2012089144A1 (en) 2010-12-30 2011-12-29 Cache allocation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010106161456A CN102043732A (en) 2010-12-30 2010-12-30 Cache allocation method and device

Publications (1)

Publication Number Publication Date
CN102043732A true CN102043732A (en) 2011-05-04

Family

ID=43909883

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010106161456A Pending CN102043732A (en) 2010-12-30 2010-12-30 Cache allocation method and device

Country Status (2)

Country Link
CN (1) CN102043732A (en)
WO (1) WO2012089144A1 (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102207830A (en) * 2011-05-27 2011-10-05 杭州宏杉科技有限公司 Cache dynamic allocation management method and device
CN102262512A (en) * 2011-07-21 2011-11-30 浪潮(北京)电子信息产业有限公司 System, device and method for realizing disk array cache partition management
WO2012089144A1 (en) * 2010-12-30 2012-07-05 成都市华为赛门铁克科技有限公司 Cache allocation method and device
CN103218179A (en) * 2013-04-23 2013-07-24 深圳市京华科讯科技有限公司 Second-level system acceleration method based on virtualization
CN103678414A (en) * 2012-09-25 2014-03-26 腾讯科技(深圳)有限公司 Method and device for storing and inquiring data
CN103744614A (en) * 2013-12-17 2014-04-23 记忆科技(深圳)有限公司 Method for accessing solid state disc and solid state disc thereof
CN104349172A (en) * 2013-08-02 2015-02-11 杭州海康威视数字技术股份有限公司 Cluster management method of network video recorder and device thereof
CN106201921A (en) * 2016-07-18 2016-12-07 浪潮(北京)电子信息产业有限公司 The method of adjustment of a kind of cache partitions capacity and device
CN106502576A (en) * 2015-09-06 2017-03-15 中兴通讯股份有限公司 Migration strategy method of adjustment, capacity change suggesting method and device
WO2017036428A3 (en) * 2015-09-06 2017-04-13 中兴通讯股份有限公司 Capacity change suggestion method and device
CN107171792A (en) * 2017-06-05 2017-09-15 北京邮电大学 A kind of virtual key pond and the virtual method of quantum key resource
CN108762976A (en) * 2018-05-30 2018-11-06 郑州云海信息技术有限公司 A kind of method, apparatus and storage medium reading correcting and eleting codes data
CN110908974A (en) * 2018-09-14 2020-03-24 阿里巴巴集团控股有限公司 Database management method, device, equipment and storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103067467B (en) * 2012-12-21 2016-08-03 深圳市深信服电子科技有限公司 Caching method and device
CN103530240B (en) * 2013-10-25 2016-09-07 华为技术有限公司 Data-block cache method and apparatus

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060064536A1 (en) * 2004-07-21 2006-03-23 Tinker Jeffrey L Distributed storage architecture based on block map caching and VFS stackable file system modules
CN101044483A (en) * 2004-12-10 2007-09-26 国际商业机器公司 Storage pool space allocation across multiple locations
CN101458613A (en) * 2008-12-31 2009-06-17 成都市华为赛门铁克科技有限公司 Method for implementing mixed hierarchical array, the hierarchical array and storage system
CN101620569A (en) * 2008-07-03 2010-01-06 英业达股份有限公司 Expansion method of logical volume storage space
CN101840308A (en) * 2009-10-28 2010-09-22 创新科存储技术有限公司 Hierarchical memory system and logical volume management method thereof

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4307202B2 (en) * 2003-09-29 2009-08-05 株式会社日立製作所 Storage system and storage control device
CN1798094A (en) * 2004-12-23 2006-07-05 华为技术有限公司 Method of using buffer area
US9223516B2 (en) * 2009-04-22 2015-12-29 Infortrend Technology, Inc. Data accessing method and apparatus for performing the same using a host logical unit (HLUN)
CN101609432B (en) * 2009-07-13 2011-04-13 中国科学院计算技术研究所 Shared cache management system and method thereof
CN102043732A (en) * 2010-12-30 2011-05-04 成都市华为赛门铁克科技有限公司 Cache allocation method and device


Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012089144A1 (en) * 2010-12-30 2012-07-05 成都市华为赛门铁克科技有限公司 Cache allocation method and device
CN102207830A (en) * 2011-05-27 2011-10-05 杭州宏杉科技有限公司 Cache dynamic allocation management method and device
CN102207830B (en) * 2011-05-27 2013-06-12 杭州宏杉科技有限公司 Cache dynamic allocation management method and device
CN102262512A (en) * 2011-07-21 2011-11-30 浪潮(北京)电子信息产业有限公司 System, device and method for realizing disk array cache partition management
CN103678414A (en) * 2012-09-25 2014-03-26 腾讯科技(深圳)有限公司 Method and device for storing and inquiring data
CN103218179A (en) * 2013-04-23 2013-07-24 深圳市京华科讯科技有限公司 Second-level system acceleration method based on virtualization
CN104349172B (en) * 2013-08-02 2017-10-13 杭州海康威视数字技术股份有限公司 The cluster management method and its device of Internet video storage device
CN104349172A (en) * 2013-08-02 2015-02-11 杭州海康威视数字技术股份有限公司 Cluster management method of network video recorder and device thereof
CN103744614A (en) * 2013-12-17 2014-04-23 记忆科技(深圳)有限公司 Method for accessing solid state disc and solid state disc thereof
CN103744614B (en) * 2013-12-17 2017-07-07 记忆科技(深圳)有限公司 Method and its solid state hard disc that solid state hard disc is accessed
CN106502576A (en) * 2015-09-06 2017-03-15 中兴通讯股份有限公司 Migration strategy method of adjustment, capacity change suggesting method and device
WO2017036428A3 (en) * 2015-09-06 2017-04-13 中兴通讯股份有限公司 Capacity change suggestion method and device
CN106502576B (en) * 2015-09-06 2020-06-23 中兴通讯股份有限公司 Migration strategy adjusting method and device
CN106201921A (en) * 2016-07-18 2016-12-07 浪潮(北京)电子信息产业有限公司 The method of adjustment of a kind of cache partitions capacity and device
CN107171792A (en) * 2017-06-05 2017-09-15 北京邮电大学 A kind of virtual key pond and the virtual method of quantum key resource
CN108762976A (en) * 2018-05-30 2018-11-06 郑州云海信息技术有限公司 A kind of method, apparatus and storage medium reading correcting and eleting codes data
CN110908974A (en) * 2018-09-14 2020-03-24 阿里巴巴集团控股有限公司 Database management method, device, equipment and storage medium

Also Published As

Publication number Publication date
WO2012089144A1 (en) 2012-07-05

Similar Documents

Publication Publication Date Title
CN102043732A (en) Cache allocation method and device
CN109783438B (en) Distributed NFS system based on librados and construction method thereof
CN102023813B (en) Application and tier configuration management in dynamic page realloction storage system
US10423535B2 (en) Caching and tiering for cloud storage
US9411530B1 (en) Selecting physical storage in data storage systems
US9880780B2 (en) Enhanced multi-stream operations
CN102426552B (en) Storage system service quality control method, device and system
US20200204504A1 (en) Packet processing system, method and device having reduced static power consumption
CN107077882B (en) DRAM refreshing method, device and system
JP2013527942A (en) Storage control device or storage system having a plurality of storage control devices
CN103576835A (en) Data manipulation method and device for sleep disk
US8732421B2 (en) Storage system and method for reallocating data
KR20190058992A (en) Server for distributed file system based on torus network and method using the same
CN101645837A (en) Method and device for realizing load balancing
CN103902475A (en) Solid state disk concurrent access method and device based on queue management mechanism
CN107341114A (en) A kind of method of directory management, Node Controller and system
CN102298556A (en) Data stream recognition method and device
US10254973B2 (en) Data management system and method for processing distributed data
CN102063263A (en) Method, device and system for responding read-write operation request of host computer by solid state disk
CN111007988A (en) RAID internal wear balancing method, system, terminal and storage medium
CN107408071A (en) A kind of memory pool access method, device and system
CN106326143A (en) Cache distribution, data access and data sending method, processor and system
EP4044039A1 (en) Data access method and apparatus, and storage medium
CN102981782B (en) Data processing method and device
CN115129709A (en) Data processing method, server and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C53 Correction of patent of invention or patent application
CB02 Change of applicant information

Address after: 611731 Chengdu high tech Zone, Sichuan, West Park, Qingshui River

Applicant after: Huawei Symantec Technologies Co., Ltd.

Address before: 611731 Chengdu high tech Zone, Sichuan, West Park, Qingshui River

Applicant before: Chengdu Huawei Symantec Technologies Co., Ltd.

COR Change of bibliographic data

Free format text: CORRECT: APPLICANT; FROM: CHENGDU HUAWEI SYMANTEC TECHNOLOGIES CO., LTD. TO: HUAWEI DIGITAL TECHNOLOGY (CHENGDU) CO., LTD.

C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20110504