CN109840217B - Cache resource allocation method and device - Google Patents

Cache resource allocation method and device

Info

Publication number
CN109840217B
CN109840217B (application CN201711213099.3A)
Authority
CN
China
Prior art keywords
quota
volume
cache
allocated
target volume
Prior art date
Legal status
Active
Application number
CN201711213099.3A
Other languages
Chinese (zh)
Other versions
CN109840217A (en)
Inventor
吴超
潘浩
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201711213099.3A priority Critical patent/CN109840217B/en
Publication of CN109840217A publication Critical patent/CN109840217A/en
Application granted granted Critical
Publication of CN109840217B publication Critical patent/CN109840217B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A cache resource allocation method and device comprise: receiving a write data request for writing data to be written into a target volume, where the storage device comprises a plurality of volumes and a cache, the space of the cache comprises a private space and a public space, the private space is divided into several parts, each part is allocated as a quota to one of the volumes, and the target volume is one of the volumes; judging whether liability information of the target volume is greater than zero, where the liability information indicates whether the quota allocated to the target volume has been overdrawn; and, if the liability information of the target volume is greater than zero, allocating cache resources for the write data request from the public space. This can improve the efficiency with which the storage device processes services.

Description

Cache resource allocation method and device
Technical Field
The present application relates to the field of storage technologies, and in particular to a cache resource allocation method and device.
Background
A storage device applies for cache resources when processing service requests. A common practice is to treat the cache space as a single resource pool from which cache resources are allocated to service requests on a first-come-first-served basis. However, a service that releases resources slowly can then crowd out, at the resource level, a service that releases resources quickly, which reduces the efficiency with which the storage device processes services.
Disclosure of Invention
The present application provides a cache resource allocation method and device that can improve the efficiency with which a storage device processes services.
A first aspect provides a cache resource allocation method. In the method, a processor receives a write data request for writing data to be written to a target volume. The storage device comprises a plurality of volumes and a cache, the space of the cache comprises a private space and a public space, the private space is divided into several parts, each part is allocated as a quota to one of the plurality of volumes, and the target volume is one of the plurality of volumes. The processor further determines whether liability information of the target volume is greater than zero, the liability information indicating whether the quota allocated to the target volume has been overdrawn. If the liability information of the target volume is greater than zero, cache resources are allocated for the write data request from the public space.
According to the first aspect, the cache space comprises a private space and a public space, the private space is divided into several parts, and each part is allocated as a quota to one of the volumes. When a write data request is received, how to allocate cache resources for it is determined based on the liability information of the target volume to be accessed. If the liability information of the target volume is greater than zero, cache resources are allocated for the write data request from the public space. Liability information greater than zero means that other volumes still owe the target volume some quota; therefore cache resources can be allocated for the write data request from the public space, preventing the target volume's quota from being occupied further, and thereby improving the processing efficiency of write data requests for the target volume.
With reference to the first aspect, in a first implementation manner of the first aspect, when releasing the cache resources allocated for the write data request, the processor adds the released cache resources to the quota allocated to the target volume. According to the first implementation, liability information greater than 0 means that other volumes owe the target volume some quota; adding the released cache resources to the quota allocated to the target volume prevents the processing efficiency of write data requests for the target volume from degrading when other volumes owe the target volume a large amount of quota.
With reference to the first aspect, in a second implementation manner of the first aspect, the liability information is equal to the value obtained by updating a reference quota after cache resources are applied for or released in the current period, the reference quota is equal to the difference obtained by subtracting an old quota from a new quota, the new quota is the quota allocated to the target volume in the current period, and the old quota is the quota allocated to the target volume in a historical period.
With reference to the first aspect, in a third implementation manner of the first aspect, each volume corresponds to a cache queue that holds one or more pending write data requests. The processor allocates the quota to each volume according to the number and size of the write data requests contained in the volume's cache queue: the more write data requests a cache queue contains, and the larger those requests are, the more quota is allocated to the corresponding volume. According to the third implementation, quota is allocated to each volume according to its service pressure, so cache resources lean toward volumes under heavy service pressure, which improves the efficiency with which the storage device processes services.
With reference to the first aspect, in a fourth implementation manner of the first aspect, the processor allocates the quota to each volume according to the volume's cache concurrency number, where the cache concurrency number indicates the number of write requests that can be written to the volume concurrently when data in the cache is copied to the hard disk. According to the fourth implementation, quota is allocated to each volume according to its cache concurrency number, so cache resources lean toward volumes with high concurrency capability, which improves the efficiency with which the storage device processes services.
With reference to the second implementation manner of the first aspect, in a fifth implementation manner of the first aspect, the historical period is the period immediately preceding the current period.
A second aspect of the present application provides a cache resource allocation apparatus, where the apparatus is configured to perform the method in any one of the implementation manners of the first aspect and the first aspect.
A third aspect of the application provides a storage device comprising a processor and a cache. The processor invokes program code in the cache to implement the method of the first aspect or any implementation manner of the first aspect.
A fourth aspect of the application provides a computer program product comprising a computer-readable storage medium storing program code, the program code comprising instructions for performing the method of the first aspect or any implementation manner of the first aspect.
Drawings
In order to more clearly illustrate the technical solution of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described.
FIG. 1 is a diagram of a system architecture provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a cache space provided in an embodiment of the present application;
FIG. 3 is a flowchart illustrating a method for allocating cache resources according to an embodiment of the present application;
FIG. 4 is a flowchart illustrating a method for releasing a cache resource according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a cache resource allocation apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
Fig. 1 is a system architecture diagram provided in an embodiment of the present application. The storage system provided in this embodiment includes a host 20, a controller 11, and a plurality of hard disks 22. The controller 11 may be a computing device such as a server or a desktop computer. An operating system and application programs are installed on the controller 11. The controller is connected to the host 20 through a storage area network (SAN). Specifically, the host 20 may send a write data request to the controller 11; after receiving the write data request, the controller 11 allocates cache resources for it so as to temporarily store the data carried in the request in the cache 102. When the data stored in the cache 102 reaches a certain level, the data in the cache 102 is written to the hard disks 22. In addition, the host 20 may also send a read data request to the controller 11. After receiving the read data request, the controller 11 checks, according to the address in the request, whether the data to be read is stored in the cache 102; if so, it sends the data to be read directly to the host 20, and if not, it obtains the data from the hard disks 22 and sends it to the host 20. In practical applications, the controller 11 and the hard disks 22 may be integrated in one storage device, or may be located in two mutually independent devices; this embodiment places no limitation on the positional relationship between the controller 11 and the hard disks 22. For convenience of description, in this embodiment the controller 11 and the hard disks 22 are collectively referred to as a storage device.
In practical applications, the hard disks 22 of the storage device are logically combined, and a required RAID level is applied, to obtain a RAID set. Since a RAID set combining multiple hard disks has a large capacity, its available capacity is divided into smaller units called volumes, which hide the organization and composition of the RAID set from the host 20. At initialization, one volume may be allocated to a single host 20, or to a plurality of hosts 20.
The structure of the controller 11 is described below. As shown in Fig. 1, the controller 11 includes at least a processor 101, a cache 102, and an interface (not shown in Fig. 1). The processor 101 is a central processing unit (CPU). In an embodiment of the present application, the processor 101 may be configured to receive read data requests and write data requests from the host 20 and process them.
The cache 102 is used to temporarily store data received from the host 20 or read from the hard disks 22. When the controller 11 receives a plurality of write data requests sent by the host, the data in those requests may be temporarily stored in the cache 102. When the capacity of the cache 102 reaches a certain threshold, the data stored in the cache 102 is sent to the hard disks 22, which store it. The cache 102 includes volatile memory, non-volatile memory, or a combination thereof. Volatile memory is, for example, random-access memory (RAM). Non-volatile memory includes various machine-readable media that can store program code, such as floppy disks, hard disks, solid-state disks (SSDs), and optical disks. It will be appreciated that the storage device requires the cache 102 when executing either write data requests or read data requests.
As shown in Fig. 2, the storage space of the cache 102 includes a private space and a public space. The private space may be divided into several parts, each of which is allocated to one volume as a quota. For example, quota 1 is assigned to volume 1, quota 2 to volume 2, and quota n to volume n. The quota allocated to each volume may be the same or different. At initial setup, the controller 11 may divide the private space among the volumes evenly or according to some other principle. After the initial allocation, at every period the processor 101 adjusts each volume's allocated quota based on the front-end service pressure or the back-end data flushing capability, to obtain a new quota. Each period has the same duration; in the embodiment of the present application, period management may be implemented with a timer. The new quota is relative to the old quota: specifically, the new quota is the quota allocated in the current period, and the old quota is the quota allocated in a historical period. The public space is not pre-allocated to any volume and can be used by each volume according to its own liability information.
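The private/public split described above can be sketched as follows. This is a minimal, hypothetical model; the class name, field names, and the 25% public fraction are illustrative and not taken from the patent:

```python
# Hypothetical sketch of the cache-space layout in Fig. 2: a private space
# carved into per-volume quotas, plus a public space that is not
# pre-allocated to any volume.

class CacheSpace:
    def __init__(self, total, public_fraction, volumes):
        self.public = int(total * public_fraction)   # public space (shared)
        private = total - self.public                # private space
        share = private // max(len(volumes), 1)
        # Initial setting: divide the private space evenly among the volumes.
        self.quota = {v: share for v in volumes}

cache = CacheSpace(total=4000, public_fraction=0.25,
                   volumes=("vol1", "vol2", "vol3"))
```

At each subsequent period the per-volume entries in `quota` would be re-adjusted according to front-end pressure or back-end flushing capability, as described below.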
In one case, the processor 101 may adjust each volume's quota based on the front-end service pressure. Each volume corresponds to a cache queue in which one or more pending write data requests are held. The quota is allocated to each volume according to the number and size of the write data requests contained in the volume's cache queue: the more write data requests a cache queue contains, and the larger those requests are, the more quota is allocated to the corresponding volume. Specifically, the product of the number of queued write data requests for each volume and a preset write data request size may be calculated; the larger the product, the more quota is allocated. The cache queues are located in the cache 102.
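The front-end-pressure rule can be sketched as follows. The function and parameter names are hypothetical, and allocating quota strictly in proportion to the count-times-size product is one reasonable reading; the patent only requires that quota grow with that product:

```python
# Hypothetical sketch: each volume's share of the private space is
# proportional to (queued write-request count x preset request size)
# for its cache queue.

def quotas_by_pressure(private_space, queues):
    # queues maps volume -> (queued request count, preset request size)
    weight = {v: n * s for v, (n, s) in queues.items()}
    total = sum(weight.values()) or 1
    return {v: private_space * w // total for v, w in weight.items()}

q = quotas_by_pressure(3000, {"vol1": (10, 8),    # light pressure
                              "vol2": (30, 8),    # many queued requests
                              "vol3": (20, 16)})  # fewer but larger requests
```

Here vol3's larger preset request size gives it the biggest weight, so the cache resources lean toward the volumes with the greatest service pressure.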
In another case, the processor 101 may adjust each volume's quota based on the back-end data flushing capability. Specifically, the quota may be allocated to each volume according to the volume's cache concurrency number, which indicates the number of write requests that can be written to the volume concurrently when data in the cache is copied to the hard disk. A write request here is a request to write data from the cache 102 to the hard disks 22, unlike a write data request, which the controller 11 receives from the host.
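The back-end rule admits a similar sketch, again with hypothetical names and proportional allocation as an illustrative policy:

```python
# Hypothetical sketch: allocate the private space in proportion to each
# volume's cache concurrency number (how many write requests can be
# flushed to it concurrently).

def quotas_by_concurrency(private_space, concurrency):
    total = sum(concurrency.values()) or 1
    return {v: private_space * c // total for v, c in concurrency.items()}

q = quotas_by_concurrency(3000, {"vol1": 4, "vol2": 8, "vol3": 12})
```

Volumes with higher concurrency drain their cached data faster, so giving them more quota keeps the cache turning over.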
Referring to Fig. 3, Fig. 3 is a schematic flowchart of a cache resource allocation method according to this embodiment, which may be applied to the system shown in Fig. 1 and is executed by the processor 101. Specifically, the method comprises the following steps.
In S301, the processor 101 receives a write data request containing the address of the data to be written. The volume to which the data is to be written can be determined from this address. For convenience of description, that volume is referred to as the target volume.
In S302, the processor 101 determines whether the liability information of the target volume is greater than, less than, or equal to 0. The liability information indicates whether the quota allocated to the target volume has been overdrawn. If the liability information is greater than 0, other volumes owe the target volume some quota; if it is equal to 0, the target volume's quota is exactly sufficient and other volumes owe it nothing; if it is less than 0, the target volume owes some quota to other volumes.
Specifically, the liability information is equal to the value obtained by updating the reference quota after cache resources are applied for or released in the current period. The reference quota is equal to the difference obtained by subtracting the old quota from the new quota; the new quota is the quota allocated to the target volume in the current period, and the old quota is the quota allocated to the target volume in a historical period. The historical period may be the period immediately preceding the current period, or any earlier period. For example, assume the target volume's new quota is 2000 and its old quota from the previous period is 1000; the reference quota is then 1000. A reference quota greater than 0 indicates that other volumes now owe the target volume 1000 of quota. If, in the current period, the processor 101 processes two write data requests for the target volume that apply for 200 and 300 of quota respectively, the liability information is then 1000-200-300=500. If the liability information of the target volume is greater than 0 in S302, S303 is performed; if it is equal to 0, S304 is performed; if it is less than 0, S305 is performed.
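The numeric example above can be restated as a small sketch. Names are hypothetical; `events` records applied-for quota as negative values and released quota as positive values:

```python
# Hypothetical sketch of the liability computation: start from the
# reference quota (new quota minus old quota) and update it for every
# application or release of cache resources in the current period.

def liability(new_quota, old_quota, events):
    ref = new_quota - old_quota        # reference quota
    for delta in events:               # negative: applied; positive: released
        ref += delta
    return ref

# New quota 2000, old quota 1000, two write data requests applying for
# 200 and 300 of quota, as in the description's example.
value = liability(2000, 1000, [-200, -300])
```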
In S303, since the liability information is greater than 0, indicating that other volumes owe the target volume some quota, the cache resources allocated for the write data request may be taken from the public space.
In S304, since the liability information is equal to 0, indicating that the target volume's quota is exactly sufficient, the cache resources allocated for the write data request are still taken from the quota allocated to the target volume.
In S305, since the liability information is less than 0, indicating that the target volume owes some quota to other volumes, the cache resources allocated for the write data request are likewise taken from the quota allocated to the target volume.
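Steps S302 to S305 can be sketched as a single decision function. The names and the `state` layout are hypothetical, and the assumption that every application decreases the liability value (whichever space serves it) follows the numeric example in the description:

```python
# Hypothetical sketch of the allocation decision of Fig. 3: liability > 0
# routes the write data request to the public space (S303); otherwise the
# volume's own quota is consumed (S304/S305).

def allocate(state, volume, size):
    if state["liability"][volume] > 0:
        state["public"] -= size            # S303: allocate from public space
        source = "public"
    else:
        state["quota"][volume] -= size     # S304/S305: use the volume's quota
        source = "quota"
    state["liability"][volume] -= size     # applying for quota updates liability
    return source

state = {"public": 1000,
         "quota": {"vol1": 800},
         "liability": {"vol1": 500}}       # other volumes owe vol1 some quota
src = allocate(state, "vol1", 200)
```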
According to the embodiment shown in Fig. 3, the cache space includes a private space and a public space, the private space is divided into several parts, and each part is allocated as a quota to one of the volumes. When a write data request is received, how to allocate cache resources for it is determined based on the liability information of the target volume to be accessed. If the liability information of the target volume is greater than zero, cache resources are allocated for the write data request from the public space. Liability information greater than zero means that other volumes still owe the target volume some quota; therefore cache resources can be allocated for the write data request from the public space, preventing the target volume's quota from being occupied further, and thereby improving the processing efficiency of write data requests for the target volume.
It will be appreciated that when data is deleted, the cache resources it occupies can be freed. This embodiment decides, according to each volume's liability information, whether the released cache resources are added to the quota allocated to the volume or to the public space. Referring to Fig. 4, Fig. 4 is a flowchart of a cache resource release process according to an embodiment of the application. As shown in Fig. 4, the method comprises the following steps:
In S401, the processor 101 receives a data deletion request containing the address of the data. The volume in which the data is stored can be determined from this address. The target volume is again used as the example here.
S402 may refer to the description of S302 in fig. 3, and will not be described here.
In S403, since the liability information is greater than 0, indicating that other volumes owe the target volume some quota, the released cache resources should also be given to the target volume; accordingly, the operation is to add the released cache resources to the target volume's quota.
In S404, since the liability information is equal to 0, indicating that the target volume's quota is exactly sufficient, the released cache resources are still given to the target volume.
In S405, since the liability information is less than 0, indicating that the target volume owes some quota to other volumes, the released cache resources should be returned to the other volumes; accordingly, the operation is to add the released cache resources to the public space.
According to the embodiment shown in Fig. 4, when the liability information is greater than 0, other volumes owe the target volume some quota, and the released cache resources are added to the quota allocated to the target volume; this prevents the processing efficiency of write data requests for the target volume from degrading when other volumes owe the target volume a large amount of quota.
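The release-side decision of Fig. 4 (S402 to S405) admits a matching sketch, with the same hypothetical state layout as before; treating liability equal to zero like the greater-than-zero case follows S404:

```python
# Hypothetical sketch of the release decision of Fig. 4: liability >= 0
# returns freed resources to the target volume's quota (S403/S404);
# liability < 0 returns them to the public space (S405).

def release(state, volume, size):
    if state["liability"][volume] >= 0:
        state["quota"][volume] += size     # S403/S404: back to the volume
        target = "quota"
    else:
        state["public"] += size            # S405: back to the public space
        target = "public"
    state["liability"][volume] += size     # releasing updates liability
    return target

state = {"public": 1000,
         "quota": {"vol1": 800},
         "liability": {"vol1": -300}}      # vol1 owes quota to other volumes
dst = release(state, "vol1", 200)
```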
Referring to Fig. 5, Fig. 5 shows a cache resource allocation apparatus 50 according to an embodiment of the present application. The apparatus 50 is located in a storage device and includes a receiving module 501, a judging module 502, and an allocating module 503.
A receiving module 501, configured to receive a write data request, where the write data request is used to write data to be written to a target volume, where the storage device includes a plurality of volumes and a cache, where a space of the cache includes a private space and a public space, where the private space is divided into a plurality of parts, each part is allocated as a quota to one of the volumes, and where the target volume is one of the volumes. The receiving module 501 may be implemented by the processor 101 invoking program code in the cache 102, and its specific implementation may refer to S301 shown in fig. 3.
A judging module 502, configured to determine whether the liability information of the target volume is greater than zero, where the liability information indicates whether the quota allocated to the target volume has been overdrawn. The judging module 502 may be implemented by the processor 101 invoking the program code in the cache 102; for its specific implementation, refer to S302 shown in Fig. 3.
An allocating module 503, configured to allocate cache resources for the write data request from the public space if the liability information of the target volume is greater than zero. The allocating module 503 may be implemented by the processor 101 invoking program code in the cache 102; for its specific implementation, refer to S303 shown in Fig. 3.
Optionally, the apparatus 50 may further include a release module 504 configured to add the released cache resource to the quota allocated for the target volume when the cache resource allocated for the write data request is released. The release module 504 may be implemented by the processor 101 invoking program code in the cache 102, the specific implementation of which may refer to S403 shown in fig. 4.
Those of ordinary skill in the art will appreciate that aspects of the application, or possible implementations of those aspects, may be embodied as a system, a method, or a computer program product. Accordingly, aspects of the present application, or possible implementations thereof, may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, and so on), or an embodiment combining software and hardware aspects, all generally referred to herein as a "circuit," "module," or "system." Furthermore, aspects of the present application, or possible implementations thereof, may take the form of a computer program product, which refers to computer-readable program code stored in a computer-readable medium.
The computer-readable medium includes, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing, such as Random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), optical disk.
A processor in a computer reads computer readable program code stored in a computer readable medium, such that the processor is capable of executing the functional actions specified in each step or combination of steps in the flowchart.
The computer readable program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. It should also be noted that in some alternative implementations, the functions noted in the flowchart steps or blocks in the block diagrams may occur out of the order noted in the figures. For example, two steps or blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Those of ordinary skill in the art may implement the described functionality using different approaches for each particular application, but such implementation is not considered to be beyond the scope of the present application.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and those skilled in the art can easily conceive of changes and substitutions within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A cache resource allocation method, comprising the following steps:
receiving a write data request, wherein the write data request is used for writing data to be written into a target volume, a storage device comprises a plurality of volumes and a cache, a space of the cache comprises a private space and a public space, the private space is divided into several parts, each part is allocated as a quota to one of the volumes, and the target volume is one of the volumes;
judging whether liability information of the target volume is greater than zero, wherein the liability information is used for indicating whether the quota allocated to the target volume has been overdrawn;
if the liability information of the target volume is greater than zero, allocating cache resources for the write data request from the public space; and
when the cache resources allocated for the write data request are released, adding the released cache resources to the quota allocated to the target volume.
2. The method of claim 1, wherein the liability information is equal to a value obtained by updating a reference quota after a cache resource is applied for in a current period, or the liability information is equal to a value obtained by updating the reference quota after the cache resource is released in the current period, the reference quota is equal to a difference obtained by subtracting an old quota from a new quota, the new quota is the quota allocated to the target volume in the current period, and the old quota is the quota allocated to the target volume in a historical period.
3. The method of claim 1, wherein each volume corresponds to a cache queue that holds one or more pending write data requests, and the method further comprises:
allocating the quota to each volume according to the number and size of the write data requests contained in the cache queue corresponding to the volume, wherein the more write data requests the cache queue contains, and the larger those write data requests are, the more quota is allocated to the volume corresponding to the cache queue.
4. The method as recited in claim 1, further comprising:
allocating the quota to each volume according to a cache concurrency number of the volume, wherein the cache concurrency number is used for indicating the number of write requests that can be written to the volume concurrently when data in the cache is copied to a hard disk.
5. The method of claim 2, wherein the historical period is a period preceding the current period.
6. A cache resource allocation apparatus, wherein the apparatus is located in a storage device, and comprises:
a receiving module, configured to receive a write data request, where the write data request is used to write data to be written into a target volume, the storage device includes a plurality of volumes and a cache, a space of the cache includes a private space and a public space, the private space is divided into a plurality of parts, each part is allocated as a quota to one of the volumes, and the target volume is one of the volumes;
the judging module is used for judging whether liability information of the target volume is greater than zero, wherein the liability information is used for indicating whether the quota allocated to the target volume has been overdrawn;
the allocation module is used for allocating cache resources for the data writing request from the public space if the liability information of the target volume is greater than zero;
and the release module is used for adding the released cache resources to the quota allocated for the target volume when the cache resources allocated for the data writing request are released.
7. The apparatus of claim 6, wherein the debt information is equal to a value obtained by updating a reference quota after cache resources are applied for in a current period, or the debt information is equal to a value obtained by updating the reference quota after cache resources are released in the current period, where the reference quota is equal to the difference obtained by subtracting an old quota from a new quota, the new quota being the quota allocated to the target volume in the current period and the old quota being the quota allocated to the target volume in a history period.
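The reference-quota arithmetic in claim 7 reduces to a subtraction plus an incremental update. The sketch below is one assumed reading: the sign convention for how applying for and releasing resources moves the debt value is not spelled out in the claim, so the update rule here is an illustrative guess, not the patent's definition.

```python
def reference_quota(new_quota, old_quota):
    # Claim 7: the reference quota is the new quota minus the old quota.
    return new_quota - old_quota


def update_debt(ref_quota, applied=0, released=0):
    # Assumed update rule: applying for cache resources deepens the
    # overdraft and releasing them reduces it. The actual direction of
    # the update is an assumption made for illustration.
    return ref_quota + applied - released
```

With this convention, a volume whose quota grew from 64 to 96 in the current period starts the period with a reference quota of 32, which is then adjusted as resources are applied for or released.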
8. The apparatus of claim 6, wherein each volume corresponds to a buffer queue in which one or more pending write data requests are held,
the allocation module is further configured to allocate the quota for each volume according to the number and the size of the write data requests contained in the buffer queue corresponding to the volume, where the more write data requests the buffer queue contains, the larger the quota allocated to the volume corresponding to the buffer queue, and the larger the write data requests contained in the buffer queue are, the larger the quota allocated to the volume corresponding to the buffer queue.
9. The apparatus of claim 6, wherein
the allocation module is further configured to allocate the quota to the volume according to a cache concurrency number of each volume, where the cache concurrency number is used to indicate a number of write requests that can be written to the volume concurrently when data in the cache is copied to the hard disk.
10. The apparatus of claim 7, wherein the historical period is a period preceding the current period.
CN201711213099.3A 2017-11-28 2017-11-28 Cache resource allocation and device Active CN109840217B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711213099.3A CN109840217B (en) 2017-11-28 2017-11-28 Cache resource allocation and device

Publications (2)

Publication Number Publication Date
CN109840217A CN109840217A (en) 2019-06-04
CN109840217B true CN109840217B (en) 2023-10-20

Family

ID=66879503

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711213099.3A Active CN109840217B (en) 2017-11-28 2017-11-28 Cache resource allocation and device

Country Status (1)

Country Link
CN (1) CN109840217B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112306904B (en) * 2020-11-20 2022-03-29 新华三大数据技术有限公司 Cache data disk refreshing method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013023090A2 (en) * 2011-08-09 2013-02-14 Fusion-Io, Inc. Systems and methods for a file-level cache
CN103699496A (en) * 2012-09-27 2014-04-02 株式会社日立制作所 Hierarchy memory management

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10176098B2 (en) * 2014-11-17 2019-01-08 Hitachi, Ltd. Method and apparatus for data cache in converged system
US9733849B2 (en) * 2014-11-21 2017-08-15 Security First Corp. Gateway for cloud-based secure storage
US9684467B2 (en) * 2015-05-18 2017-06-20 Nimble Storage, Inc. Management of pinned storage in flash based on flash-to-disk capacity ratio
JP6540391B2 (en) * 2015-09-03 2019-07-10 富士通株式会社 Storage control device, storage control program, and storage control method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013023090A2 (en) * 2011-08-09 2013-02-14 Fusion-Io, Inc. Systems and methods for a file-level cache
CN103699496A (en) * 2012-09-27 2014-04-02 株式会社日立制作所 Hierarchy memory management

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on shared memory resource allocation and management in multi-core systems; Gao Ke et al.; Chinese Journal of Computers; 2015-05 (No. 05); full text *

Also Published As

Publication number Publication date
CN109840217A (en) 2019-06-04

Similar Documents

Publication Publication Date Title
US9507720B2 (en) Block storage-based data processing methods, apparatus, and systems
CN109213696B (en) Method and apparatus for cache management
US11960749B2 (en) Data migration method, host, and solid state disk
CN108733316B (en) Method and manager for managing storage system
EP2378410A2 (en) Method and apparatus to manage tier information
CN108984104B (en) Method and apparatus for cache management
US9658773B2 (en) Management of extents for space efficient storage volumes by reusing previously allocated extents
CN106201652B (en) Data processing method and virtual machine
US9983826B2 (en) Data storage device deferred secure delete
US11755357B2 (en) Parameterized launch acceleration for compute instances
US20170262220A1 (en) Storage control device, method of controlling data migration and non-transitory computer-readable storage medium
US10209905B2 (en) Reusing storage blocks of a file system
CN112948279A (en) Method, apparatus and program product for managing access requests in a storage system
US8392653B2 (en) Methods and systems for releasing and re-allocating storage segments in a storage volume
CN109840217B (en) Cache resource allocation and device
WO2014126263A1 (en) Storage controlling device, storage controlling method, storage system and program
CN113535073B (en) Method for managing storage unit, electronic device and computer readable storage medium
US11099740B2 (en) Method, apparatus and computer program product for managing storage device
US10621096B2 (en) Read ahead management in a multi-stream workload
CN114442910A (en) Method, electronic device and computer program product for managing storage system
CN109739688B (en) Snapshot resource space management method and device and electronic equipment
CN104899158A (en) Memory access optimization method and memory access optimization device
EP3249540A1 (en) Method for writing multiple copies into storage device, and storage device
KR101549569B1 (en) Method for performing garbage collection and flash memory apparatus using the method
CN106776046B (en) SCST read-write optimization method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant