CN109840217A - Cache resource allocation method and device - Google Patents

Cache resource allocation method and device

Info

Publication number
CN109840217A
CN109840217A (application number CN201711213099.3A; granted as CN109840217B)
Authority
CN
China
Prior art keywords
volume
quota
write data
target volume
caching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711213099.3A
Other languages
Chinese (zh)
Other versions
CN109840217B (en)
Inventor
吴超
潘浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201711213099.3A priority Critical patent/CN109840217B/en
Publication of CN109840217A publication Critical patent/CN109840217A/en
Application granted granted Critical
Publication of CN109840217B publication Critical patent/CN109840217B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

A cache resource allocation method and device. The method includes receiving a write data request, where the write data request is used to write data to a target volume. The storage device includes multiple volumes and a cache; the space of the cache includes a private space and a public space; the private space is divided into several parts, each part being allocated as a quota to one of the multiple volumes; and the target volume is one of the multiple volumes. The method further includes determining whether debt information of the target volume is greater than zero, where the debt information indicates whether the quota allocated to the target volume has been overdrawn, and, if the debt information of the target volume is greater than zero, allocating cache resources for the write data request from the public space. The method can improve the efficiency with which the storage device processes services.

Description

Cache resource allocation method and device
Technical field
This application relates to the field of storage technologies, and in particular, to a cache resource allocation method and device.
Background
A storage device applies for cache resources when processing service requests. A common practice is to treat the cache space as a single cache resource pool and to allocate cache resources to service requests from that pool on a first-come, first-served basis. However, services that release resources slowly then squeeze, at the resource level, the services that release resources quickly, which reduces the efficiency with which the storage device processes services.
Summary of the invention
The cache resource allocation method and device provided in this application can improve the efficiency with which a storage device processes services.
A first aspect provides a cache resource allocation method. In the method, a processor receives a write data request, where the write data request is used to write data to a target volume. The storage device includes multiple volumes and a cache; the space of the cache includes a private space and a public space; the private space is divided into several parts, each part being allocated as a quota to one of the multiple volumes; and the target volume is one of the multiple volumes. The processor then determines whether debt information of the target volume is greater than zero, where the debt information indicates whether the quota allocated to the target volume has been overdrawn. If the debt information of the target volume is greater than zero, the processor allocates cache resources for the write data request from the public space.
According to the first aspect, the space of the cache includes a private space and a public space, the private space is divided into several parts, and each part is allocated as a quota to one of the multiple volumes. When a write data request is received, how to allocate cache resources for the write data request is determined according to the debt information of the target volume to be accessed. If the debt information of the target volume is greater than zero, cache resources are allocated for the write data request from the public space. Debt information greater than zero means that other volumes still owe the target volume some quota; allocating from the public space in this case prevents the quota of the target volume from being further occupied, thereby improving the processing efficiency of write data requests directed at the target volume.
With reference to the first aspect, in a first implementation of the first aspect, when the cache resources allocated for the write data request are released, the processor adds the released cache resources to the quota allocated to the target volume. According to the first implementation, when the debt information is greater than zero, other volumes owe the target volume some quota; adding the released cache resources to the quota allocated to the target volume prevents the other volumes from owing the target volume even more quota and dragging down the processing efficiency of write data requests directed at the target volume.
With reference to the first aspect, in a second implementation of the first aspect, the debt information is equal to the value obtained after a benchmark quota is updated upon each application for or release of cache resources in the current period. The benchmark quota is equal to the new quota minus the old quota, where the new quota is the quota allocated to the target volume in the current period and the old quota is the quota allocated to the target volume in a history period.
With reference to the first aspect, in a third implementation of the first aspect, each volume corresponds to a buffer queue, and one or more write data requests waiting to be processed are stored in the buffer queue. The processor allocates the quota to each volume according to the quantity and size of the write data requests in the buffer queue corresponding to that volume: the more write data requests a buffer queue contains, the more quota is allocated to the corresponding volume, and the larger those write data requests are, the more quota is allocated to the corresponding volume. According to the third implementation, quotas are allocated according to the service pressure of each volume, so cache resources are tilted toward volumes with high service pressure, which improves the efficiency with which the storage device serves services.
With reference to the first aspect, in a fourth implementation of the first aspect, the processor allocates the quota to each volume according to the cache concurrency of that volume, where the cache concurrency indicates the quantity of write requests that can be concurrently written to the volume when data in the cache is copied to a hard disk. According to the fourth implementation, quotas are allocated according to the cache concurrency of each volume, so cache resources are tilted toward volumes with high concurrency capability, which improves the efficiency with which the storage device serves services.
With reference to the second implementation of the first aspect, in a further implementation of the first aspect, the history period is the period immediately preceding the current period.
A second aspect of this application provides a cache resource allocation device, where the device is configured to perform the method in the first aspect or any one of its implementations.
A third aspect of this application provides a storage device, including a processor and a cache. The processor invokes program code in the cache to implement the method in the first aspect or any one of its implementations.
A fourth aspect of this application provides a computer program product, including a computer-readable storage medium that stores program code, where the instructions included in the program code are used to perform the method in the first aspect or any one of its implementations.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required in the embodiments are briefly described below.
Fig. 1 is a system architecture diagram according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of a cache space according to an embodiment of the present invention;
Fig. 3 is a schematic flowchart of a cache resource allocation method according to an embodiment of the present invention;
Fig. 4 is a schematic flowchart of a cache resource release method according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a cache resource allocation device according to an embodiment of the present invention.
Description of embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings.
Fig. 1 is a system architecture diagram according to an embodiment of the present invention. The storage system provided in this embodiment includes a host 20, a controller 11, and multiple hard disks 22. The controller 11 may be a computing device such as a server or a desktop computer, on which an operating system and application programs are installed. The controller is connected to the host 20 through a storage area network (SAN). Specifically, the host 20 may send a write data request to the controller 11; after receiving the write data request, the controller 11 allocates cache resources for it and temporarily stores the data carried in the write data request in the cache 102. When the data stored in the cache 102 reaches a certain watermark, the data in the cache 102 is written to the hard disks 22. The host 20 may also send a read data request to the controller 11; after receiving the read data request, the controller 11 searches its cache 102 according to the address in the read data request to determine whether the data to be read is stored there. If it is, the data to be read is sent directly to the host 20; if not, the data is obtained from the hard disks 22 and then sent to the host 20. In practical applications, the controller 11 and the hard disks 22 may be integrated in one storage device, or may be located in two mutually independent devices; this embodiment of the present invention places no restriction on the positional relationship between the controller 11 and the hard disks 22. For convenience of description, in this embodiment the controller 11 and the hard disks 22 are collectively referred to as a storage device.
In practical applications, the hard disks 22 of the storage device are logically combined, with the RAID level required by the application, to obtain a RAID group. Because a RAID group combines multiple hard disks, its capacity is usually large, so the available capacity of the RAID group is divided into smaller units called volumes; a volume hides the organization and composition of the RAID group from the host 20. At initialization, a volume may be allocated to one host 20 for use, or may be allocated to multiple hosts 20.
The structure of the controller 11 is described below. As shown in Fig. 1, the controller 11 includes at least a processor 101, a cache 102, and an interface (not shown in Fig. 1). The processor 101 is a central processing unit (CPU). In this embodiment of the present invention, the processor 101 may be configured to receive read data requests and write data requests from the host 20 and to process those read data requests and write data requests.
The cache 102 is used to temporarily store data received from the host 20 or data read from the hard disks 22. When the controller 11 receives multiple write data requests sent by the host, the data in those write data requests may be temporarily stored in the cache 102. When the used capacity of the cache 102 reaches a certain threshold, the data stored in the cache 102 is sent to the hard disks 22, and the hard disks 22 store the data. The cache 102 includes a volatile memory, a non-volatile memory, or a combination thereof. The volatile memory is, for example, a random access memory (RAM). The non-volatile memory is, for example, any machine-readable medium that can store program code, such as a floppy disk, a hard disk, a solid state disk (SSD), or an optical disc. It can be understood that the storage device needs to use the cache 102 when executing a write data request or a read data request.
As shown in Fig. 2, the storage space of the cache 102 includes a private space and a public space. The private space can be divided into several parts, each part being allocated as a quota to one volume. For example, quota 1 is allocated to volume 1, quota 2 is allocated to volume 2, and quota n is allocated to volume n. The quotas allocated to the volumes may be the same or different. During initial setup, the controller 11 may divide the private space evenly among the volumes, or distribute it according to some other principle. After the initial allocation, in every period the processor 101 may adjust the quota allocated to each volume according to the front-end service pressure or the back-end data flush ability, to obtain a new quota. A period is a fixed time interval; in this embodiment of the present invention, periods can be managed by a timer. The new quota is defined relative to the old quota: specifically, the new quota is the quota allocated in the current period, and the old quota is the quota allocated in a history period. The public space is not pre-allocated to any volume, and each volume can use the public space according to its own debt information.
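For illustration only, the layout described above can be modeled roughly as in the following sketch; all identifiers (VolumeQuota, CacheSpace, and so on) are hypothetical and do not come from the patent.

```go
// Illustrative sketch of the cache-space layout in Fig. 2 (hypothetical names).
package cachealloc

// VolumeQuota records the per-period quota state of a single volume.
type VolumeQuota struct {
	NewQuota int64 // quota allocated to the volume in the current period
	OldQuota int64 // quota allocated to the volume in the history period
	Debt     int64 // "debt information": starts a period at NewQuota-OldQuota and is
	// updated each time cache resources are applied for or released
}

// CacheSpace models the division of the cache into a private and a public region.
type CacheSpace struct {
	PublicFree int64                   // free space in the public region, not pre-allocated to any volume
	Private    map[string]*VolumeQuota // private region, split into one quota per volume
}
```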
In one case, the processor 101 may adjust the quota of each volume according to the front-end service pressure. Each volume corresponds to a buffer queue, and one or more write data requests waiting to be processed are stored in the buffer queue. The quota is allocated to each volume according to the quantity and the size of the write data requests in the buffer queue corresponding to that volume: the more write data requests a buffer queue contains, the more quota is allocated to the corresponding volume, and the larger those write data requests are, the more quota is allocated to the corresponding volume. Specifically, the product of the quantity of queued write data requests of each volume and a preset write data request size can be computed; the larger the product, the more quota is allocated. The buffer queue is located in the cache 102.
In another case, the processor 101 may adjust the quota of each volume according to the back-end data flush ability. Specifically, the quota may be allocated to each volume according to the cache concurrency of that volume, where the cache concurrency indicates the quantity of write requests that can be concurrently written to the volume when the data in the cache is copied to the hard disks. A write request here refers to a request for writing data from the cache 102 to the hard disks 22, and is different from a write data request received by the controller 11 from the host.
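The sketch below, given for illustration only, shows one possible way to compute the two kinds of periodic quota adjustment described above; the proportional split and all function names are assumptions, not taken from the patent.

```go
// Illustrative sketch of the periodic quota adjustment (hypothetical names).
package cachealloc

// splitByWeight divides the private space among volumes in proportion to their weights.
func splitByWeight(privateSpace int64, weights map[string]int64) map[string]int64 {
	var total int64
	for _, w := range weights {
		total += w
	}
	quotas := make(map[string]int64, len(weights))
	for id, w := range weights {
		if total == 0 {
			quotas[id] = privateSpace / int64(len(weights)) // fall back to an even split
		} else {
			quotas[id] = privateSpace * w / total
		}
	}
	return quotas
}

// frontEndWeight follows the first policy: the more queued write data requests a
// volume has, and the larger they are, the larger its weight.
func frontEndWeight(queued map[string]int, requestSize map[string]int64) map[string]int64 {
	weights := make(map[string]int64, len(queued))
	for id, n := range queued {
		weights[id] = int64(n) * requestSize[id]
	}
	return weights
}

// backEndWeight follows the second policy: volumes that can destage more write
// requests to the hard disks concurrently receive a larger weight.
func backEndWeight(concurrency map[string]int) map[string]int64 {
	weights := make(map[string]int64, len(concurrency))
	for id, c := range concurrency {
		weights[id] = int64(c)
	}
	return weights
}
```

Under these assumptions, splitByWeight(privateSpace, frontEndWeight(queued, requestSize)) would yield the front-end-pressure split, and splitByWeight(privateSpace, backEndWeight(concurrency)) the back-end split.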
Turning next to Fig. 3, Fig. 3 shows a cache resource allocation method provided in this embodiment. The method can be applied to the system shown in Fig. 1 and is executed by the processor 101. Specifically, the method includes the following steps.
In S301, the processor 101 receives a write data request, where the write data request includes the address of the data to be written. The volume to which the data is to be written can be determined according to the address. For convenience of description, that volume is referred to as the target volume.
In S302, the processor 101 determines whether the debt information of the target volume is greater than, less than, or equal to 0. The debt information indicates whether the quota allocated to the target volume has been overdrawn. If the debt information is greater than 0, other volumes owe the target volume some quota; if the debt information is equal to 0, the quota of the target volume is exactly sufficient, the target volume owes no other volume, and no other volume owes the target volume; if the debt information is less than 0, the target volume owes other volumes some quota.
Specifically, the debt information is equal to the value obtained after the benchmark quota is updated upon each application for or release of cache resources in the current period. The benchmark quota is equal to the new quota minus the old quota, where the new quota is the quota allocated to the target volume in the current period and the old quota is the quota allocated to the target volume in a history period. The history period may be the period immediately preceding the current period, or any one of several earlier periods. For example, assume that the new quota of the target volume is 2000 and the old quota in the previous period is 1000; the benchmark quota then equals 1000. A benchmark quota greater than 0 indicates that other volumes owe the target volume 1000 of quota at this point. In the current period, if the processor 101 has processed two write data requests directed at the target volume, applying for 200 and 300 of quota respectively, the debt information at this point equals 1000 - 200 - 300 = 500. If the debt information of the target volume is greater than 0 in S302, S303 is executed; if the debt information of the target volume is equal to 0, S304 is executed; if the debt information of the target volume is less than 0, S305 is executed.
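The arithmetic of this worked example can be reproduced with the short sketch below (illustrative only; variable names are hypothetical).

```go
package main

import "fmt"

// Reproduces the worked example above: the benchmark quota is the new quota
// minus the old quota, and the debt information is that value updated by the
// allocations made in the current period.
func main() {
	newQuota := int64(2000)          // quota allocated in the current period
	oldQuota := int64(1000)          // quota allocated in the previous period
	benchmark := newQuota - oldQuota // 1000: other volumes owe the target volume 1000

	debt := benchmark
	for _, alloc := range []int64{200, 300} { // two write data requests served this period
		debt -= alloc
	}
	fmt.Println(debt) // prints 500, matching the example in the description
}
```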
In S303, because the debt information is greater than 0, other volumes owe the target volume some quota, so the cache resources allocated for the write data request can be taken from the public space.
In S304, because the debt information is equal to 0, the quota of the target volume is exactly sufficient, so the cache resources allocated for the write data request still come from the quota allocated to the target volume.
In S305, because the debt information is less than 0, the target volume owes other volumes some quota, so the cache resources allocated for the write data request still come from the quota allocated to the target volume.
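A minimal sketch of the branch in S302 to S305 is given below; the function and parameter names are hypothetical, and the error handling for an exhausted region is an assumption, since the patent does not describe that case.

```go
package cachealloc

import "errors"

// allocateForWrite charges a write data request of `size` against the public space
// or against the target volume's own quota, depending on the sign of the volume's
// debt information, and updates that debt information (see the worked example above).
func allocateForWrite(debt, size, volumeQuotaFree, publicFree int64) (newDebt, newVolumeQuotaFree, newPublicFree int64, err error) {
	if debt > 0 {
		// S303: other volumes still owe the target volume quota, so use the public space.
		if publicFree < size {
			return debt, volumeQuotaFree, publicFree, errors.New("public space exhausted")
		}
		publicFree -= size
	} else {
		// S304 and S305: debt <= 0, so charge the quota allocated to the target volume.
		if volumeQuotaFree < size {
			return debt, volumeQuotaFree, publicFree, errors.New("volume quota exhausted")
		}
		volumeQuotaFree -= size
	}
	return debt - size, volumeQuotaFree, publicFree, nil
}
```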
According to the embodiment shown in Fig. 3, the space of the cache includes a private space and a public space; the private space is divided into several parts, and each part is allocated as a quota to one of the multiple volumes. When a write data request is received, how to allocate cache resources for the write data request is determined according to the debt information of the target volume to be accessed. If the debt information of the target volume is greater than zero, cache resources are allocated for the write data request from the public space. Debt information greater than zero means that other volumes still owe the target volume some quota, so cache resources for the write data request can be allocated from the public space, which prevents the quota of the target volume from being further occupied and its efficiency from being affected, thereby improving the processing efficiency of write data requests directed at the target volume.
It can be understood that, when data is deleted, the cache resources it occupies can be released. In this embodiment, whether the released cache resources are added back to the quota allocated to a volume or added to the public space is determined according to the debt information of each volume. Referring to Fig. 4, Fig. 4 is a schematic flowchart of cache resource release according to an embodiment of the present invention. As shown in Fig. 4, the procedure includes the following steps:
In S401, the processor 101 receives a data deletion request, where the data deletion request includes the address of the data. The volume involved can be determined according to the address. Here, the target volume is still used as an example.
For S402, refer to the description of S302 in Fig. 3; details are not repeated here.
In S403, because the debt information is greater than 0, other volumes owe the target volume some quota, so the released cache resources should be returned to the target volume; correspondingly, the operation is to add the released cache resources to the quota of the target volume.
In S404, because the debt information is equal to 0, the quota of the target volume is exactly sufficient, so the cache resources that were allocated for the write data request are still returned to the target volume.
In S405, because the debt information is less than 0, the target volume owes other volumes some quota, so the cache resources that were allocated for the write data request should be returned to the other volumes; correspondingly, the operation is to add the released cache resources to the public space.
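The release path of S402 to S405 can be sketched in the same style; as before, the names are hypothetical, and updating the debt information on release is an assumption based on its definition above.

```go
package cachealloc

// releaseForVolume credits released cache resources either back to the target
// volume's quota or to the public space, depending on the volume's debt information.
func releaseForVolume(debt, size, volumeQuotaFree, publicFree int64) (newDebt, newVolumeQuotaFree, newPublicFree int64) {
	if debt < 0 {
		// S405: the target volume owes other volumes quota, so the space returns to the public region.
		publicFree += size
	} else {
		// S403 and S404: debt >= 0, so the space is returned to the target volume's own quota.
		volumeQuotaFree += size
	}
	return debt + size, volumeQuotaFree, publicFree
}
```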
According to the embodiment shown in Fig. 4, when the debt information is greater than 0, other volumes owe the target volume some quota; adding the released cache resources to the quota allocated to the target volume prevents the other volumes from owing the target volume even more quota and dragging down the processing efficiency of write data requests directed at the target volume.
Referring to Fig. 5, Fig. 5 shows a cache resource allocation device 50 according to an embodiment of the present invention. The device is located in a storage device and includes a receiving module 501, a judging module 502, and an allocation module 503.
The receiving module 501 is configured to receive a write data request, where the write data request is used to write data to a target volume. The storage device includes multiple volumes and a cache; the space of the cache includes a private space and a public space; the private space is divided into several parts, each part being allocated as a quota to one of the multiple volumes; and the target volume is one of the multiple volumes. The receiving module 501 may be implemented by the processor 101 invoking program code in the cache 102; for its specific implementation, refer to S301 shown in Fig. 3.
The judging module 502 is configured to determine whether the debt information of the target volume is greater than zero, where the debt information indicates whether the quota allocated to the target volume has been overdrawn. The judging module 502 may be implemented by the processor 101 invoking program code in the cache 102; for its specific implementation, refer to S302 shown in Fig. 3.
The allocation module 503 is configured to: if the debt information of the target volume is greater than zero, allocate cache resources for the write data request from the public space. The allocation module 503 may be implemented by the processor 101 invoking program code in the cache 102; for its specific implementation, refer to S303 shown in Fig. 3.
Optionally, the device 50 may further include a release module 504, configured to: when the cache resources allocated for the write data request are released, add the released cache resources to the quota allocated to the target volume. The release module 504 may be implemented by the processor 101 invoking program code in the cache 102; for its specific implementation, refer to S403 shown in Fig. 4.
A person of ordinary skill in the art will recognize that possible implementations of various aspects of the present invention may be embodied as a system, a method, or a computer program product. Therefore, possible implementations of each aspect of the present invention may take the form of a complete hardware embodiment, a complete software embodiment (including firmware, resident software, and the like), or an embodiment combining software and hardware aspects, collectively referred to herein as a "circuit", a "module", or a "system". In addition, possible implementations of each aspect of the present invention may take the form of a computer program product, which refers to computer-readable program code stored in a computer-readable medium.
A computer-readable medium includes, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or apparatus, or any appropriate combination of the foregoing, such as a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or an optical disc.
A processor in a computer reads the computer-readable program code stored in the computer-readable medium, so that the processor can perform the functional actions specified in each step, or combination of steps, in the flowcharts.
The computer-readable program code may be executed entirely on a user's computer, partly on the user's computer, as a standalone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. It should also be noted that, in some alternative embodiments, the functions indicated by the steps in the flowcharts may not occur in the order indicated in the figures. For example, depending on the functions involved, two steps or two blocks shown in succession may actually be executed substantially concurrently, or may sometimes be executed in the reverse order.
A person of ordinary skill in the art may be aware that the units and algorithm steps described with reference to the examples disclosed in the embodiments of this specification can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the particular application and the design constraints of the technical solution. A person of ordinary skill in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered to be beyond the scope of the present invention.
The foregoing descriptions are merely specific embodiments, but the protection scope of the present invention is not limited thereto. Any change or replacement readily conceivable by a person of ordinary skill in the art within the technical scope disclosed in the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (12)

1. A cache resource allocation method, characterized by comprising:
receiving a write data request, wherein the write data request is used to write data to a target volume, a storage device comprises multiple volumes and a cache, a space of the cache comprises a private space and a public space, the private space is divided into several parts, each part is allocated as a quota to one volume of the multiple volumes, and the target volume is one volume of the multiple volumes;
determining whether debt information of the target volume is greater than zero, wherein the debt information is used to indicate whether the quota allocated to the target volume has been overdrawn; and
if the debt information of the target volume is greater than zero, allocating cache resources for the write data request from the public space.
2. The method according to claim 1, characterized by further comprising:
when the cache resources allocated for the write data request are released, adding the released cache resources to the quota allocated to the target volume.
3. The method according to claim 1, characterized in that the debt information is equal to a value obtained after a benchmark quota is updated upon application for or release of cache resources in a current period, the benchmark quota is equal to a new quota minus an old quota, the new quota is the quota allocated to the target volume in the current period, and the old quota is the quota allocated to the target volume in a history period.
4. The method according to claim 1, characterized in that each volume corresponds to a buffer queue, one or more write data requests to be processed are stored in the buffer queue, and the method further comprises:
allocating the quota to each volume according to the quantity and the size of the write data requests in the buffer queue corresponding to the volume, wherein the more write data requests the buffer queue contains, the more quota is allocated to the volume corresponding to the buffer queue, and the larger the write data requests in the buffer queue are, the more quota is allocated to the volume corresponding to the buffer queue.
5. The method according to claim 1, characterized by further comprising:
allocating the quota to each volume according to a cache concurrency of the volume, wherein the cache concurrency is used to indicate the quantity of write requests that can be concurrently written to the volume when data in the cache is copied to a hard disk.
6. The method according to claim 3, characterized in that the history period is a period immediately preceding the current period.
7. A cache resource allocation device, characterized in that the device is located in a storage device and comprises:
a receiving module, configured to receive a write data request, wherein the write data request is used to write data to a target volume, the storage device comprises multiple volumes and a cache, a space of the cache comprises a private space and a public space, the private space is divided into several parts, each part is allocated as a quota to one volume of the multiple volumes, and the target volume is one volume of the multiple volumes;
a judging module, configured to determine whether debt information of the target volume is greater than zero, wherein the debt information is used to indicate whether the quota allocated to the target volume has been overdrawn; and
an allocation module, configured to: if the debt information of the target volume is greater than zero, allocate cache resources for the write data request from the public space.
8. The device according to claim 7, characterized by further comprising:
a release module, configured to: when the cache resources allocated for the write data request are released, add the released cache resources to the quota allocated to the target volume.
9. The device according to claim 7, characterized in that the debt information is equal to a value obtained after a benchmark quota is updated upon application for or release of cache resources in a current period, the benchmark quota is equal to a new quota minus an old quota, the new quota is the quota allocated to the target volume in the current period, and the old quota is the quota allocated to the target volume in a history period.
10. The device according to claim 7, characterized in that each volume corresponds to a buffer queue, one or more write data requests to be processed are stored in the buffer queue, and
the allocation module is further configured to allocate the quota to each volume according to the quantity and the size of the write data requests in the buffer queue corresponding to the volume, wherein the more write data requests the buffer queue contains, the more quota is allocated to the volume corresponding to the buffer queue, and the larger the write data requests in the buffer queue are, the more quota is allocated to the volume corresponding to the buffer queue.
11. The device according to claim 7, characterized in that
the allocation module is further configured to allocate the quota to each volume according to a cache concurrency of the volume, wherein the cache concurrency is used to indicate the quantity of write requests that can be concurrently written to the volume when data in the cache is copied to a hard disk.
12. The device according to claim 9, characterized in that the history period is a period immediately preceding the current period.
CN201711213099.3A 2017-11-28 2017-11-28 Cache resource allocation method and device Active CN109840217B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711213099.3A CN109840217B (en) 2017-11-28 2017-11-28 Cache resource allocation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711213099.3A CN109840217B (en) 2017-11-28 2017-11-28 Cache resource allocation method and device

Publications (2)

Publication Number Publication Date
CN109840217A true CN109840217A (en) 2019-06-04
CN109840217B CN109840217B (en) 2023-10-20

Family

ID=66879503

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711213099.3A Active CN109840217B (en) 2017-11-28 2017-11-28 Cache resource allocation method and device

Country Status (1)

Country Link
CN (1) CN109840217B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013023090A2 (en) * 2011-08-09 2013-02-14 Fusion-Io, Inc. Systems and methods for a file-level cache
CN103699496A (en) * 2012-09-27 2014-04-02 株式会社日立制作所 Hierarchy memory management
US20170147491A1 (en) * 2014-11-17 2017-05-25 Hitachi, Ltd. Method and apparatus for data cache in converged system
US20160150047A1 (en) * 2014-11-21 2016-05-26 Security First Corp. Gateway for cloud-based secure storage
US20170285995A1 (en) * 2015-05-18 2017-10-05 Nimble Storage, Inc. Updating of pinned storage in flash based on changes to flash-to-disk capacity ratio
US20170068618A1 (en) * 2015-09-03 2017-03-09 Fujitsu Limited Storage controlling apparatus, computer-readable recording medium having storage controlling program stored therein, and storage controlling method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
高珂 et al., "Research on Shared Memory Resource Allocation and Management in Multi-core Systems" (多核系统共享内存资源分配和管理研究), 《计算机学报》 (Chinese Journal of Computers) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112306904A (en) * 2020-11-20 2021-02-02 新华三大数据技术有限公司 Cache data disk refreshing method and device
CN112306904B (en) * 2020-11-20 2022-03-29 新华三大数据技术有限公司 Cache data disk refreshing method and device

Also Published As

Publication number Publication date
CN109840217B (en) 2023-10-20

Similar Documents

Publication Publication Date Title
US8805902B2 (en) Managing snapshot storage pools
US9671960B2 (en) Rate matching technique for balancing segment cleaning and I/O workload
US9639459B2 (en) I/O latency and IOPs performance in thin provisioned volumes
US9639469B2 (en) Coherency controller with reduced data buffer
US20150215656A1 (en) Content management apparatus and method, and storage medium
JP6540391B2 (en) Storage control device, storage control program, and storage control method
US8479046B1 (en) Systems, methods, and computer readable media for tracking pool storage space reservations
US8484424B2 (en) Storage system, control program and storage system control method
TW201220060A (en) Latency reduction associated with a response to a request in a storage system
JP2005031929A (en) Management server for assigning storage area to server, storage device system, and program
US10705876B2 (en) Operation management system and operation management method
US9792050B2 (en) Distributed caching systems and methods
JP2007316725A (en) Storage area management method and management computer
US10176098B2 (en) Method and apparatus for data cache in converged system
JP2007102762A (en) Resource management method in logically partition divisioned storage system
US9104317B2 (en) Computer system and method of controlling I/O with respect to storage apparatus
CN105302489B (en) A kind of remote embedded accumulator system of heterogeneous polynuclear and method
CN109840217A (en) A kind of cache resource allocation and device
US8984235B2 (en) Storage apparatus and control method for storage apparatus
US20210349756A1 (en) Weighted resource cost matrix scheduler
US10621096B2 (en) Read ahead management in a multi-stream workload
KR102220468B1 (en) Preemptive cache post-recording with transaction support
JP2018181207A (en) Device, method, and program for storage control
CN110199265B (en) Storage device and storage area management method
JP2003044328A (en) Disk subsystem and its storage control method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant