CN102262512A - System, device and method for realizing disk array cache partition management - Google Patents

System, device and method for realizing disk array cache partition management

Info

Publication number
CN102262512A
CN102262512A
Authority
CN
China
Prior art keywords
data
cache
partitions
cache partitions
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011102056281A
Other languages
Chinese (zh)
Inventor
吕烁
文中领
Current Assignee
Inspur Beijing Electronic Information Industry Co Ltd
Original Assignee
Inspur Beijing Electronic Information Industry Co Ltd
Priority date
Filing date
Publication date
Application filed by Inspur Beijing Electronic Information Industry Co Ltd filed Critical Inspur Beijing Electronic Information Industry Co Ltd
Priority to CN2011102056281A priority Critical patent/CN102262512A/en
Publication of CN102262512A publication Critical patent/CN102262512A/en
Pending legal-status Critical Current

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention provides a system, device and method for realizing disk array cache partition management. The system comprises one or more application service modules, a cache pool management device and a back-end storage device. The application service modules send data read/write requests to the cache pool management device and receive the data it returns. The cache pool management device partitions and manages the cache space in a cache pool, setting up cache partitions for the application services; according to the read/write requests of the application service modules, it reads data corresponding to the application services from, and writes such data to, the back-end storage device through the allocated cache partitions. The back-end storage device stores the data corresponding to the application services. With the system, device and method, cache resources can be allocated to the application services that need them most, so that the application performance of the cache resources is optimized while resource contention among application services is effectively reduced.

Description

System, device and method for realizing disk array cache partition management
Technical field
The present invention relates to disk array cache partitioning technology, and in particular to a system, device and method for application-specific disk array cache partition management.
Background technology
The demand for high-capacity storage prompted the birth of the Redundant Array of Independent Disks (RAID) technology, which has been turned into corresponding disk array products in practice.

RAID technology uses a RAID controller (implemented in hardware or software) to combine multiple hard disks into a single large virtual disk; its characteristics are faster reads from multiple disks in parallel and improved disk fault tolerance.

With the spread of the Internet and the explosive growth of information capacity requirements, demand for disk arrays expands day by day. As a shared resource, a storage system must serve multiple different applications at the same time; for example, dissimilar application types such as database servers, file servers and video servers impose different load characteristics and performance requirements on the storage system.

How to allocate cache resources to the applications that need them most, so that performance is optimized while resource contention among applications is effectively reduced, has therefore become an urgent caching problem to be solved.
Summary of the invention
The technical problem to be solved by the present invention is to provide a system, device and method for realizing disk array cache partition management that can effectively reduce resource contention among applications.

In order to solve the above technical problem, the invention provides a system for realizing disk array cache partition management, comprising one or more application service modules, a cache pool management device and a back-end storage device connected in sequence, wherein:

the application service modules are used to send data read/write requests to the cache pool management device and to receive the data returned by the cache pool management device;

the cache pool management device is used to partition and manage the cache space in the cache pool, setting up cache partitions for the application services; according to the read/write requests of the application service modules, it reads data corresponding to the application services from, and writes such data to, the back-end storage device through the allocated cache partitions;

the back-end storage device is used to store the data corresponding to the application services.
Further, each application service module comprises an application service IO thread, the cache pool management device comprises a cache partition module, a cache allocation module and a cache data access module connected in sequence, and the back-end storage device comprises a back-end disk array, wherein:

the application service IO thread is used to send data read/write requests to the cache data access module and to provide the data returned by the cache data access module to the corresponding application service;

the cache partition module is used to set up, for each application service, a corresponding cache partition in the cache space of the cache pool, including the total capacity of the cache partition and the capacity of the unit data block within the partition;

the cache allocation module is used to allocate a cache partition with one or more data blocks according to a lookup result it receives, and to output a data read/write instruction or a data read instruction to the cache data access module;

the cache data access module is used to look up valid data blocks in the corresponding cache partition according to a received data read/write request and output the lookup result to the cache allocation module; and, according to the data read/write instruction or data read instruction it receives, to write data read from the back-end disk array into the allocated cache partition and/or return data read from the corresponding cache partition to the application service IO thread.
Further,

if the lookup result indicates a data hit, the cache allocation module updates the data state of the cache partition to 'useful'; if in addition the read/write request is to read data from the back-end disk, or if the lookup result indicates a data miss, it allocates a cache partition and outputs a data read/write instruction to the cache data access module; if the lookup result indicates a data hit and the read/write request is to read data from the cache partition, it outputs a data read instruction to the cache data access module;

according to the data read/write instruction, the cache data access module writes the data read from the back-end disk array into the allocated cache partition and returns the data read from that cache partition to the application service IO thread; or, according to the data read instruction, it returns the data read directly from that cache partition to the application service IO thread.
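The allocation module's dispatch described above reduces to a small decision table. The following is a minimal sketch, not an interface taken from the patent: the function name `dispatch`, its parameters and the instruction strings are all hypothetical stand-ins for the module's outputs.

```python
def dispatch(lookup_hit, read_from_backend):
    """Return the instruction the allocation module would emit.

    lookup_hit        -- True if the lookup found valid data blocks (a data hit)
    read_from_backend -- True if the request is to read data from the back-end disk
    """
    if lookup_hit and not read_from_backend:
        # Hit, and the request reads from the cache partition:
        # the data access module can serve it directly.
        return "data-read"
    # Miss, or a hit whose request still targets the back-end disk:
    # a partition must be allocated and a read/write instruction issued.
    return "data-read-write"
```

A miss always leads to a read/write instruction, since the data must first be staged from the back-end disk array into the allocated partition.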
Further, the cache pool management device also comprises a cache reclamation module connected with the cache partition module, wherein:

at set intervals, the cache partition module queries the number of free data blocks in the cache pool and, if it is below the lower threshold, outputs a cache reclamation command to the cache reclamation module;

the cache reclamation module is used to reclaim, according to the cache reclamation command, the data blocks whose state in a cache partition is 'useless' back into the cache pool and to update the state of the reclaimed data blocks to 'empty'.
Further,

according to the cache reclamation policy, the cache reclamation module uses a system reclamation thread to start the reclamation thread of the corresponding cache partition, and under the control of the system reclamation thread the cache partition reclamation thread invokes a reclamation algorithm to reclaim the data blocks in the 'useless' state within that cache partition;

the cache reclamation policy comprises either or both of a by-priority reclamation policy and a per-partition lower-bound data-block reclamation policy.
In order to solve the above technical problem, the invention also provides a cache pool management device for realizing disk array cache partitioning, comprising a cache partition module, a cache allocation module and a cache data access module connected in sequence, wherein:

the cache partition module is used to set up, for each application service, a corresponding cache partition in the cache space of the cache pool;

the cache allocation module is used to allocate cache partitions according to a lookup result it receives, and to output a data read/write instruction or a data read instruction to the cache data access module;

the cache data access module is used to look up valid data blocks in the corresponding cache partition according to the data read/write request sent by an application service module and output the lookup result to the cache allocation module; and, according to the data read/write instruction or data read instruction it receives, to write data read from the back-end storage device into the allocated cache partition and/or return data read from the corresponding cache partition to the application service module.
Further,

the cache partition set up by the cache partition module includes the total capacity of the cache partition and the capacity of the unit data block within the partition;

if the lookup result indicates a data hit, the cache allocation module updates the data state of the cache partition to 'useful'; if in addition the read/write request is to read data from the back-end disk, or if the lookup result indicates a data miss, it allocates a cache partition with one or more data blocks and outputs a data read/write instruction to the cache data access module; if the lookup result indicates a data hit and the read/write request is to read data from the cache partition, it outputs a data read instruction to the cache data access module;

according to the data read/write instruction, the cache data access module writes the data read from the back-end storage device into the allocated cache partition and returns the data read from that cache partition to the application service module; or, according to the data read instruction, it returns the data read directly from that cache partition to the application service module.
Further, the cache pool management device also comprises a cache reclamation module connected with the cache partition module, wherein:

at set intervals, the cache partition module queries the number of free data blocks in the cache pool and, if it is below the lower threshold, outputs a cache reclamation command to the cache reclamation module;

the cache reclamation module is used to reclaim, according to the cache reclamation command and the cache reclamation policy, the data blocks whose state in a cache partition is 'useless' back into the cache pool and to update the state of the reclaimed data blocks to 'empty'; the cache reclamation policy comprises either or both of a by-priority reclamation policy and a per-partition lower-bound data-block reclamation policy.
In order to solve the above technical problem, the invention also provides a method for realizing disk array cache partition management, comprising:

the cache pool management device partitions and manages the cache space in the cache pool, setting up cache partitions for the application services.

Further, the method also comprises:

an application service module sends a data read/write request to the cache pool management device;

according to the received read/write request, the cache pool management device reads data corresponding to the application service from, and writes such data to, the back-end storage device through the allocated cache partition.
Further,

the cache partition set up by the cache pool management device for an application service includes the total capacity of the cache partition and the capacity of the unit data block within the partition.
Further,

the data read/write request sent by the application service module to said cache pool management device comprises reading data from the back-end storage device and reading data from the cache partition;

that the cache pool management device, according to the received read/write request, reads data corresponding to the application service from, and writes such data to, the back-end storage device through the allocated cache partition specifically comprises:

looking up valid data blocks in the corresponding cache partition according to the received read/write request, allocating a cache partition with one or more data blocks according to the lookup result, and writing the data read from the back-end storage device into the allocated cache partition and/or returning the data read from the corresponding cache partition to the application service module.
Further, that the cache pool management device allocates a cache partition with one or more data blocks according to the lookup result, and writes the data read from the back-end storage device into the allocated cache partition and/or returns the data read from the corresponding cache partition to the application service module, specifically comprises:

if the lookup result indicates a data hit, updating the data state of the cache partition to 'useful'; if in addition the read/write request is to read data from the back-end disk, or if the lookup result indicates a data miss, allocating a cache partition with one or more data blocks, writing the data read from the back-end storage device into the allocated cache partition, and returning the data read from that cache partition to the application service module;

or, if the lookup result indicates a data hit and the read/write request is to read data from the cache partition, returning the data read directly from that cache partition to the application service module.
Further, the method also comprises:

at set intervals, the cache pool management device queries the number of free data blocks in the cache pool and, if it is below the lower threshold, reclaims the data blocks whose state in a cache partition is 'useless' back into the cache pool according to the cache reclamation policy and updates the state of the reclaimed data blocks to 'empty'; the cache reclamation policy comprises a by-priority reclamation policy and a per-partition lower-bound data-block reclamation policy.
Because the present invention can dynamically divide the cache into multiple regions for specific application services, and the data blocks in the cache partitions can be redistributed among multiple cache partitions automatically, cache resources can be allocated to the application services that need them most; the application performance of the cache resources is thereby optimized while resource contention among application services is effectively reduced.
Description of drawings
Fig. 1 is a schematic diagram of the partitioning that the present invention performs on the cache of a back-end disk array;

Fig. 2 is a structural schematic diagram of the system embodiment and device embodiment for realizing disk array cache partition management according to the present invention;

Fig. 3 is a flow chart of the method embodiment for realizing disk array cache partition management according to the present invention.
Embodiment
The technical scheme of the present invention is elaborated below in conjunction with the accompanying drawings and preferred embodiments. The embodiments listed below are only used to describe and explain the present invention and do not constitute a restriction of its technical scheme.
The present invention provides a dynamic cache partitioning method for disk arrays, which performs partition management of cache resources based on the various servers in the different application services (including one or more of database servers, file servers and multimedia servers), as shown in Fig. 1.

Because each application is built on a logical unit number (LUN, Logical Unit Number) of the back-end disk array, the present invention sets, for each LUN, the priority of the application service, the total capacity of the corresponding cache partition, and the data block size within the partition. Each independent cache partition can choose a suitable data block size according to the load characteristics of its application service and server, thereby improving the utilization of cache resources and the data hit rate. For example, each data block in the cache partition for a database server may be set to 4K, each data block for a file server to 16K, and each data block for a video server to 64K; this makes cached-data lookup and the management of metadata in the cache quite efficient. At the same time, a corresponding stripe size is chosen for the disk array backing each cache partition, optimizing disk write operations.
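The per-LUN parameters above (priority, total partition capacity, block size) can be sketched as a small configuration record. This is an illustration, not the patent's data structure; the class name, fields and example capacities are assumptions, and only the 4K/16K/64K block sizes come from the text.

```python
from dataclasses import dataclass

@dataclass
class CachePartition:
    lun: int          # logical unit number the application service is built on
    priority: int     # application-service priority (higher = more critical)
    total_bytes: int  # total capacity of the cache partition
    block_bytes: int  # capacity of the unit data block within the partition

    def num_blocks(self):
        # How many unit data blocks the partition holds.
        return self.total_bytes // self.block_bytes

# Block sizes matched to load characteristics, per the example above;
# the total capacities are made up for illustration.
db    = CachePartition(lun=0, priority=3, total_bytes=256 << 20, block_bytes=4 << 10)
files = CachePartition(lun=1, priority=2, total_bytes=256 << 20, block_bytes=16 << 10)
video = CachePartition(lun=2, priority=1, total_bytes=512 << 20, block_bytes=64 << 10)
```

A larger block for the video partition means fewer metadata entries per gigabyte cached, which is what makes lookup and metadata management cheaper for streaming-style workloads.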
The present invention also reclaims cache resources per cache partition through configurable cache reclamation policies, including a by-priority reclamation policy, a per-partition lower-bound data-block reclamation policy and a … reclamation policy, thereby realizing differentiated cache service for application services of different levels: more cache resources can be allocated to key application services, guaranteeing that critical services have sufficient cache resources to use. At the same time, the present invention combines a cache allocation mechanism that allocates cache partitions according to the demand of the application services with cache-space reclamation under the reclamation policies, making it possible to use different caches (i.e. different cache partition capacities and data block sizes) for different application services. Here, the by-priority reclamation policy means that cache resources are reclaimed from different cache partitions according to the priority of their application services; for example, cache resources of low-priority application services are reclaimed first. Reclaiming cache by priority allows each cache partition to be adjusted dynamically.
The structure of the system embodiment for realizing disk array cache partition management provided by the invention is shown in Fig. 2; it comprises one or more application service modules, a cache pool management device and a back-end storage device connected in sequence, wherein:

the application service modules are used to send data read/write requests to the cache pool management device and to receive the data returned by the cache pool management device;

the cache pool management device is used to partition and manage the cache space in the cache pool, setting up cache partitions for the application services; according to the read/write requests of the application service modules, it reads data corresponding to the application services from, and writes such data to, the back-end storage device through the allocated cache partitions;

the back-end storage device is used to store the data corresponding to the application service modules.
The cache pool management device shown in Fig. 2 further comprises a cache partition module, a cache allocation module and a cache data access module connected in sequence; each application service module comprises an application service IO thread (not shown in Fig. 2), and the back-end storage device comprises a back-end disk array, wherein:

the application service IO thread is used to send data read/write requests to the cache data access module and to provide the data returned by the cache data access module to the corresponding application service;

the cache partition module is used to set up, for each application service, a corresponding cache partition in the cache space of the cache pool, including the total capacity of the cache partition and the capacity of the unit data block within the partition;

the cache allocation module is used to allocate a cache partition whose number of data blocks is N according to the lookup result it receives, and to output a data read/write instruction or a data read instruction to the cache data access module, N being an integer greater than or equal to 1;

the cache data access module is used to look up valid data blocks in the corresponding cache partition according to the received data read/write request and output the lookup result to the cache allocation module; and, according to the data read/write instruction or data read instruction it receives, to write data read from the back-end disk array into the allocated cache partition and/or return data read from the corresponding cache partition to the application service IO thread.
In the above system embodiment,

if the lookup result indicates a data hit, the cache allocation module updates the data state of the cache partition to 'useful'; if in addition the request is to read data from the back-end disk, or if the lookup result indicates a data miss, it allocates a cache partition whose number of data blocks is N and outputs a data read/write instruction to the cache data access module; if the lookup result indicates a data hit and the request is to read data from the cache partition, it outputs a data read instruction to the cache data access module;

according to the data read/write instruction, the cache data access module writes the data read from the back-end disk into the allocated cache partition and returns the data read from that cache partition to the application service IO thread; or, according to the data read instruction, it returns the data read directly from that cache partition to the application service IO thread.
In the above system embodiment, the cache pool management device also comprises a cache reclamation module connected with the cache partition module, wherein:

at set intervals, the cache partition module queries the number of free data blocks in the cache pool and, if it is below the lower threshold, outputs a cache reclamation command to the cache reclamation module;

the cache reclamation module is used to reclaim, according to the cache reclamation command, the data blocks whose state in a cache partition is 'useless' back into the cache pool and to update the state of the reclaimed data blocks to 'empty'.

According to the cache reclamation policy, the cache reclamation module uses a system reclamation thread to start the reclamation thread of the corresponding cache partition, and under the control of the system reclamation thread the cache partition reclamation thread reclaims the data blocks in the 'useless' state within that partition.

The cache reclamation policy comprises a by-priority reclamation policy and a per-partition lower-bound data-block reclamation policy.
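The periodic low-watermark check and the 'useless' → 'empty' state transition can be sketched as follows. This is a single-threaded simplification, assuming per-block state strings; the `CachePool` class and state names are illustrative, not from the patent.

```python
class CachePool:
    def __init__(self, states, low_limit):
        self.states = states        # per-block state: "empty", "useful" or "useless"
        self.low_limit = low_limit  # lower threshold on free (empty) blocks

    def free_blocks(self):
        # Number of free data blocks currently in the pool.
        return sum(1 for s in self.states if s == "empty")

    def maybe_reclaim(self):
        """Periodic check: if free blocks drop below the lower threshold,
        reclaim 'useless' blocks back into the pool and mark them 'empty'.
        Returns the number of blocks reclaimed."""
        if self.free_blocks() >= self.low_limit:
            return 0
        reclaimed = 0
        for i, s in enumerate(self.states):
            if s == "useless":          # only useless blocks are reclaimed;
                self.states[i] = "empty"  # useful (hot) data stays cached
                reclaimed += 1
        return reclaimed
```

In the patent this check is done by the cache partition module, which merely issues a reclamation command; the actual state transitions are performed by the reclamation module's threads.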
Corresponding to the above system embodiment, the present invention also provides a method embodiment for realizing disk array cache partition management; its flow is shown in Fig. 3 and comprises the following steps:

101: the application service sends a data read/write request through its IO thread;

102, 103: the IO thread looks up valid data blocks in the cache partition; if the lookup hits, the following steps are executed; if the lookup misses, step 112 is executed;

104: the cache replacement algorithm is invoked to update the state of the cache to the data-useful state;

105: it is judged whether the read/write request is to read data from the back-end disk; if yes, the following steps are executed, otherwise step 109 is executed;

106: it is judged whether the number of free data blocks in the cache pool is below the lower threshold; if yes, step 110 is executed, otherwise the following steps are executed;

107: a cache partition whose number of data blocks is N is allocated;

108: the bottom-layer read/write interface is invoked to read the data blocks stored on the back-end disk into the allocated cache partition;

109: the data read from the cache partition are returned to the application service through the IO thread, and the process ends;

110: cache-space reclamation in the cache partitions is started by the system reclamation thread according to the reclamation policy;

111: the cache partition reclamation thread invokes the reclamation algorithm to reclaim the rarely used cache pages, and the flow returns to step 106;

112: a cache partition whose number of data blocks is N is allocated;

113: it is judged whether the number of free data blocks in the cache pool is below the lower threshold; if yes, the following steps are executed, otherwise step 108 is executed;

114: the cache space is reclaimed, and step 108 is executed.

That is, cache partition reclamation is started by the system reclamation thread according to the reclamation policy, and the cache partition reclamation thread reclaims the rarely used cache pages.
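The Fig. 3 read path above can be condensed into a short sketch. It collapses steps 104/110/111 into comments and a one-line stand-in, and the function signature, the `cache`/`backend` dictionaries and the block accounting are hypothetical simplifications of the patent's threads and bottom-layer interface.

```python
def read_request(key, cache, backend, free_blocks, low_limit):
    """Serve one read per the Fig. 3 flow; returns (data, free_blocks_remaining)."""
    if key in cache:                       # 102/103: lookup hits a valid block
        # 104: the replacement algorithm would mark the block 'useful' here
        return cache[key], free_blocks     # 109: serve from the cache partition
    # 112/113/114: lookup missed -- make room if the pool is low, then allocate
    if free_blocks < low_limit:
        free_blocks += 1                   # reclaim one rarely used page (sketch)
    free_blocks -= 1                       # 107/112: allocate one data block
    cache[key] = backend[key]              # 108: read the block from the back-end disk
    return cache[key], free_blocks         # 109: return data, process ends
```

A second request for the same key then takes the hit path and leaves the free-block count unchanged, which is the whole point of staging back-end blocks into the partition.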
Those skilled in the art, after understanding the content and principle of the present invention, may make various corrections and changes in form and detail to the method according to the present invention without departing from its principle and scope, but such corrections and changes based on the present invention still fall within the protection scope of its claims.

Claims (14)

1. A system for realizing disk array cache partition management, comprising one or more application service modules, a cache pool management device and a back-end storage device connected in sequence, wherein:

the application service modules are used to send data read/write requests to the cache pool management device and to receive the data returned by the cache pool management device;

the cache pool management device is used to partition and manage the cache space in the cache pool, setting up cache partitions for the application services; according to the read/write requests of the application service modules, it reads data corresponding to the application services from, and writes such data to, the back-end storage device through the allocated cache partitions;

the back-end storage device is used to store the data corresponding to the application services.
2. The system according to claim 1, characterized in that each said application service module comprises an application service IO thread, said cache pool management device comprises a cache partition module, a cache allocation module and a cache data access module connected in sequence, and said back-end storage device comprises a back-end disk array, wherein:

the application service IO thread is used to send said data read/write requests to the cache data access module and to provide the data returned by the cache data access module to the corresponding application service;

the cache partition module is used to set up, for each application service, a corresponding cache partition in the cache space of said cache pool, including the total capacity of the cache partition and the capacity of the unit data block within the partition;

the cache allocation module is used to allocate a cache partition with one or more data blocks according to a lookup result it receives, and to output a data read/write instruction or a data read instruction to the cache data access module;

the cache data access module is used to look up valid data blocks in the corresponding cache partition according to the received said data read/write request and output said lookup result to the cache allocation module; and, according to the data read/write instruction or data read instruction it receives, to write data read from the back-end disk array into the allocated cache partition and/or return data read from the corresponding cache partition to the application service IO thread.
3. The system according to claim 2, characterized in that,

if said lookup result indicates a data hit, said cache allocation module updates the data state of the cache partition to 'useful'; if in addition said read/write request is to read data from the back-end disk, or if said lookup result indicates a data miss, it allocates said cache partition and outputs said data read/write instruction to said cache data access module; if said lookup result indicates a data hit and said read/write request is to read data from said cache partition, it outputs said data read instruction to said cache data access module;

according to the data read/write instruction, said cache data access module writes the data read from said back-end disk array into the allocated said cache partition and returns the data read from that cache partition to said application service IO thread; or, according to said data read instruction, it returns the data read directly from that cache partition to said application service IO thread.
4. The system according to claim 2 or 3, characterized in that said cache pool management device also comprises a cache reclamation module connected with said cache partition module, wherein:

at set intervals, said cache partition module queries the number of free data blocks in the cache pool and, if it is below the lower threshold, outputs a cache reclamation command to the cache reclamation module;

the cache reclamation module is used to reclaim, according to said cache reclamation command, the data blocks whose state in a cache partition is 'useless' back into said cache pool and to update the state of the reclaimed data blocks to 'empty'.
5. The system according to claim 4, characterized in that:
the cache reclamation module starts, through a system reclamation thread and according to the cache reclamation policy, the corresponding cache partition reclamation thread; once started by the system reclamation thread, the cache partition reclamation thread invokes a reclamation algorithm to reclaim the data blocks in that cache partition whose state is 'useless';
the cache reclamation policy comprises one or both of a priority-based reclamation policy and a per-cache-partition lower-bound data block reclamation policy.
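The two policies named in claim 5 can be combined as in the following sketch. The function names and the exact trigger condition (visit partitions in priority order, reclaim only those whose empty-block count sits below their lower bound) are assumptions made for illustration, not the patent's implementation.

```python
def reclaim_partition(blocks):
    """Return 'useless' blocks to the pool by marking them 'empty'."""
    freed = 0
    for addr, (state, _) in list(blocks.items()):
        if state == "useless":
            blocks[addr] = ("empty", None)
            freed += 1
    return freed


def reclaim(partitions, priorities, lower_bounds):
    """partitions: name -> {addr: (state, data)}.

    Policy 1 (priority): visit partitions from lowest to highest priority.
    Policy 2 (lower bound): reclaim only where empty blocks < the bound.
    Returns a map of partition name -> number of blocks freed.
    """
    freed = {}
    for name in sorted(partitions, key=lambda n: priorities.get(n, 0)):
        blocks = partitions[name]
        empty = sum(1 for state, _ in blocks.values() if state == "empty")
        if empty < lower_bounds.get(name, 0):
            freed[name] = reclaim_partition(blocks)
    return freed
```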
6. A cache pool management device for implementing disk array cache partitioning, characterized by comprising a cache partition module, a cache allocation module and a cache data access module connected in sequence, wherein:
the cache partition module is configured to set up corresponding cache partitions for application services in the cache space of the cache pool;
the cache allocation module is configured to allocate cache partitions according to the input lookup result, and to output a data read/write instruction or data read instruction to the cache data access module;
the cache data access module is configured to search the corresponding cache partition for data blocks whose state is valid according to the data read/write request sent by an application service module, and to output the lookup result to the cache allocation module; and, according to the input data read/write instruction or data read instruction, to write data read from the back-end storage device into the allocated cache partition, and/or to return data read from the corresponding cache partition to the application service module.
7. The cache pool management device according to claim 6, characterized in that:
the cache partitions set up by the cache partition module are parameterized by the total capacity of the cache partition and the capacity of a unit data block within the cache partition;
if the lookup result is a data hit, the cache allocation module updates the data state of the cache partition to 'useful'; if the data read/write request is to read data from the back-end disk and the lookup result is a data miss, the cache allocation module allocates the cache partition with one or more data blocks and outputs the data read/write instruction to the cache data access module; or, if the lookup result is a data hit and the data read/write request is to read data from the cache partition, the cache allocation module outputs the data read instruction to the cache data access module;
the cache data access module, according to the data read/write instruction, writes the data read from the back-end storage device into the allocated cache partition and returns the data read from that cache partition to the application service module; or, according to the data read instruction, returns the data read directly from that cache partition to the application service module.
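Claims 7 and 11 parameterize a partition by only two numbers: its total capacity and the capacity of one unit data block. A minimal sketch under that assumption (the function name and the dictionary layout are illustrative):

```python
def make_partition(total_capacity, block_capacity):
    """Divide a partition's total capacity into empty unit data blocks."""
    if block_capacity <= 0 or total_capacity % block_capacity:
        raise ValueError("total capacity must be a positive multiple "
                         "of the unit block capacity")
    n_blocks = total_capacity // block_capacity
    return {"block_size": block_capacity,
            "blocks": ["empty"] * n_blocks}  # every block starts empty
```

For example, a 1 MiB partition with 256 KiB unit blocks yields four empty blocks; the block count is fully determined by the two capacities the claims name.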
8. The cache pool management device according to claim 6 or 7, characterized in that the cache pool management device further comprises a cache reclamation module connected to the cache partition module, wherein:
the cache partition module queries, at set intervals, the number of empty data blocks in the cache pool, and outputs a cache reclamation command to the cache reclamation module if that number is below a lower limit;
the cache reclamation module is configured to reclaim, according to the cache reclamation command and the cache reclamation policy, the data blocks in the cache partitions whose state is 'useless' back into the cache pool, and to update the state of the reclaimed data blocks to 'empty'; the cache reclamation policy comprises one or both of a priority-based reclamation policy and a per-cache-partition lower-bound data block reclamation policy.
9. A method for implementing disk array cache partition management, comprising:
a cache pool management device performing partition management on the cache space in a cache pool, and setting up cache partitions for application services.
10. The method according to claim 9, characterized in that it further comprises:
an application service module sending a data read/write request to the cache pool management device;
the cache pool management device reading, according to the received data read/write request and through the allocated cache partition, the data corresponding to the application service that is written into the back-end storage device.
11. The method according to claim 9 or 10, characterized in that:
the cache partitions set up by the cache pool management device for application services are parameterized by the total capacity of the cache partition and the capacity of a unit data block within the cache partition.
12. The method according to claim 11, characterized in that:
the data read/write request sent by the application service module to the cache pool management device comprises reading data from the back-end storage device and reading data from the cache partition;
the step in which the cache pool management device reads, according to the received data read/write request and through the allocated cache partition, the data corresponding to the application service that is written into the back-end storage device specifically comprises:
searching the corresponding cache partition for data blocks whose state is valid according to the received data read/write request, allocating the cache partition with one or more data blocks according to the lookup result, writing the data read from the back-end storage device into the allocated cache partition, and/or returning the data read from the corresponding cache partition to the application service module.
13. The method according to claim 12, characterized in that the step in which the cache pool management device allocates the cache partition with one or more data blocks according to the lookup result, writes the data read from the back-end storage device into the allocated cache partition, and/or returns the data read from the corresponding cache partition to the application service module specifically comprises:
if the lookup result is a data hit, updating the data state of the cache partition to 'useful'; and, if the data read/write request is to read data from the back-end disk and the lookup result is a data miss, allocating the cache partition with one or more data blocks, writing the data read from the back-end storage device into the allocated cache partition, and returning the data read from that cache partition to the application service module;
or, if the lookup result is a data hit and the data read/write request is to read data from the cache partition, returning the data read directly from that cache partition to the application service module.
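Claim 13's "cache partition with one or more data blocks" implies that a miss is served by allocating enough unit-size blocks to cover the request. A sketch under that reading (the function names and the free-list representation are assumptions):

```python
def blocks_needed(length, block_size):
    """Unit data blocks required to cover a request of `length` bytes."""
    return -(-length // block_size)  # ceiling division

def allocate_blocks(free_blocks, length, block_size):
    """Pop blocks from the partition's free list; None means the partition
    is exhausted and reclamation (as in claim 14) must run first."""
    n = blocks_needed(length, block_size)
    if n > len(free_blocks):
        return None
    return [free_blocks.pop() for _ in range(n)]
```

For example, a 10-byte request against 4-byte unit blocks takes three blocks from the free list; a further request against a near-empty list returns None instead of partially allocating.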
14. The method according to any one of claims 10, 12 and 13, characterized in that it further comprises:
the cache pool management device querying, at set intervals, the number of empty data blocks in the cache pool; if that number is below a lower limit, reclaiming, according to the cache reclamation policy, the data blocks in the cache partitions whose state is 'useless' back into the cache pool, and updating the state of the reclaimed data blocks to 'empty'; the cache reclamation policy comprises a priority-based reclamation policy and a per-cache-partition lower-bound data block reclamation policy.
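Claim 14's periodic check reduces to counting the pool's empty blocks each interval and comparing against a low-water mark. Here is a sketch with a callback standing in for the policy-driven reclamation; the periodic scheduling itself (e.g. a timer thread) is omitted, and all names are illustrative:

```python
def check_pool(block_states, low_limit, reclaim_all):
    """Trigger reclamation when empty blocks fall below the lower limit.

    block_states: iterable of per-block states ('useful'/'useless'/'empty').
    Returns True if reclamation was triggered this interval.
    """
    empty = sum(1 for state in block_states if state == "empty")
    if empty < low_limit:   # empty-block count under the low-water mark
        reclaim_all()       # reclaim 'useless' blocks back into the pool
        return True
    return False
```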
CN2011102056281A 2011-07-21 2011-07-21 System, device and method for realizing disk array cache partition management Pending CN102262512A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011102056281A CN102262512A (en) 2011-07-21 2011-07-21 System, device and method for realizing disk array cache partition management

Publications (1)

Publication Number Publication Date
CN102262512A true CN102262512A (en) 2011-11-30

Family

ID=45009152

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011102056281A Pending CN102262512A (en) 2011-07-21 2011-07-21 System, device and method for realizing disk array cache partition management

Country Status (1)

Country Link
CN (1) CN102262512A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103246616A (en) * 2013-05-24 2013-08-14 浪潮电子信息产业股份有限公司 Global shared cache replacement method for realizing long-short cycle access frequency
CN103279429A (en) * 2013-05-24 2013-09-04 浪潮电子信息产业股份有限公司 Application-aware distributed global shared cache partition method
CN103577336A (en) * 2013-10-23 2014-02-12 华为技术有限公司 Stored data processing method and device
CN105573682A (en) * 2016-02-25 2016-05-11 浪潮(北京)电子信息产业有限公司 SAN storage system and data read-write method thereof
CN107220186A (en) * 2017-07-03 2017-09-29 福建新和兴信息技术有限公司 The buffer memory management method and terminal of business object in android system
CN107241444A (en) * 2017-07-31 2017-10-10 郑州云海信息技术有限公司 A kind of distributed caching data management system, method and device
CN107562367A (en) * 2016-07-01 2018-01-09 阿里巴巴集团控股有限公司 Method and device based on software implementation storage system read-write data
CN107783732A (en) * 2017-10-30 2018-03-09 郑州云海信息技术有限公司 A kind of data read-write method, system, equipment and computer-readable storage medium
CN108241538A (en) * 2017-12-28 2018-07-03 深圳忆联信息系统有限公司 The management method and solid state disk of RAID resources in a kind of solid state disk
CN113064553A (en) * 2021-04-02 2021-07-02 重庆紫光华山智安科技有限公司 Data storage method, device, equipment and medium
WO2023060943A1 (en) * 2021-10-14 2023-04-20 华为技术有限公司 Traffic control method and apparatus
WO2024088150A1 (en) * 2022-10-25 2024-05-02 中兴通讯股份有限公司 Data storage method and apparatus based on open-channel solid state drive, device, medium, and product

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101105773A (en) * 2007-08-20 2008-01-16 杭州华三通信技术有限公司 Method and device for implementing data storage using cache
CN101609432A (en) * 2009-07-13 2009-12-23 中国科学院计算技术研究所 Shared buffer memory management system and method
CN102043732A (en) * 2010-12-30 2011-05-04 成都市华为赛门铁克科技有限公司 Cache allocation method and device


Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103246616B (en) * 2013-05-24 2017-09-26 浪潮电子信息产业股份有限公司 A kind of globally shared buffer replacing method of access frequency within long and short cycle
CN103279429A (en) * 2013-05-24 2013-09-04 浪潮电子信息产业股份有限公司 Application-aware distributed global shared cache partition method
CN103246616A (en) * 2013-05-24 2013-08-14 浪潮电子信息产业股份有限公司 Global shared cache replacement method for realizing long-short cycle access frequency
CN103577336A (en) * 2013-10-23 2014-02-12 华为技术有限公司 Stored data processing method and device
WO2015058493A1 (en) * 2013-10-23 2015-04-30 华为技术有限公司 Storage data processing method and device
CN103577336B (en) * 2013-10-23 2017-03-08 华为技术有限公司 A kind of stored data processing method and device
CN105573682B (en) * 2016-02-25 2018-10-30 浪潮(北京)电子信息产业有限公司 A kind of SAN storage system and its data read-write method
CN105573682A (en) * 2016-02-25 2016-05-11 浪潮(北京)电子信息产业有限公司 SAN storage system and data read-write method thereof
CN107562367A (en) * 2016-07-01 2018-01-09 阿里巴巴集团控股有限公司 Method and device based on software implementation storage system read-write data
CN107562367B (en) * 2016-07-01 2021-04-02 阿里巴巴集团控股有限公司 Method and device for reading and writing data based on software storage system
CN107220186A (en) * 2017-07-03 2017-09-29 福建新和兴信息技术有限公司 The buffer memory management method and terminal of business object in android system
CN107241444A (en) * 2017-07-31 2017-10-10 郑州云海信息技术有限公司 A kind of distributed caching data management system, method and device
CN107241444B (en) * 2017-07-31 2020-07-07 郑州云海信息技术有限公司 Distributed cache data management system, method and device
CN107783732A (en) * 2017-10-30 2018-03-09 郑州云海信息技术有限公司 A kind of data read-write method, system, equipment and computer-readable storage medium
CN108241538A (en) * 2017-12-28 2018-07-03 深圳忆联信息系统有限公司 The management method and solid state disk of RAID resources in a kind of solid state disk
CN113064553A (en) * 2021-04-02 2021-07-02 重庆紫光华山智安科技有限公司 Data storage method, device, equipment and medium
WO2023060943A1 (en) * 2021-10-14 2023-04-20 华为技术有限公司 Traffic control method and apparatus
WO2024088150A1 (en) * 2022-10-25 2024-05-02 中兴通讯股份有限公司 Data storage method and apparatus based on open-channel solid state drive, device, medium, and product

Similar Documents

Publication Publication Date Title
CN102262512A (en) System, device and method for realizing disk array cache partition management
CN110825748B (en) High-performance and easily-expandable key value storage method by utilizing differentiated indexing mechanism
US9792227B2 (en) Heterogeneous unified memory
CN103885728B (en) A kind of disk buffering system based on solid-state disk
CN104317742B (en) Automatic thin-provisioning method for optimizing space management
CN102331986B (en) Database cache management method and database server
CN103678169B (en) A kind of method and system of efficiency utilization solid-state disk buffer memory
CN102063406B (en) Network shared Cache for multi-core processor and directory control method thereof
CN105242881A (en) Distributed storage system and data read-write method for same
CN102087586B (en) Data processing method and device
US20120117328A1 (en) Managing a Storage Cache Utilizing Externally Assigned Cache Priority Tags
CN104317736B (en) A kind of distributed file system multi-level buffer implementation method
CN105138292A (en) Disk data reading method
CN103577339A (en) Method and system for storing data
WO2019085769A1 (en) Tiered data storage and tiered query method and apparatus
CN102446139B (en) Method and device for data storage
CN102339283A (en) Access control method for cluster file system and cluster node
CN101997918A (en) Method for allocating mass storage resources according to needs in heterogeneous SAN (Storage Area Network) environment
CN102651009A (en) Method and equipment for retrieving data in storage system
CN104765575A (en) Information storage processing method
CN103037004A (en) Implement method and device of cloud storage system operation
CN114546296B (en) ZNS solid state disk-based full flash memory system and address mapping method
CN101673271A (en) Distributed file system and file sharding method thereof
CN103838853A (en) Mixed file system based on different storage media
US20190004968A1 (en) Cache management method, storage system and computer program product

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20111130