CN103761052A - Method for managing cache and storage device - Google Patents

Method for managing cache and storage device

Info

Publication number: CN103761052A
Application number: CN201310740472.6A
Authority: CN (China)
Prior art keywords: data, target data, queue, cache, storage device
Legal status: Granted; currently Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Other languages: Chinese (zh)
Other versions: CN103761052B (en)
Inventor: 龚涛
Current Assignee: Huawei Technologies Co Ltd (the listed assignees may be inaccurate)
Original Assignee: Huawei Technologies Co Ltd
Events: application filed by Huawei Technologies Co Ltd; publication of CN103761052A; application granted; publication of CN103761052B

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

An embodiment of the invention discloses a method for managing a cache, and a storage device. The method is applied in the storage device, and the storage device comprises the cache. The method comprises: the storage device determining whether target data kept in the cache belong to a sequential stream; when the storage device determines that the target data belong to a sequential stream, writing the target data into a data elimination queue in the cache; and the storage device evicting, on a first-in-first-out basis, the target data written into the data elimination queue. With the method and the storage device, sequential-stream data in the cache can be evicted quickly, ensuring efficient utilization of the cache.

Description

Method for managing a cache, and storage device
Technical field
The present invention relates to the field of data storage, and in particular to a method for managing a cache and to a storage device.
Background art
A cache is a level of memory that sits between main storage (for example, a hard disk) and the CPU. It is built from static RAM (SRAM) chips; its capacity is relatively small, but its speed is much higher than that of main storage and close to that of the CPU. The theoretical foundation of caching is the principle of locality, which comes in two forms: temporal locality and spatial locality. Temporal locality means that if data are accessed at a time point T0, then during a period starting at T0 the probability that the same data are accessed again is higher than it was before T0. Spatial locality means that if data are accessed at a time point T0, then during a period starting at T0 the probability that the data around them are accessed is higher than it was before T0.
Because of the principle of locality, access history can be used to predict future I/O references: the data on the hard disk that are expected to be accessed relatively frequently are promoted into a higher-performance storage medium and periodically synchronized back to the low-speed device (the hard disk), thereby improving the performance of the whole system.
At present, a storage device may manage the data in its cache with the round-robin CLOCK algorithm. In the CLOCK algorithm, the data in the cache are logically organized into a ring data structure, and a pointer, called the CLOCK pointer, cycles through the cached data in the ring at a certain speed, clockwise or counterclockwise. In addition, each piece of data in the cache carries a status attribute parameter, recency. When data are accessed, their recency value is set to 1. When the CLOCK pointer rotates to data whose recency value is 1, the recency value is changed to 0; when the pointer rotates to data whose recency value is 0, the data are evicted from the cache.
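For reference, the prior-art CLOCK behaviour just described can be sketched in a few lines of Python. This is a minimal illustration under the description above; the names (ClockCache, CacheEntry) are ours, not the patent's.

```python
from dataclasses import dataclass

@dataclass
class CacheEntry:
    key: int
    recency: int = 0  # set to 1 on access, cleared by the CLOCK pointer

class ClockCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.ring = []   # cached entries, logically organized as a ring
        self.hand = 0    # the CLOCK pointer

    def access(self, key):
        for entry in self.ring:
            if entry.key == key:
                entry.recency = 1          # accessed data gets recency 1
                return entry
        return self._insert(key)

    def _insert(self, key):
        if len(self.ring) >= self.capacity:
            self._evict_one()
        entry = CacheEntry(key, recency=1)
        self.ring.append(entry)
        return entry

    def _evict_one(self):
        # Rotate the pointer: recency 1 is demoted to 0, recency 0 is evicted.
        while True:
            entry = self.ring[self.hand]
            if entry.recency == 1:
                entry.recency = 0
                self.hand = (self.hand + 1) % len(self.ring)
            else:
                self.ring.pop(self.hand)
                self.hand %= max(len(self.ring), 1)
                return

cache = ClockCache(capacity=2)
cache.access(1); cache.access(2); cache.access(3)   # third insert evicts one
assert len(cache.ring) == 2
```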
The prior art above thus provides a way of managing the data in a cache. However, in the prior art the time that data stay in the cache is determined by the rotation frequency of the CLOCK pointer. When the accessed data are data in a sequential stream (a sequential stream comprises two or more pieces of data whose addresses on the hard disk are consecutive), the stream's data can quickly occupy the whole cache, wasting cache space and lowering the utilization of the cache.
Summary of the invention
The embodiments of the present invention provide a method for managing a cache, and a storage device, which can quickly and effectively evict sequential-stream data from the cache, ensuring efficient utilization of the cache.
A first aspect of the present invention provides a method for managing a cache. The method is applied in a storage device, the storage device comprises a cache, and the method comprises:
the storage device determining whether target data kept in the cache belong to a sequential stream;
when the storage device determines that the target data belong to a sequential stream, writing the target data into a data elimination queue of the cache;
the storage device evicting, on a first-in-first-out basis, the target data written into the data elimination queue of the cache.
With reference to the first aspect, in a first feasible embodiment, the method may further comprise:
when the storage device determines that the target data do not belong to a sequential stream, and the value of the current status attribute parameter of the target data is a first value when the rotating pointer cycles to the target data, writing the target data into a data elimination candidate queue and, once the data elimination queue is empty, evicting the target data written into the data elimination candidate queue of the cache on a first-in-first-out basis.
With reference to the first aspect, in a second feasible embodiment, the method may further comprise:
when the storage device determines that the target data do not belong to a sequential stream, and the value of the current status attribute parameter of the target data is greater than the first value when the rotating pointer cycles to the target data, modifying the value of the current status attribute parameter of the target data according to a preset decrement rule.
With reference to the first feasible embodiment of the first aspect, in a third feasible embodiment, the method may further comprise:
when the storage device receives an access request for the target data, if the target data are in the data elimination queue or the data elimination candidate queue, fetching the target data from the cache, deleting the target data from the data elimination queue or the data elimination candidate queue, and modifying the value of the current status attribute parameter of the target data according to an increment rule that is the inverse of the decrement rule.
With reference to the second feasible embodiment of the first aspect, in a fourth feasible embodiment, the method may further comprise:
when the storage device receives an access request for the target data, fetching the target data from the cache, and modifying the value of the current status attribute parameter of the target data according to an increment rule that is the inverse of the decrement rule.
A second aspect of the present invention provides a storage device comprising a cache, the storage device further comprising:
a determination module, configured to determine whether target data kept in the cache belong to a sequential stream;
a first processing module, configured to: when the determination module determines that the target data belong to a sequential stream, write the target data into a data elimination queue of the cache, and evict, on a first-in-first-out basis, the target data written into the data elimination queue of the cache.
With reference to the second aspect, in a first feasible embodiment, the storage device may further comprise:
a second processing module, configured to: when the determination module determines that the target data do not belong to a sequential stream, and the value of the current status attribute parameter of the target data is a first value when the rotating pointer cycles to the target data, write the target data into a data elimination candidate queue and, once the data elimination queue is empty, evict the target data written into the data elimination candidate queue of the cache on a first-in-first-out basis.
With reference to the second aspect, in a second feasible embodiment, the storage device may further comprise:
a third processing module, configured to: when the determination module determines that the target data do not belong to a sequential stream, and the value of the current status attribute parameter of the target data is greater than the first value when the rotating pointer cycles to the target data, modify the value of the current status attribute parameter of the target data according to a preset decrement rule.
With reference to the first feasible embodiment of the second aspect, in a third feasible embodiment, the second processing module is further configured to: when the storage device receives an access request for the target data, if the target data are in the data elimination queue or the data elimination candidate queue, fetch the target data from the cache, delete the target data from the data elimination queue or the data elimination candidate queue, and modify the value of the current status attribute parameter of the target data according to an increment rule that is the inverse of the decrement rule.
With reference to the second feasible embodiment of the second aspect, in a fourth feasible embodiment, the third processing module is further configured to: when the storage device receives an access request for the target data, fetch the target data from the cache and modify the value of the current status attribute parameter of the target data according to an increment rule that is the inverse of the decrement rule.
Therefore, in some feasible embodiments of the present invention, the storage device determines whether target data kept in the cache belong to a sequential stream; when the storage device determines that the target data belong to a sequential stream, it writes the target data into a data elimination queue of the cache; and the storage device evicts, on a first-in-first-out basis, the target data written into the data elimination queue of the cache. By managing the eviction of sequential-stream target data through the data elimination queue, the embodiments of the present invention can effectively evict the target data belonging to sequential streams from the cache and ensure efficient utilization of the cache.
Brief description of the drawings
Fig. 1 is a schematic flowchart of an embodiment of the method for managing a cache according to the present invention.
Fig. 2 is a schematic flowchart of another embodiment of the method for managing a cache according to the present invention.
Fig. 3 is a schematic structural diagram of an embodiment of the storage device according to the present invention.
Fig. 4 is a schematic structural diagram of another embodiment of the storage device according to the present invention.
Fig. 5 is a schematic structural diagram of yet another embodiment of the storage device according to the present invention.
Detailed description
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of an embodiment of the method for managing a cache according to the present invention. In a specific implementation, the method of this embodiment can be applied in a storage device, and the storage device comprises a cache. As shown in Fig. 1, the method of this embodiment may comprise:
Step S110: the storage device determines whether target data kept in the cache belong to a sequential stream.
In a specific implementation, a sequential stream in the embodiments of the present invention may comprise two or more pieces of data whose addresses in a low-speed device such as a hard disk are consecutive. There are many ways of detecting a sequential stream, which are not enumerated here. Taking a hard disk as an example: when the logical block addresses of four consecutively accessed pieces of data increase one after another, the four pieces of data may be regarded as forming a sequential stream, and each of the four is data in that stream. A minimal sketch of such a check follows.
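The following Python sketch illustrates this example check only. The window length of four and the helper name is_sequential_stream are illustrative assumptions; the patent does not fix a detection method.

```python
STREAM_LENGTH = 4  # illustrative; the patent's example uses four accesses

def is_sequential_stream(recent_addresses):
    # recent_addresses: logical block addresses of recent accesses, oldest
    # first. True if the last STREAM_LENGTH addresses are consecutive.
    if len(recent_addresses) < STREAM_LENGTH:
        return False
    window = recent_addresses[-STREAM_LENGTH:]
    return all(b == a + 1 for a, b in zip(window, window[1:]))

assert is_sequential_stream([7, 1, 2, 3, 4])    # addresses 1, 2, 3, 4
assert not is_sequential_stream([1, 2, 4, 8])   # not consecutive
```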
Step S111: when the storage device determines that the target data belong to a sequential stream, it writes the target data into a data elimination queue of the cache.
In a specific implementation, at step S111, when the target data are written into the data elimination queue of the cache, if other data of the sequential stream have not yet been written into the data elimination queue, this embodiment may also write the other data of the target data's sequential stream into the data elimination queue. The order of the stream's data within the data elimination queue may be kept consistent with the order of those data in the stream.
Suppose the addresses of the data of one sequential stream on the hard disk are 1, 2, 3, and 4. When the storage device receives an access to the data at address 4 and writes those data into the cache, it checks whether the data at addresses 1, 2, and 3 on the hard disk are all in the cache. If they all are, the data at address 4 can be identified as data in a sequential stream, so the storage device can write the data at address 4 into the data elimination queue and, in addition, also write the data at addresses 1-3 into the data elimination queue.
Of course, in a specific implementation, when the target data are written into the data elimination queue of the cache, the other data of the target data's sequential stream may already have been written into the data elimination queue; in that case only the target data are written into the data elimination queue.
Step S112: the storage device evicts, on a first-in-first-out basis, the target data written into the data elimination queue of the cache.
In a specific implementation, the data elimination queue of this embodiment may be a first-in-first-out (FIFO) queue, so that when data are evicted from it, the data that entered the queue first are evicted first, as sketched below. Of course, the data elimination queue of this embodiment may also be another form of queue and is not limited to a FIFO.
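The following sketch puts steps S111 and S112 together for the address 1-4 example above. It assumes the cache is a simple dict and that stream members already queued are skipped, per the preceding paragraphs; the helper names are ours.

```python
from collections import deque

elimination_queue = deque()   # the data elimination queue (FIFO)
queued = set()

def enqueue_stream(stream_addresses):
    # Step S111: queue the stream's members in stream order, skipping
    # any member that is already in the queue.
    for addr in stream_addresses:
        if addr not in queued:
            elimination_queue.append(addr)
            queued.add(addr)

def evict_one(cache):
    # Step S112: first in, first out.
    if elimination_queue:
        victim = elimination_queue.popleft()
        queued.discard(victim)
        cache.pop(victim, None)

cache = {1: b'..', 2: b'..', 3: b'..', 4: b'..'}
enqueue_stream([1, 2, 3, 4])
evict_one(cache)   # the data at address 1 leave the cache first
```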
Therefore, in the above embodiment of the present invention, the eviction of sequential-stream data is completed by a separate data elimination queue, so the sequential-stream data in the cache can be evicted quickly and effectively, ensuring efficient utilization of the cache.
Fig. 2 is a schematic flowchart of another embodiment of the method for managing a cache according to the present invention. As shown in Fig. 2, it may comprise:
Step S210: the storage device determines whether target data kept in the cache belong to a sequential stream.
In a specific implementation, step S210 may be identical to step S110 and is not repeated here.
Step S211: when the storage device determines that the target data belong to a sequential stream, it writes the target data into a data elimination queue of the cache.
In a specific implementation, step S211 may be identical to step S111 and is not repeated here.
In a specific implementation, after the target data are written into the data elimination queue of the cache, the value of the current status attribute parameter of the target data (abbreviated recency1) may also be set to a first value. In a specific implementation, the embodiments of the present invention can use the value of the current status attribute parameter to indicate whether data are about to be evicted. For example, when the recency1 value of the target data is the first value (the first value may be, say, '0'), it indicates that the target data will soon be evicted from the cache.
Step S212: the storage device evicts, on a first-in-first-out basis, the target data written into the data elimination queue of the cache.
In a specific implementation, step S212 may be identical to step S112 and is not repeated here.
Step S221: when the storage device determines that the target data do not belong to a sequential stream, and the value of the current status attribute parameter of the target data is the first value when the rotating pointer cycles to the target data, the storage device writes the target data into a data elimination candidate queue and, once the data elimination queue is empty, evicts the target data written into the data elimination candidate queue of the cache on a first-in-first-out basis.
Step S222: when the storage device receives an access request for the target data, if the target data are in the data elimination queue or the data elimination candidate queue, the storage device fetches the target data from the cache, deletes the target data from the data elimination queue or the data elimination candidate queue, and modifies the value of the current status attribute parameter of the target data according to an increment rule that is the inverse of the decrement rule.
Step S231: when the storage device determines that the target data do not belong to a sequential stream, and the value of the current status attribute parameter of the target data is greater than the first value when the rotating pointer cycles to the target data, the storage device modifies the value of the current status attribute parameter of the target data according to a preset decrement rule.
Step S232: when the storage device receives an access request for the target data, the storage device fetches the target data from the cache and modifies the value of the current status attribute parameter of the target data according to an increment rule that is the inverse of the decrement rule.
In a specific implementation, at step S221 or step S231, the embodiment may cycle through the data in the cache with a rotating pointer and, according to the value of the current status attribute parameter of the target data that the rotating pointer reaches, decide whether the target data in the cache need to be evicted and handle them accordingly. For example, at step S221 or step S231, a CLOCK pointer may cycle through the data in the cache at a predetermined speed and in a predetermined direction (say, clockwise or counterclockwise). When the value of the current status attribute parameter of the target data is the first value, the target data are written into the data elimination candidate queue and, once the data elimination queue is empty, the target data written into the data elimination candidate queue of the cache are evicted on a first-in-first-out basis. As mentioned above, when the recency1 value of the target data is the first value (the first value may be, say, '0'), it indicates that the target data will soon be evicted from the cache; and when the recency1 value of the target data is greater than the first value, the value is modified according to the preset decrement rule. In a specific implementation, a recency1 value greater than the first value (say, '1') indicates that the target data will not be evicted from the cache before the rotating pointer next cycles around to the target data.
The preset decrement rule for modifying the value of the current status attribute parameter of the target data may be to decrement the value by n, where n is a positive integer. In this embodiment, when the recency1 value of the target data is greater than the first value (recency1 may be '1', '2', or another value), the effect of modifying recency1 according to the preset decrement rule is that recency1 decreases gradually as the number of pointer cycles increases, until it reaches the first value; at that point the target data are finally written into the elimination candidate queue through step S221 so that they can be evicted. A sketch of this sweep follows.
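The sketch below illustrates steps S221 and S231 under stated assumptions: the first value is 0, the decrement rule subtracts n = 1, and the Entry type is ours.

```python
from collections import deque
from dataclasses import dataclass

FIRST_VALUE = 0   # the "first value"; '0' follows the patent's example
N = 1             # illustrative decrement step n

@dataclass
class Entry:
    key: int
    recency1: int = 0

def on_pointer_reaches(entry, candidate_queue):
    # Sweep action when the rotating pointer cycles to a non-stream entry.
    if entry.recency1 == FIRST_VALUE:
        candidate_queue.append(entry)                           # step S221
    else:
        entry.recency1 = max(FIRST_VALUE, entry.recency1 - N)   # step S231

def drain_candidates(elimination_queue, candidate_queue, cache):
    # Candidates are evicted only once the data elimination queue is
    # empty, again first in, first out.
    while not elimination_queue and candidate_queue:
        victim = candidate_queue.popleft()
        cache.pop(victim.key, None)

cache = {1: Entry(1, 2), 2: Entry(2, 0)}
candidates = deque()
on_pointer_reaches(cache[1], candidates)   # recency1 2 -> 1, stays cached
on_pointer_reaches(cache[2], candidates)   # recency1 == 0 -> candidate
drain_candidates(deque(), candidates, cache)   # entry 2 is evicted
```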
It can thus be seen that the CLOCK pointer of this embodiment is no longer used to evict data directly, as in the prior art; it merely selects some data that are about to be evicted and places them into the elimination candidate queue, and the actual eviction of the data is carried out by the elimination candidate queue according to the predetermined first-in-first-out rule. This can effectively level the load peaks on the storage array behind the cache.
In a specific implementation, step S222 fully considers how to handle data that are about to be evicted but become hot, accessed data again. Specifically, at step S222, when the target data in the data elimination queue or the data elimination candidate queue are accessed, the target data are deleted from the data elimination queue or the data elimination candidate queue, and the value of the current status attribute parameter of the target data is modified according to an increment rule that is the inverse of the decrement rule. In this way, target data that have become hot can be prevented from being evicted.
In step S222 or step S232, modifying the value of the current status attribute parameter of the target data according to the increment rule that is the inverse of the decrement rule may specifically be incrementing the value by n. In this embodiment, the effect of modifying the recency1 value of the target data according to the preset increment rule is that recency1 grows gradually as the number of accesses increases. Thus, frequently accessed target data will not be written into the elimination candidate queue in the short term, and hot, accessed data are prevented from being evicted; a sketch of this access path follows.
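The sketch below illustrates the access path of steps S222 and S232, assuming the same illustrative increment step n = 1 and Entry type as in the sweep sketch above.

```python
from collections import deque
from dataclasses import dataclass

N = 1  # illustrative increment step n, the inverse of the decrement rule

@dataclass
class Entry:
    key: int
    recency1: int = 0

def on_access(entry, elimination_queue, candidate_queue, cache):
    # Step S222: if the entry was staged for eviction, pull it back.
    for q in (elimination_queue, candidate_queue):
        try:
            q.remove(entry)
        except ValueError:
            pass
    # Steps S222/S232: raise recency1 so the entry survives more cycles.
    entry.recency1 += N
    return cache[entry.key]   # the data are fetched from the cache

e = Entry(key=9, recency1=0)
cache = {9: b'payload'}
candidates = deque([e])
print(on_access(e, deque(), candidates, cache))   # b'payload'; e pulled back
```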
In a specific implementation, in other embodiments, the method for managing a cache of the present invention may comprise only the S210-S221-S222 branch of Fig. 2; alternatively, it may comprise only the S210-S231-S232 branch of Fig. 2.
In a specific implementation, the method of this embodiment may further comprise: recording the previous historical value of the current status attribute parameter recency1 in a historical status attribute parameter (abbreviated recency2). In this way, the embodiment records the access state of the data in the cache with two status attribute parameters, which makes the trend of a piece of data's accesses visible. For example, suppose that at time T0 data A has a recency1 value of 1 and a recency2 value of 0, and at time T1 data A is accessed again, so the recency1 value becomes 2 and the recency2 value follows it to become 1. From the change of the recency1 and recency2 values between T0 and T1 it can be seen that the access frequency of data A shows a rising trend from T0 to T1. A sketch that combines this history parameter with the cold pointer appears after the cold-pointer description below.
In a specific implementation, on the premise that the embodiment comprises both the historical status attribute parameter recency2 and the current status attribute parameter recency1, the method of this embodiment may further comprise:
after the rotating pointer (for example, the CLOCK pointer) has consecutively scanned X pieces of data in the cache, if among the X pieces of data there are Y pieces whose historical status attribute parameter recency2 is greater than their current status attribute parameter recency1, creating a cold pointer that points to the first of the X pieces of data;
where X and Y are positive integers.
The rotating pointer can then move directly to the position pointed to by the cold pointer.
In this way, the embodiment uses the cold pointer to change the position indicated by the rotating pointer, which makes it easy for the rotating pointer to jump to a rarely accessed data region and move those data quickly into the elimination candidate queue for eviction. This can increase the eviction speed of the cached data and improve the performance of the whole system.
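The sketch below illustrates the history parameter and the cold pointer together. The values of X and Y are illustrative assumptions; the patent only requires them to be positive integers.

```python
from dataclasses import dataclass

X = 8   # length of the consecutively scanned window (illustrative)
Y = 6   # how many entries must look "cooling" to trigger the cold pointer

@dataclass
class Entry:
    key: int
    recency1: int = 0
    recency2: int = 0   # previous historical value of recency1

def record_history(entry, new_recency1):
    # Keep recency2 as the last value of recency1 before updating it.
    entry.recency2 = entry.recency1
    entry.recency1 = new_recency1

def cold_pointer_target(window):
    # window: the last X entries the rotating pointer scanned, in order.
    # recency2 > recency1 means the entry has been cooling down (demoted
    # without being re-accessed). Returns the entry the rotating pointer
    # should jump to, or None.
    if len(window) < X:
        return None
    cooling = sum(1 for e in window if e.recency2 > e.recency1)
    if cooling >= Y:
        return window[0]   # cold pointer: first of the X scanned entries
    return None

window = [Entry(i, recency1=0, recency2=1) for i in range(X)]
assert cold_pointer_target(window) is window[0]   # whole window is cooling
```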
In a specific implementation, the method of this embodiment may further comprise:
when the amount of data in the data elimination queue plus the amount of data in the elimination candidate queue is less than a first threshold, speeding up the rate at which the rotating pointer (for example, the CLOCK pointer) traverses the data in the cache;
when the amount of data in the data elimination queue plus the amount of data in the elimination candidate queue is greater than a second threshold, adjusting the rate at which the rotating pointer (for example, the CLOCK pointer) traverses the data in the cache back to the predetermined speed;
where the second threshold is greater than the first threshold. In a specific implementation, the first threshold and the second threshold may be absolute values or percentages; when percentages are used, the percentage is first multiplied by the total amount of data in the cache, and the result is then compared with the sum of the amount of data in the data elimination queue and the amount of data in the elimination candidate queue.
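A minimal sketch of this speed control follows, assuming absolute thresholds; the speeds and threshold values are illustrative, not fixed by the patent.

```python
PREDETERMINED_SPEED = 100   # entries scanned per tick (illustrative)
FAST_SPEED = 400            # accelerated sweep (illustrative)
FIRST_THRESHOLD = 64        # illustrative absolute threshold
SECOND_THRESHOLD = 512      # must be greater than the first threshold

def adjust_pointer_speed(elimination_len, candidate_len, current_speed):
    staged = elimination_len + candidate_len
    if staged < FIRST_THRESHOLD:
        return FAST_SPEED            # too few staged victims: sweep faster
    if staged > SECOND_THRESHOLD:
        return PREDETERMINED_SPEED   # enough staged: back to normal speed
    return current_speed             # between the thresholds: unchanged

# With percentage thresholds, each percentage would first be multiplied
# by the total number of entries in the cache before the comparison.
```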
In this way, by controlling the moving speed of the rotating pointer, the embodiment keeps the amount of relatively low-value data in the elimination candidate queue and the data elimination queue at a reasonable level, thereby effectively absorbing bursts of system input and output. Correspondingly, the embodiments of the present invention provide embodiments of a storage device that can be used to implement the method for managing a cache of the present invention.
Fig. 3 is a schematic structural diagram of an embodiment of the storage device according to the present invention. As shown in Fig. 3, it may comprise a cache 30, a determination module 31, and a first processing module 32, where:
the cache 30 is configured to cache data;
the determination module 31 is configured to determine whether target data kept in the cache 30 belong to a sequential stream;
the first processing module 32 is configured to: when the determination module 31 determines that the target data belong to a sequential stream, write the target data into a data elimination queue of the cache 30, and evict, on a first-in-first-out basis, the target data written into the data elimination queue of the cache 30.
In a specific implementation, a sequential stream in the embodiments of the present invention may comprise two or more pieces of data whose addresses in a low-speed device such as a hard disk are consecutive. There are many ways of detecting a sequential stream, which are not enumerated here. Taking a hard disk as an example: when the logical block addresses of four consecutively accessed pieces of data increase one after another, the four pieces of data may be regarded as forming a sequential stream, and each of the four is data in that stream.
In a specific implementation, when the first processing module 32 writes the target data into the data elimination queue of the cache 30, if other data of the sequential stream have not yet been written into the data elimination queue, the first processing module 32 of this embodiment may also be configured to write the other data of the target data's sequential stream into the data elimination queue. The order of the stream's data within the data elimination queue may be kept consistent with the order of those data in the stream.
Suppose the addresses of the data of one sequential stream on the hard disk are 1, 2, 3, and 4. When the storage device receives an access to the data at address 4 and writes those data into the cache, it checks whether the data at addresses 1, 2, and 3 on the hard disk are all in the cache. If they all are, the determination module 31 can determine that the data at address 4 are data in a sequential stream, so the first processing module 32 can write the data at address 4 into the data elimination queue and, in addition, also write the data at addresses 1-3 into the data elimination queue.
Of course, in a specific implementation, when the target data are written into the data elimination queue of the cache, the other data of the target data's sequential stream may already have been written into the data elimination queue; in that case only the target data are written into the data elimination queue.
In a specific implementation, the data elimination queue of this embodiment may be a first-in-first-out (FIFO) queue, so that when data are evicted from it, the data that entered the queue first are evicted first. Of course, the data elimination queue of this embodiment may also be another form of queue and is not limited to a FIFO.
Therefore, in the above embodiment of the present invention, the eviction of sequential-stream data is completed by a separate data elimination queue, so the sequential-stream data in the cache can be evicted quickly and effectively, ensuring efficient utilization of the cache.
Fig. 4 is a schematic structural diagram of another embodiment of the storage device according to the present invention. As shown in Fig. 4, it may comprise a cache 40, a determination module 41, a first processing module 42, a second processing module 43, and a third processing module 44, where:
the cache 40 may be identical to the cache 30 in Fig. 3 and is not described again here;
the determination module 41 may be identical to the determination module 31 in Fig. 3 and is not described again here;
the first processing module 42 may be identical to the first processing module 32 in Fig. 3 and is not described again here.
In a specific implementation, after writing the target data into the data elimination queue of the cache, the first processing module 42 of this embodiment may also set the value of the current status attribute parameter of the target data (abbreviated recency1) to a first value. In a specific implementation, the embodiments of the present invention can use the value of the current status attribute parameter to indicate whether data are about to be evicted. For example, when the recency1 value of the target data is the first value (the first value may be, say, '0'), it indicates that the target data will soon be evicted from the cache.
The second processing module 43 is configured to: when the determination module 41 determines that the target data do not belong to a sequential stream, and the value of the current status attribute parameter of the target data is the first value when the rotating pointer cycles to the target data, write the target data into a data elimination candidate queue and, once the data elimination queue is empty, evict the target data written into the data elimination candidate queue of the cache 40 on a first-in-first-out basis; and, when the storage device receives an access request for the target data, if the target data are in the data elimination queue or the data elimination candidate queue, fetch the target data from the cache 40, delete the target data from the data elimination queue or the data elimination candidate queue, and modify the value of the current status attribute parameter of the target data according to an increment rule that is the inverse of the decrement rule.
The third processing module 44 is configured to: when the determination module 41 determines that the target data do not belong to a sequential stream, and the value of the current status attribute parameter of the target data is greater than the first value when the rotating pointer cycles to the target data, modify the value of the current status attribute parameter of the target data according to a preset decrement rule; and, when the storage device receives an access request for the target data, fetch the target data from the cache and modify the value of the current status attribute parameter of the target data according to an increment rule that is the inverse of the decrement rule.
In a specific implementation, the second processing module 43 or the third processing module 44 may be configured to cycle through the data in the cache with a rotating pointer and, according to the value of the current status attribute parameter of the target data that the rotating pointer reaches, decide whether the target data in the cache need to be evicted and handle them accordingly. For example, the second processing module 43 or the third processing module 44 may be configured to have a CLOCK pointer cycle through the data in the cache at a predetermined speed and in a predetermined direction (say, clockwise or counterclockwise). When the value of the current status attribute parameter of the target data is the first value, the target data are written into the data elimination candidate queue and, once the data elimination queue is empty, the target data written into the data elimination candidate queue of the cache are evicted on a first-in-first-out basis. As mentioned above, when the recency1 value of the target data is the first value (the first value may be, say, '0'), it indicates that the target data will soon be evicted from the cache; and when the recency1 value of the target data is greater than the first value, the value is modified according to the preset decrement rule. In a specific implementation, a recency1 value greater than the first value (say, '1') indicates that the target data will not be evicted from the cache before the rotating pointer next cycles around to the target data.
Here, the preset decrement rule for modifying the value of the current status attribute parameter of the target data may be to decrement the value by n, where n is a positive integer. In this embodiment, when the recency1 value of the target data is greater than the first value (recency1 may be '1', '2', or another value), the effect of modifying recency1 according to the preset decrement rule is that recency1 decreases gradually as the number of pointer cycles increases, until it reaches the first value; at that point the target data are finally written into the elimination candidate queue so that they can be evicted.
It can thus be seen that the CLOCK pointer of this embodiment is no longer used to evict data directly, as in the prior art; it merely selects some data that are about to be evicted and places them into the elimination candidate queue, and the actual eviction of the data is carried out by the elimination candidate queue according to the predetermined first-in-first-out rule. This can effectively level the load peaks on the storage array behind the cache.
In a specific implementation, this embodiment fully considers how to handle data that are about to be evicted but become hot, accessed data again. Specifically, when the target data in the data elimination queue or the data elimination candidate queue are accessed, the second processing module 43 may be configured to delete the target data from the data elimination queue or the data elimination candidate queue and modify the value of the current status attribute parameter of the target data according to an increment rule that is the inverse of the decrement rule. In this way, target data that have become hot can be prevented from being evicted.
Here, modifying the value of the current status attribute parameter of the target data according to the increment rule that is the inverse of the decrement rule may specifically be incrementing the value by n. In this embodiment, the effect of modifying the recency1 value of the target data according to the preset increment rule is that recency1 grows gradually as the number of accesses increases. Thus, frequently accessed target data will not be written into the elimination candidate queue in the short term, and hot, accessed data are prevented from being evicted.
Further, in some other embodiments, the storage device of the embodiments of the present invention may also comprise a recording module (not shown) and/or a pointer management module (not shown), where:
the recording module is configured to record the previous historical value of the current status attribute parameter recency1 in a historical status attribute parameter recency2. In this way, the embodiment records the access state of the data in the cache with two status attribute parameters, which makes the trend of a piece of data's accesses visible. For example, suppose that at time T0 data A has a recency1 value of 1 and a recency2 value of 0, and at time T1 data A is accessed again, so the recency1 value becomes 2 and the recency2 value follows it to become 1. From the change of the recency1 and recency2 values between T0 and T1 it can be seen that the access frequency of data A shows a rising trend from T0 to T1;
the pointer management module is configured to: after the rotating pointer (for example, the CLOCK pointer) has consecutively scanned X pieces of data in the cache, if among the X pieces of data there are Y pieces whose historical status attribute parameter recency2 is greater than their current status attribute parameter recency1, create a cold pointer that points to the first of the X pieces of data, and move the rotating pointer directly to the data pointed to by the cold pointer, where X and Y are positive integers.
In this way, the embodiment uses the cold pointer to change the position indicated by the rotating pointer, which makes it easy for the rotating pointer to jump to a rarely accessed data region and move those data quickly into the elimination candidate queue for eviction. This can increase the eviction speed of the cached data and improve the performance of the whole system.
In a specific implementation, the pointer management module is further configured to: when the amount of data in the data elimination queue plus the amount of data in the elimination candidate queue is less than a first threshold, speed up the rate at which the rotating pointer (for example, the CLOCK pointer) traverses the data in the cache; and when the amount of data in the data elimination queue plus the amount of data in the elimination candidate queue is greater than a second threshold, adjust the rate at which the rotating pointer (for example, the CLOCK pointer) traverses the data in the cache back to the predetermined speed, the second threshold being greater than the first threshold. In a specific implementation, the first threshold and the second threshold may be absolute values or percentages; when percentages are used, the percentage is first multiplied by the total amount of data in the cache, and the result is then compared with the sum of the amount of data in the data elimination queue and the amount of data in the elimination candidate queue.
In this way, by controlling the moving speed of the rotating pointer, the embodiment keeps the amount of relatively low-value data in the elimination candidate queue and the data elimination queue at a reasonable level, thereby effectively absorbing bursts of system input and output.
The above embodiments all define the structure of the cache-management device from the angle of the functional modules it comprises. In a specific implementation, as shown in Fig. 5, as a hardware embodiment of the storage device of the embodiments of the present invention, the storage device of the present invention may comprise a memory 51 and a processor 52, where the memory 51 comprises the cache and also stores specific program code; by calling the specific program code in the memory 51, the processor 52 can execute the flows performed by the storage device in the embodiments of Fig. 1 or Fig. 2 of the present invention.
The above are merely preferred embodiments of the present invention and of course cannot limit the scope of the rights of the present invention; equivalent variations made according to the claims of the present invention therefore still fall within the scope covered by the present invention.

Claims (10)

1. A method for managing a cache, characterized in that the method is applied in a storage device, the storage device comprises a cache, and the method comprises:
the storage device determining whether target data kept in the cache belong to a sequential stream;
when the storage device determines that the target data belong to a sequential stream, writing the target data into a data elimination queue of the cache;
the storage device evicting, on a first-in-first-out basis, the target data written into the data elimination queue of the cache.
2. The method for managing a cache according to claim 1, characterized by further comprising:
when the storage device determines that the target data do not belong to a sequential stream, and the value of the current status attribute parameter of the target data is a first value when the rotating pointer cycles to the target data, writing the target data into a data elimination candidate queue and, once the data elimination queue is empty, evicting the target data written into the data elimination candidate queue of the cache on a first-in-first-out basis.
3. The method for managing a cache according to claim 1, characterized by further comprising:
when the storage device determines that the target data do not belong to a sequential stream, and the value of the current status attribute parameter of the target data is greater than the first value when the rotating pointer cycles to the target data, modifying the value of the current status attribute parameter of the target data according to a preset decrement rule.
4. The method for managing a cache according to claim 2, characterized by further comprising:
when the storage device receives an access request for the target data, if the target data are in the data elimination queue or the data elimination candidate queue, fetching the target data from the cache, deleting the target data from the data elimination queue or the data elimination candidate queue, and modifying the value of the current status attribute parameter of the target data according to an increment rule that is the inverse of the decrement rule.
5. The method for managing a cache according to claim 3, characterized by further comprising:
when the storage device receives an access request for the target data, fetching the target data from the cache, and modifying the value of the current status attribute parameter of the target data according to an increment rule that is the inverse of the decrement rule.
6. A storage device comprising a cache, characterized in that the storage device further comprises:
a determination module, configured to determine whether target data kept in the cache belong to a sequential stream;
a first processing module, configured to: when the determination module determines that the target data belong to a sequential stream, write the target data into a data elimination queue of the cache, and evict, on a first-in-first-out basis, the target data written into the data elimination queue of the cache.
7. The storage device according to claim 6, characterized by further comprising:
a second processing module, configured to: when the determination module determines that the target data do not belong to a sequential stream, and the value of the current status attribute parameter of the target data is a first value when the rotating pointer cycles to the target data, write the target data into a data elimination candidate queue and, once the data elimination queue is empty, evict the target data written into the data elimination candidate queue of the cache on a first-in-first-out basis.
8. The storage device according to claim 6, characterized by further comprising:
a third processing module, configured to: when the determination module determines that the target data do not belong to a sequential stream, and the value of the current status attribute parameter of the target data is greater than the first value when the rotating pointer cycles to the target data, modify the value of the current status attribute parameter of the target data according to a preset decrement rule.
9. The storage device according to claim 7, characterized in that the second processing module is further configured to: when the storage device receives an access request for the target data, if the target data are in the data elimination queue or the data elimination candidate queue, fetch the target data from the cache, delete the target data from the data elimination queue or the data elimination candidate queue, and modify the value of the current status attribute parameter of the target data according to an increment rule that is the inverse of the decrement rule.
10. The storage device according to claim 8, characterized in that the third processing module is further configured to: when the storage device receives an access request for the target data, fetch the target data from the cache and modify the value of the current status attribute parameter of the target data according to an increment rule that is the inverse of the decrement rule.
CN201310740472.6A 2013-12-28 2013-12-28 Method for managing cache and storage device Active CN103761052B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310740472.6A CN103761052B (en) 2013-12-28 2013-12-28 Method for managing cache and storage device

Publications (2)

Publication Number Publication Date
CN103761052A 2014-04-30
CN103761052B (en) 2016-12-07

Family

ID=50528297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310740472.6A Active CN103761052B (en) Method for managing cache and storage device

Country Status (1)

Country Link
CN (1) CN103761052B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060106985A1 (en) * 2004-11-12 2006-05-18 International Business Machines Corporation Method and systems for executing load instructions that achieve sequential load consistency
US20070074150A1 (en) * 2005-08-31 2007-03-29 Jolfaei Masoud A Queued asynchrounous remote function call dependency management
CN101561783A (en) * 2008-04-14 2009-10-21 阿里巴巴集团控股有限公司 Method and device for Cache asynchronous elimination
CN101794259A (en) * 2010-03-26 2010-08-04 成都市华为赛门铁克科技有限公司 Data storage method and device

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106294211A * 2016-08-08 2017-01-04 浪潮(北京)电子信息产业有限公司 Method and device for detecting a multichannel sequential stream
CN106294211B * 2016-08-08 2019-05-28 浪潮(北京)电子信息产业有限公司 Method and device for detecting a multichannel sequential stream
CN106991060A * 2017-02-27 2017-07-28 华为技术有限公司 Method and device for optimizing read-cache eviction
CN106991060B (en) * 2017-02-27 2020-04-14 华为技术有限公司 Elimination optimization method and device for read cache
CN111984323A (en) * 2019-05-21 2020-11-24 三星电子株式会社 Processing apparatus for distributing micro-operations to micro-operation cache and method of operating the same
CN113791989A (en) * 2021-09-15 2021-12-14 深圳市中科蓝讯科技股份有限公司 Cache data processing method based on cache, storage medium and chip
CN113791989B (en) * 2021-09-15 2023-07-14 深圳市中科蓝讯科技股份有限公司 Cache-based cache data processing method, storage medium and chip

Also Published As

Publication number Publication date
CN103761052B (en) 2016-12-07

Similar Documents

Publication Publication Date Title
US9021189B2 (en) System and method for performing efficient processing of data stored in a storage node
US10089014B2 (en) Memory-sampling based migrating page cache
US20090132770A1 (en) Data Cache Architecture and Cache Algorithm Used Therein
CN105808455B (en) Memory access method, storage-class memory and computer system
CN104503703B Method and apparatus for processing a cache
CN110413211B (en) Storage management method, electronic device, and computer-readable medium
CN103631624A (en) Method and device for processing read-write request
US7895397B2 (en) Using inter-arrival times of data requests to cache data in a computing environment
CN105094709A (en) Dynamic data compression method for solid-state disc storage system
WO2022199027A1 (en) Random write method, electronic device and storage medium
CN106897026B (en) Nonvolatile memory device and address classification method thereof
CN103761052A (en) Method for managing cache and storage device
CN109359729B (en) System and method for realizing data caching on FPGA
CN113377690A (en) Solid state disk processing method suitable for user requests of different sizes
US9063863B2 (en) Systems and methods for background destaging storage tracks
US8732404B2 (en) Method and apparatus for managing buffer cache to perform page replacement by using reference time information regarding time at which page is referred to
US20160048447A1 (en) Magnetoresistive random-access memory cache write management
CN104834478A (en) Data writing and reading method based on heterogeneous hybrid storage device
CN105630699B Solid-state disk and read/write cache management method using MRAM
TWI652570B (en) Non-volatile memory apparatus and address classification method thereof
US9262098B2 (en) Pipelined data I/O controller and system for semiconductor memory
CN103176753B Storage device and data management method thereof
KR20110115759A (en) Buffer cache managing method using ssd(solid state disk) extension buffer and apparatus for using ssd(solid state disk) as extension buffer
CN106649143B (en) Cache access method and device and electronic equipment
CN111859038A (en) Data heat degree statistical method and device for distributed storage system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant