CN107608911B - Cache data flushing method, apparatus, device and storage medium - Google Patents


Info

Publication number
CN107608911B
CN107608911B (application CN201710817298.9A)
Authority
CN
China
Prior art keywords
cache
data
cache partition
partition
flush
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710817298.9A
Other languages
Chinese (zh)
Other versions
CN107608911A (en)
Inventor
Fan Congcong (范聪聪)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN201710817298.9A
Publication of CN107608911A
Application granted
Publication of CN107608911B
Legal status: Active
Anticipated expiration

Abstract

The invention discloses a cache data flushing method, comprising: each time a preset interval is reached, determining the to-be-flushed data ratio of each cache partition in a cache device; determining the total amount of required flush resources according to the maximum value of the ratios; determining the flush level of each cache partition according to the amount of data to be flushed in that partition; for each cache partition, determining the flush resources allocated to it according to its to-be-flushed data ratio and flush level; and, based on the flush resources allocated to each cache partition, flushing the partitions in order of flush level from high to low, so that the flush level of each cache partition drops by one level. By applying the technical solution provided by the embodiments of the invention, cache partitions with less remaining space are guaranteed to be flushed first, while a preferentially flushed cache partition is prevented from monopolizing the flush resources. The invention also provides a cache data flushing apparatus, device and storage medium, with corresponding technical effects.

Description

Cache data flushing method, apparatus, device and storage medium
Technical Field
The invention relates to the technical field of computer storage, and in particular to a cache data flushing method, apparatus, device and storage medium.
Background
In a storage system, after data is written into a cache device, the cache device flushes its cached data to the back-end disks according to a flush policy. A cache device usually has a plurality of cache partitions, and since each cache partition stores a different amount of data, the remaining space of each cache partition differs.
In the prior art, some cache flushing methods do not allocate flush resources according to the specific situation of each cache partition; that is, they treat every cache partition identically and ignore partition priority, which often leaves some cache partitions with ample remaining space while others have too little.
Other cache flushing methods allocate the flush resources to the cache partition with the least remaining space. Under such methods one cache partition commonly monopolizes the flush resources of the cache device, so that other cache partitions cannot use those resources for a long time.
In addition, existing cache flush policies treat the data within a cache partition indiscriminately, processing all of it in the same way.
In summary, how to select an appropriate cache flush policy that keeps the remaining space of the cache partitions balanced is a technical problem urgently awaiting a solution from those skilled in the art.
Disclosure of Invention
The object of the invention is to provide a cache data flushing method, apparatus, device and storage medium that guarantee that cache partitions with less remaining space are flushed first. Moreover, because each cache partition is only flushed down to the next lower flush level, a preferentially flushed cache partition is prevented from monopolizing the flush resources.
In order to solve the above technical problems, the invention provides the following technical solution:
a method for flushing cache data, the method comprising:
each time a preset interval is reached, determining the to-be-flushed data ratio of each cache partition in the cache device;
determining the total amount of flush resources required by all cache partitions according to the maximum value of the to-be-flushed data ratios;
determining the corresponding flush level of each cache partition according to the amount of data to be flushed in each cache partition;
for each cache partition, determining the corresponding flush resources allocated to the cache partition from the total amount of flush resources, according to the to-be-flushed data ratio and the flush level of the cache partition;
and, based on the corresponding flush resources allocated to each cache partition, flushing the cache partitions in order of flush level from high to low, so that the flush level of each cache partition drops by one level.
Preferably, the determining of the to-be-flushed data ratio of each cache partition in the cache device includes:
for each cache partition in the cache device, determining the to-be-flushed data ratio of the cache partition according to all dirty data in the cache partition.
Preferably, the determining of the to-be-flushed data ratio of each cache partition in the cache device includes:
for each cache partition in the cache device, determining the to-be-flushed data ratio of the cache partition according to the dirty data in the cache partition whose access heat is lower than a preset heat threshold.
Preferably, after determining the corresponding flush level of each cache partition according to the amount of data to be flushed in each cache partition, the method further includes:
placing cache partitions of the same flush level into the same flush chain;
correspondingly, the flushing of the cache partitions in order of flush level from high to low includes:
flushing the cache partitions in each flush chain from the head of the chain to the tail of the chain, in order of flush level from high to low.
Preferably, after flushing the cache partitions of each flush chain from head to tail in order of flush level from high to low, the method further includes:
for each cache partition, re-confirming the flush level of the cache partition and placing it at the tail of the corresponding flush chain.
A cache data flushing apparatus, the apparatus comprising:
a to-be-flushed data ratio determining module, configured to determine the to-be-flushed data ratio of each cache partition in the cache device each time a preset interval is reached;
a total flush resource amount determining module, configured to determine the total amount of flush resources required by all cache partitions according to the maximum value of the to-be-flushed data ratios;
a flush level determining module, configured to determine the corresponding flush level of each cache partition according to the amount of data to be flushed in each cache partition;
a partition flush resource determining module, configured to determine, for each cache partition, the corresponding flush resources allocated to the cache partition from the total amount of flush resources, according to the to-be-flushed data ratio and the flush level of the cache partition;
a flushing module, configured to flush the cache partitions in order of flush level from high to low based on the corresponding flush resources allocated to each cache partition, so that the flush level of each cache partition drops by one level.
Preferably, the to-be-flushed data ratio determining module is specifically configured to:
for each cache partition in the cache device, determine the to-be-flushed data ratio of the cache partition according to the dirty data in the cache partition whose access heat is lower than a preset heat threshold.
Preferably, the apparatus further comprises:
a flush chain placing module, configured to place cache partitions of the same flush level into the same flush chain after the corresponding flush level of each cache partition has been determined according to the amount of data to be flushed in each cache partition;
correspondingly, the flushing module is specifically configured to:
flush the cache partitions in each flush chain from the head of the chain to the tail of the chain, in order of flush level from high to low.
A cache data flushing device, the device comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement: each time a preset interval is reached, determining the to-be-flushed data ratio of each cache partition in the cache device; determining the total amount of flush resources required by all cache partitions according to the maximum value of the to-be-flushed data ratios; determining the corresponding flush level of each cache partition according to the amount of data to be flushed in each cache partition; for each cache partition, determining the corresponding flush resources allocated to the cache partition from the total amount of flush resources according to the to-be-flushed data ratio and the flush level of the cache partition; and, based on the corresponding flush resources allocated to each cache partition, flushing the cache partitions in order of flush level from high to low, so that the flush level of each cache partition drops by one level.
A computer readable storage medium having stored thereon a cache data flushing program which, when executed by a processor, implements the steps of the cache data flushing method as described in any one of the above.
By applying the technical solution provided by the embodiments of the invention: each time a preset interval is reached, the to-be-flushed data ratio of each cache partition in the cache device is determined; the total amount of flush resources required by all cache partitions is determined according to the maximum value of the to-be-flushed data ratios; the corresponding flush level of each cache partition is determined according to the amount of data to be flushed in each cache partition; for each cache partition, the corresponding flush resources allocated to the cache partition are determined from the total amount of flush resources according to the to-be-flushed data ratio and the flush level of the cache partition; and, based on the corresponding flush resources allocated to each cache partition, the cache partitions are flushed in order of flush level from high to low, so that the flush level of each cache partition drops by one level.
When the cache data is flushed, the corresponding flush level of each cache partition is determined according to the amount of data to be flushed in that partition, and the cache partitions are flushed in order of flush level from high to low, so that different cache partitions have different flush priorities and cache partitions with less remaining space are guaranteed to be flushed first. After the flush level of each cache partition has been determined, the corresponding flush resources allocated to the cache partition are determined from the total amount of flush resources according to the to-be-flushed data ratio and the flush level of the cache partition. When each cache partition is flushed, not all of its to-be-flushed data is flushed; instead, each cache partition is only flushed down to the next lower flush level, which prevents a preferentially flushed cache partition from monopolizing the flush resources and also reduces the demand on the total flush resources of the cache device.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a flowchart of an implementation of a cache data flushing method according to the present invention;
FIG. 2 is a schematic structural diagram of a cache data flushing apparatus according to the present invention;
FIG. 3 is a schematic structural diagram of a cache data flushing device according to the present invention.
Detailed Description
The core of the invention is to provide a cache data flushing method that guarantees that cache partitions with less remaining space are flushed first. Moreover, because each cache partition is only flushed down to the next lower flush level, a preferentially flushed cache partition is prevented from monopolizing the flush resources.
In order that those skilled in the art may better understand the disclosure, the invention is described in further detail below with reference to the accompanying drawings and specific embodiments. It is to be understood that the described embodiments are merely some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art from the embodiments given herein without creative effort fall within the protection scope of the present invention.
Referring to FIG. 1, which shows a flowchart of an implementation of a cache data flushing method according to the present invention, the method is applied to a cache device and includes:
S101: each time a preset interval is reached, determine the to-be-flushed data ratio of each cache partition in the cache device.
A timer may be set in the cache device, and the to-be-flushed data ratio of each cache partition in the cache device is determined each time a preset interval is reached. The preset interval can be set and adjusted according to the actual situation without affecting the implementation of the invention. For example, when there is more cached data in the cache device and sequentially flushing each cache partition takes longer, the preset interval may be lengthened, say from 5 seconds to 10 seconds.
For each cache partition, the amount of data to be flushed in the cache partition may be calculated, and the to-be-flushed data ratio of the cache partition determined, that is, the proportion of the data capacity of the cache partition occupied by the data to be flushed. Generally the cache partitions in a cache device have the same data capacity, though partitions of different data capacities are also possible without affecting the implementation of the invention.
In one embodiment of the present invention, the determining of the to-be-flushed data ratio of each cache partition in the cache device in step S101 includes:
for each cache partition in the cache device, determining the to-be-flushed data ratio of the cache partition according to all dirty data in the cache partition.
In this embodiment, the dirty data in each cache partition is taken as the data to be flushed. The amount of all dirty data in the cache partition may be calculated, and the to-be-flushed data ratio determined as the proportion of the data capacity of the cache partition occupied by the dirty data; this ratio may be denoted fullness. For example, if the data capacity of a cache partition is 100 and the amount of dirty data is 60, the to-be-flushed data ratio of the cache partition is determined to be 60%.
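As an illustrative sketch only (the patent itself contains no code), the fullness computation described above can be expressed as follows; the name `fullness` follows the notation in the text, while the parameter names are assumptions:

```python
def fullness(dirty_amount: float, capacity: float) -> float:
    """To-be-flushed data ratio of one cache partition: the share of its
    data capacity occupied by dirty data (denoted 'fullness' in the text)."""
    if capacity <= 0:
        raise ValueError("partition capacity must be positive")
    return dirty_amount / capacity

# Example from the text: capacity 100, dirty data 60 -> ratio 60%
ratio = fullness(60, 100)  # 0.6
```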
In another embodiment of the present invention, the determining of the to-be-flushed data ratio of each cache partition in the cache device in step S101 includes:
for each cache partition in the cache device, determining the to-be-flushed data ratio of the cache partition according to the dirty data in the cache partition whose access heat is lower than a preset heat threshold.
For each cache partition in the cache device, the amount of all dirty data in the cache partition may be calculated, the access heat of each piece of dirty data determined, the dirty data whose access heat is lower than a preset heat threshold identified, and the to-be-flushed data ratio determined as the proportion of the data capacity of the cache partition occupied by that dirty data. With this embodiment, the data in the cache partition is treated differentially, ensuring that data with high access heat remains in the cache partition.
For example, if the data capacity of a cache partition is 100 and the amount of dirty data is 60, of which 11 is dirty data at or above the preset heat threshold and 49 is dirty data below it, the to-be-flushed data ratio of the cache partition is determined to be 49%. It should be noted that access heat refers to the number of accesses to dirty data within a set time period; both the time period and the preset heat threshold can be set and adjusted according to the actual situation without affecting the implementation of the invention. For example, the heat threshold may be set to one access every 10 seconds: when the number of accesses within 10 seconds is higher than one, the access heat is higher than the preset heat threshold. In a specific implementation, dirty data whose access heat is lower than the preset threshold can be placed into a cache flush linked list, and when the cache data is flushed, the data in this list is taken as the data to be flushed for the cache partition, ensuring that hot data remains in the partition. Of course, in one embodiment of the invention, when the access heat of dirty data in the cache flush linked list rises, that dirty data may be moved out of the list. The cache flush linked list may be denoted stall_list.
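A sketch of this heat-filtered variant: only dirty data whose access count in the sampling window is below the threshold goes onto the stall_list and counts toward the to-be-flushed ratio. The `DirtyBlock` structure and every name other than `stall_list` are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class DirtyBlock:
    size: int                 # amount of dirty data in this block
    accesses_in_window: int   # access count within the set time period

def build_stall_list(dirty_blocks, heat_threshold=1):
    """Collect dirty data whose access heat is below the preset threshold;
    only these blocks count as data to be flushed."""
    return [b for b in dirty_blocks if b.accesses_in_window < heat_threshold]

def cold_ratio(dirty_blocks, capacity, heat_threshold=1):
    stall_list = build_stall_list(dirty_blocks, heat_threshold)
    return sum(b.size for b in stall_list) / capacity

# Example from the text: capacity 100, dirty data 60, of which 11 is hot
blocks = [DirtyBlock(size=11, accesses_in_window=2),
          DirtyBlock(size=49, accesses_in_window=0)]
ratio = cold_ratio(blocks, 100)  # 0.49
```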
After the to-be-flushed data ratio of each cache partition in the cache device has been determined, the operation of step S102 may be performed.
S102: determine the total amount of flush resources required by all cache partitions according to the maximum value of the to-be-flushed data ratios.
After the to-be-flushed data ratio of each cache partition has been determined, the maximum of these ratios is found, and the total amount of flush resources required by all cache partitions is determined from it. That total may be denoted resources, and the maximum to-be-flushed data ratio may be denoted cache_full. Specifically, determining the total amount of flush resources required by all cache partitions means: for the flushing performed before the next interval arrives, the total amount of flush resources theoretically required by all cache partitions is allocated from the total flush resources of the cache device, in a suitable proportion determined by the maximum to-be-flushed data ratio. For example, suppose the cache device has three cache partitions, with to-be-flushed data ratios of 80% for cache partition A, 40% for cache partition B, and 20% for cache partition C; the maximum to-be-flushed data ratio is then determined to be 80%. Generally, the higher the maximum to-be-flushed data ratio, the higher the total amount of flush resources required by all cache partitions.
For example, if the total flush resources of the cache device amount to 4000, then when the maximum to-be-flushed data ratio is 10%, the total amount of flush resources required by all cache partitions is, according to a preset rule, 30% of 4000, that is, 1200; and when the maximum to-be-flushed data ratio is 80%, the total amount is, according to the preset rule, 90% of 4000, that is, 3600. Of course, the specific correspondence between the maximum to-be-flushed data ratio and the total amount of flush resources required by all cache partitions can be set and adjusted according to the actual situation without affecting the implementation of the invention.
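The mapping from the maximum to-be-flushed data ratio (cache_full) to the cycle's total flush resources is left to a "preset rule"; the step table below is an assumption fitted to the two sample points in the text (10% → 30% and 80% → 90%), with the middle row invented for illustration:

```python
TOTAL_DEVICE_RESOURCES = 4000  # example total flush resources of the cache device

# Assumed preset rule: (minimum cache_full, share of device resources granted).
# Only the 0.10 and 0.80 rows come from the text; the 0.40 row is illustrative.
PRESET_RULE = [(0.10, 0.30), (0.40, 0.60), (0.80, 0.90)]

def total_flush_resources(cache_full: float) -> int:
    """Total flush resources granted to all cache partitions for one cycle,
    chosen by the highest rule step not exceeding cache_full."""
    share = PRESET_RULE[0][1]
    for min_ratio, granted_share in PRESET_RULE:
        if cache_full >= min_ratio:
            share = granted_share
    return round(TOTAL_DEVICE_RESOURCES * share)
```

With the sample points from the text, `total_flush_resources(0.10)` yields 1200 and `total_flush_resources(0.80)` yields 3600.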
After the total amount of flush resources required by all cache partitions has been determined, the operation of step S103 may be performed.
S103: and determining the corresponding flash level of each cache partition according to the data volume of the data to be flashed of each cache partition.
For each cache partition, the data amount of the data to be flushed for the cache partition may be calculated. For example, when all dirty data is used as the data to be flushed and the data to be flushed is determined to be the data amount of the dirty data according to all dirty data, in step S103, the data amount of the data to be flushed may be determined to be the data amount of the dirty data, and when dirty data with low access heat is used as the data to be flushed and the data to be flushed is determined to be the data amount of the dirty data with low access heat, in step S103, the data amount of the data to be flushed may be determined to be the data amount of the dirty data with low access heat. And determining the corresponding flash level of each cache partition according to the data volume of the data to be flashed of each cache partition. For example, when the data amount of the data to be flushed in the cache partition is greater than or equal to 500 and less than 1000, the flushing level of the cache partition is determined to be level 5, and when the data amount of the data to be flushed is greater than or equal to 200 and less than 500, the flushing level of the cache partition is determined to be level 4. The flash level can be represented as rank, and it should be noted that, the requirement of each rank level on the data amount to be flashed, the number of rank levels, and the like can be set and adjusted according to the actual situation, and the implementation of the present invention is not affected. For example, rank levels are set to a total of 5 levels, and the lower limit of each rank level is arranged in equal difference.
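A sketch of the rank lookup. Only the level-5 and level-4 lower bounds (500 and 200) are given in the text; the bounds for levels 3 to 1 are invented here for illustration:

```python
# (lower bound of to-be-flushed data amount, flush level); the 500 and 200
# bounds come from the text's example, the remaining bounds are assumptions.
RANK_TABLE = [(500, 5), (200, 4), (100, 3), (50, 2), (0, 1)]

def flush_rank(to_flush_amount: int) -> int:
    """Flush level (rank) of a cache partition, determined by the amount
    of data waiting to be flushed: more pending data, higher rank."""
    for lower_bound, rank in RANK_TABLE:
        if to_flush_amount >= lower_bound:
            return rank
    return 1
```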
S104: and for each cache partition, determining corresponding flash resources allocated to the cache partition from the total amount of the flash resources according to the percentage of the data to be flashed of the cache partition and the flash level.
For each cache partition, according to the percentage of data to be flushed and the flushing level of the cache partition, and according to a preset proportion, determining corresponding flushing resources allocated to the cache partition from the total quantity of the flushing resources. Generally, the higher the data to be flushed is, the higher the flushing level is, and the more flushing resources are allocated by the cache partition from the total amount of flushing resources. It should be noted that the flush resource allocated to the cache partition should decrease the level of the flush of the cache partition by one level after the flush is completed.
S105: and based on the corresponding flash resources distributed to each cache partition, sequentially flashing each cache partition according to the sequence from high to low of the flash level, so that the flash level of each cache partition is reduced by one level.
After the flash level of each partition is determined, based on the corresponding flash resources allocated to each cache partition, the cache partitions are sequentially flashed from high to low in order of the flash level, so that the flash level of each cache partition is reduced by one level. For example, the cache data of the nth cache partition is being flushed, the percentage of data to be flushed of the cache partition is 30%, the flushing level is 3, the total amount of the flushing resources required in all the cache partitions determined in step S102 is 300, and the flushing resources of the cache partition are allocated from the total amount of the 300 flushing resources according to the percentage of data to be flushed of the cache partition and the flushing level, so that the flushing level of the cache partition becomes 2. The flush resource allocated by the nth cache partition from the total amount of resources may be denoted as resources _ n.
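Steps S104 and S105 can be sketched together as below. The text only requires that partitions with a higher to-be-flushed ratio and a higher flush level receive more resources, and that each partition is flushed just far enough to drop one level; the ratio-times-rank weighting used here is an assumed concrete rule, and all names are illustrative:

```python
def allocate_and_flush(partitions, total_resources):
    """Sketch of S104-S105: allocate the cycle's total flush resources
    among partitions, then flush from the highest flush level downward
    so each partition's level drops by one."""
    # Assumed weighting: to-be-flushed ratio x flush level (rank).
    weights = {p["name"]: p["ratio"] * p["rank"] for p in partitions}
    total_w = sum(weights.values()) or 1.0
    alloc = {name: total_resources * w / total_w for name, w in weights.items()}
    # Flush in order of flush level from high to low; each partition is
    # flushed only far enough to lower its level by one.
    for p in sorted(partitions, key=lambda q: q["rank"], reverse=True):
        p["rank"] = max(p["rank"] - 1, 0)
    return alloc
```

For example, with partitions A (ratio 0.8, rank 5), B (0.4, rank 3) and C (0.2, rank 1) and a cycle total of 3600, A receives the largest share and every rank drops by one after the cycle.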
The cache partitions are flushed in order of flush level from high to low, and generally each cache partition has been flushed down to the next lower flush level before the next preset interval arrives. It should be noted that in a specific implementation, when the next preset interval arrives, it may happen that some cache partitions with lower flush levels have not yet started flushing their data while the remaining flush resources of the cache device are still sufficient; in this case one may consider whether the preset interval is too short and adjust it appropriately.
By applying the technical solution provided by the embodiments of the invention: each time a preset interval is reached, the to-be-flushed data ratio of each cache partition in the cache device is determined; the total amount of flush resources required by all cache partitions is determined according to the maximum value of the to-be-flushed data ratios; the corresponding flush level of each cache partition is determined according to the amount of data to be flushed in each cache partition; for each cache partition, the corresponding flush resources allocated to the cache partition are determined from the total amount of flush resources according to the to-be-flushed data ratio and the flush level of the cache partition; and, based on the corresponding flush resources allocated to each cache partition, the cache partitions are flushed in order of flush level from high to low, so that the flush level of each cache partition drops by one level.
When the cache data is flushed, the corresponding flush level of each cache partition is determined according to the amount of data to be flushed in that partition, and the cache partitions are flushed in order of flush level from high to low, so that different cache partitions have different flush priorities and cache partitions with less remaining space are guaranteed to be flushed first. After the flush level of each cache partition has been determined, the corresponding flush resources allocated to the cache partition are determined from the total amount of flush resources according to the to-be-flushed data ratio and the flush level of the cache partition. When each cache partition is flushed, not all of its to-be-flushed data is flushed; instead, each cache partition is only flushed down to the next lower flush level, which prevents a preferentially flushed cache partition from monopolizing the flush resources and also reduces the demand on the total flush resources of the cache device.
In one embodiment of the present invention, after step S103, the method further includes:
placing cache partitions of the same flush level into the same flush chain;
correspondingly, the flushing of the cache partitions in order of flush level from high to low in step S105 includes:
flushing the cache partitions in each flush chain from the head of the chain to the tail of the chain, in order of flush level from high to low.
The corresponding flush level of each cache partition is determined according to its amount of data to be flushed. This embodiment of the invention may be used when two or more cache partitions have the same flush level: cache partitions of the same flush level are placed into the same flush chain, where a flush chain is a queue formed by cache partitions of the same flush level. When the cache data is flushed, the cache partitions of one flush chain, that is, of the same flush level, can be flushed in turn from the head of the chain to the tail.
In one embodiment of the present invention, after the cache partitions of each flush chain have been flushed from head to tail in order of flush level from high to low, the method further includes: for each cache partition, re-confirming its flush level and placing it at the tail of the corresponding flush chain. Of course, all flush chains may instead be cleared after the cache partitions have been flushed and regenerated when the next preset interval arrives, without affecting the implementation of the invention.
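The flush-chain bookkeeping can be sketched as one FIFO queue per flush level; `FlushScheduler` and its method names are illustrative assumptions, not names from the patent:

```python
from collections import defaultdict, deque

class FlushScheduler:
    """One flush chain (FIFO queue) per flush level: partitions are
    flushed chain by chain from the highest level, from head to tail,
    then re-ranked and placed at the tail of their new chain."""
    def __init__(self):
        self.chains = defaultdict(deque)  # flush level -> chain of partitions

    def enqueue(self, partition, rank):
        self.chains[rank].append(partition)  # place at the chain tail

    def flush_cycle(self, flush_fn, rerank_fn):
        requeued = []
        for rank in sorted(self.chains, reverse=True):
            chain = self.chains[rank]
            while chain:                      # flush from head to tail
                part = chain.popleft()
                flush_fn(part)                # flushes the partition down one level
                requeued.append((part, rerank_fn(part)))
        # re-confirm levels and place each partition at its new chain tail
        for part, new_rank in requeued:
            self.enqueue(part, new_rank)
```

Collecting the re-ranked partitions before re-queueing ensures each partition is flushed exactly once per cycle, even though its new chain may still be visited later in the same pass.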
It should be noted that, in a specific implementation, factors such as a low total amount of the flash resources of the cache device, insufficient remaining flash resources, and a large amount of data to be flashed in each cache partition may cause that some cache partitions with low flash levels do not start the flash of data, and the remaining flash resources of the cache device are already used up, which may increase the total amount of the flash resources of the cache device. Of course, in a specific embodiment of the present invention, the flush resources allocated to each cache partition may also be adjusted, so that each cache partition can use the flush resources of the cache device.
For example, suppose that in the first cycle the 7th cache partition requests 80 units of flush resources from the total amount but only 50 remain, so neither the 8th nor the 9th cache partition can be allocated the resources it needs. In the second cycle, after the first 6 cache partitions have been allocated the resources they need, a set proportion of each of those allocations is deducted, based on the flush resources they received in the first cycle, and the deducted resources are reserved for the 7th, 8th, and 9th cache partitions. Of course, once the flush level of every cache partition has been lowered in the second cycle, the flush process can run normally for each cache partition in the third cycle, without any deduction of flush resources.
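The deduction scheme in this example can be sketched as follows. The deduction proportion, the allocation order, and all names are assumptions, since the patent leaves the exact mechanism unspecified:

```python
def allocate_with_carryover(demands, total, prev_served, deduction=0.2):
    """Allocate flush resources to partitions in flush-level order.

    demands: list of (partition_id, needed) pairs, sorted by flush level
             from high to low.
    total: total amount of flush resources available this cycle.
    prev_served: set of partition ids that received a full allocation in
             the previous cycle; they give up a fraction `deduction` of
             their demand so that starved partitions catch up sooner.
    """
    grants = {}
    remaining = total
    for pid, need in demands:
        ask = need * (1 - deduction) if pid in prev_served else need
        grant = min(ask, remaining)   # never exceed what is left
        grants[pid] = grant
        remaining -= grant
    return grants
```

With this sketch, a partition that monopolized resources last cycle asks for 20% less, leaving headroom for the partitions that were cut off.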
Corresponding to the above method embodiment, an embodiment of the present invention further provides a cache data flushing apparatus; the apparatus described below and the cache data flushing method described above may be cross-referenced.
Referring to fig. 2, which shows a schematic structural diagram of a cache data flushing apparatus according to the present invention, the apparatus is applied to a cache device and may include the following modules:
a to-be-flushed data ratio determining module 201, configured to determine the ratio of data to be flushed for each cache partition in the cache device each time a preset interval is reached;
a total flush resource amount determining module 202, configured to determine, according to the maximum of the to-be-flushed data ratios, the total amount of flush resources required by all cache partitions;
a flush level determining module 203, configured to determine the corresponding flush level of each cache partition according to the amount of data to be flushed in that partition;
a partition flush resource determining module 204, configured to determine, for each cache partition, the corresponding flush resources allocated to it from the total amount of flush resources, according to the partition's to-be-flushed data ratio and flush level;
a flushing module 205, configured to flush each cache partition in sequence, from the highest flush level to the lowest, based on the corresponding flush resources allocated to it, so that the flush level of each cache partition is lowered by one level.
By applying the apparatus provided by this embodiment of the invention: each time a preset interval is reached, the ratio of data to be flushed is determined for each cache partition in the cache device; the total amount of flush resources required by all cache partitions is determined according to the maximum of those ratios; the corresponding flush level of each cache partition is determined according to the amount of data to be flushed in that partition; for each cache partition, the corresponding flush resources allocated to it are determined from the total amount according to its to-be-flushed ratio and flush level; and, based on those allocations, the cache partitions are flushed in sequence from the highest flush level to the lowest, so that each partition's flush level is lowered by one level.
When cache data is flushed, the flush level of each cache partition is determined by the amount of data it needs to flush, and the partitions are flushed in order from the highest level to the lowest, so that different cache partitions have different flush priorities and the partitions with the least remaining space are guaranteed to be flushed first. The flush resources allocated to each partition are determined from the total amount according to that partition's to-be-flushed ratio and flush level. When a cache partition is flushed, not all of its pending data is written out; instead, it is flushed only far enough to drop one flush level. This prevents the partitions that are flushed first from monopolizing the flush resources, and it reduces the total amount of flush resources the cache device requires.
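One flush cycle of this scheme might look like the sketch below. The concrete formulas (how the total is derived from the maximum ratio, how levels and per-partition shares are computed) are stated assumptions, since the patent leaves them unspecified:

```python
def flush_cycle(partitions, bucket=100, total_per_ratio=1000):
    """One flush cycle: ratios -> total resources -> levels -> shares.

    partitions: dict mapping id -> {'dirty': data to flush, 'capacity': size}.
    Returns (per-partition allocation, flush order).
    """
    # Step 1: ratio of data to be flushed in each partition.
    ratios = {p: v['dirty'] / v['capacity'] for p, v in partitions.items()}
    # Step 2: total flush resources scale with the maximum ratio (assumed linear).
    total = max(ratios.values()) * total_per_ratio
    # Step 3: flush level grows with the amount of dirty data (assumed bucketing).
    levels = {p: v['dirty'] // bucket for p, v in partitions.items()}
    # Step 4: each partition's share weights its ratio by its level (assumed).
    weights = {p: ratios[p] * (levels[p] + 1) for p in partitions}
    wsum = sum(weights.values())
    alloc = {p: total * w / wsum for p, w in weights.items()}
    # Step 5: flush from the highest level down; after using its share,
    # each partition is meant to drop exactly one flush level.
    order = sorted(partitions, key=lambda p: levels[p], reverse=True)
    return alloc, order
```

Note that the allocation sums to the total, so a high-level partition cannot claim resources earmarked for lower-level ones.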
In an embodiment of the present invention, the to-be-flushed data ratio determining module 201 is specifically configured to:
determine, for each cache partition in the cache device, the ratio of data to be flushed according to all of the dirty data in that partition.
In another embodiment of the present invention, the to-be-flushed data ratio determining module 201 is specifically configured to:
determine, for each cache partition in the cache device, the ratio of data to be flushed according to the dirty data in that partition whose access heat is below a preset heat threshold.
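A minimal sketch of this heat-filtered ratio calculation, assuming a partition's data is tracked as (size, dirty flag, access heat) records — an assumption for illustration; the patent does not define the bookkeeping:

```python
def to_flush_ratio(blocks, capacity, heat_threshold):
    """Ratio of data to be flushed for one cache partition.

    Only dirty blocks whose access heat is below the threshold count,
    so hot dirty data stays in cache and keeps absorbing writes.

    blocks: iterable of (size, is_dirty, heat) tuples.
    capacity: total capacity of the partition, in the same size units.
    """
    cold_dirty = sum(size for size, is_dirty, heat in blocks
                     if is_dirty and heat < heat_threshold)
    return cold_dirty / capacity
```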
In one embodiment of the present invention, the apparatus further includes:
a flush chain placing module, configured to place cache partitions of the same flush level into the same flush chain after the corresponding flush level of each cache partition has been determined according to its amount of data to be flushed;
correspondingly, the flushing module 205 is specifically configured to:
flush the cache partitions of each flush chain from the head of the chain to the tail, in order of flush level from high to low.
In an embodiment of the present invention, the apparatus further includes a flush chain moving module, specifically configured to:
after the cache partitions have been flushed in sequence from the head to the tail of each flush chain in order of flush level from high to low, re-evaluate the flush level of each cache partition and place it at the tail of the corresponding flush chain.
Corresponding to the above method embodiment, an embodiment of the present invention further provides a cache data flushing device; the device described below and the cache data flushing method and apparatus described above may be cross-referenced.
Referring to fig. 3, which shows a schematic structural diagram of a cache data flushing device according to the present invention, the device includes:
a memory 301, configured to store a computer program;
a processor 302, configured to execute the computer program to implement: determining, each time a preset interval is reached, the ratio of data to be flushed for each cache partition in the cache device; determining, according to the maximum of the to-be-flushed data ratios, the total amount of flush resources required by all cache partitions; determining the corresponding flush level of each cache partition according to the amount of data to be flushed in that partition; for each cache partition, determining, from the total amount of flush resources, the corresponding flush resources allocated to it according to its to-be-flushed data ratio and flush level; and, based on the corresponding flush resources allocated to each cache partition, flushing the cache partitions in sequence from the highest flush level to the lowest, so that each partition's flush level is lowered by one level.
Corresponding to the above method embodiment, an embodiment of the present invention further provides a computer-readable storage medium; the storage medium described below and the cache data flushing method, apparatus, and device described above may be cross-referenced. The computer-readable storage medium stores a cache data flushing program which, when executed by a processor, implements the steps of the cache data flushing method described above; those steps are not repeated here.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and the same or similar parts among the embodiments can be cross-referenced. Since the apparatus disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is brief, and the relevant points can be found in the description of the method.
Those skilled in the art will further appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or a combination of the two; to clearly illustrate this interchangeability of hardware and software, the components and steps above have been described generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of the method or algorithm described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The principles and implementations of the present invention are explained herein using specific examples; the above description of the embodiments is intended only to help in understanding the technical solution and core idea of the invention. It should be noted that those of ordinary skill in the art can make various improvements and modifications to the invention without departing from its principles, and such improvements and modifications also fall within the scope of the claims of the present invention.

Claims (7)

1. A cache data flushing method, comprising:
determining, each time a preset interval is reached, the ratio of data to be flushed for each cache partition in a cache device;
determining, according to the maximum of the to-be-flushed data ratios, the total amount of flush resources required by all cache partitions;
determining the corresponding flush level of each cache partition according to the amount of data to be flushed in that partition;
for each cache partition, determining, from the total amount of flush resources, the corresponding flush resources allocated to the cache partition according to its to-be-flushed data ratio and flush level;
based on the corresponding flush resources allocated to each cache partition, flushing the cache partitions in sequence from the highest flush level to the lowest, so that the flush level of each cache partition is lowered by one level;
wherein determining the ratio of data to be flushed for each cache partition in the cache device comprises:
for each cache partition in the cache device, determining the ratio of data to be flushed according to the dirty data in that partition whose access heat is below a preset heat threshold.
2. The method according to claim 1, wherein after determining the corresponding flush level of each cache partition according to the amount of data to be flushed in that partition, the method further comprises:
placing cache partitions of the same flush level into the same flush chain;
correspondingly, flushing the cache partitions in sequence from the highest flush level to the lowest comprises:
flushing the cache partitions of each flush chain from the head of the chain to the tail, in order of flush level from high to low.
3. The method according to claim 2, wherein after flushing the cache partitions in sequence from the head to the tail of each flush chain in order of flush level from high to low, the method further comprises:
for each cache partition, re-evaluating the flush level of the cache partition and placing it at the tail of the corresponding flush chain.
4. A cache data flushing apparatus, comprising:
a to-be-flushed data ratio determining module, configured to determine, each time a preset interval is reached, the ratio of data to be flushed for each cache partition in a cache device;
a total flush resource amount determining module, configured to determine, according to the maximum of the to-be-flushed data ratios, the total amount of flush resources required by all cache partitions;
a flush level determining module, configured to determine the corresponding flush level of each cache partition according to the amount of data to be flushed in that partition;
a partition flush resource determining module, configured to determine, for each cache partition, the corresponding flush resources allocated to the cache partition from the total amount of flush resources, according to the partition's to-be-flushed data ratio and flush level;
a flushing module, configured to flush each cache partition in sequence, from the highest flush level to the lowest, based on the corresponding flush resources allocated to each cache partition, so that the flush level of each cache partition is lowered by one level;
wherein the to-be-flushed data ratio determining module is specifically configured to:
for each cache partition in the cache device, determine the ratio of data to be flushed according to the dirty data in that partition whose access heat is below a preset heat threshold.
5. The apparatus of claim 4, further comprising:
a flush chain placing module, configured to place cache partitions of the same flush level into the same flush chain after the corresponding flush level of each cache partition has been determined according to its amount of data to be flushed;
correspondingly, the flushing module is specifically configured to:
flush the cache partitions of each flush chain from the head of the chain to the tail, in order of flush level from high to low.
6. A cache data flushing device, comprising:
a memory, configured to store a computer program;
a processor, configured to execute the computer program to implement: determining, each time a preset interval is reached, the ratio of data to be flushed for each cache partition in a cache device; determining, according to the maximum of the to-be-flushed data ratios, the total amount of flush resources required by all cache partitions; determining the corresponding flush level of each cache partition according to the amount of data to be flushed in that partition; for each cache partition, determining, from the total amount of flush resources, the corresponding flush resources allocated to the cache partition according to its to-be-flushed data ratio and flush level; and, based on the corresponding flush resources allocated to each cache partition, flushing the cache partitions in sequence from the highest flush level to the lowest, so that the flush level of each cache partition is lowered by one level;
wherein determining the ratio of data to be flushed for each cache partition in the cache device comprises:
for each cache partition in the cache device, determining the ratio of data to be flushed according to the dirty data in that partition whose access heat is below a preset heat threshold.
7. A computer-readable storage medium having stored thereon a cache data flushing program which, when executed by a processor, implements the steps of the cache data flushing method of any one of claims 1 to 3.
CN201710817298.9A 2017-09-12 2017-09-12 Cache data flashing method, device, equipment and storage medium Active CN107608911B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710817298.9A CN107608911B (en) 2017-09-12 2017-09-12 Cache data flashing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710817298.9A CN107608911B (en) 2017-09-12 2017-09-12 Cache data flashing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN107608911A CN107608911A (en) 2018-01-19
CN107608911B true CN107608911B (en) 2020-09-22

Family

ID=61063597

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710817298.9A Active CN107608911B (en) 2017-09-12 2017-09-12 Cache data flashing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN107608911B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110275670B (en) * 2018-03-16 2021-02-05 华为技术有限公司 Method and device for controlling data flow in storage device, storage device and storage medium
CN109032517B (en) * 2018-07-19 2021-06-29 广东浪潮大数据研究有限公司 Data destaging method and device and computer readable storage medium
CN109614045B (en) * 2018-12-06 2022-04-15 广东浪潮大数据研究有限公司 Metadata dropping method and device and related equipment
CN109614344A (en) * 2018-12-12 2019-04-12 浪潮(北京)电子信息产业有限公司 A kind of spatial cache recovery method, device, equipment and storage system
CN109783023A (en) * 2019-01-04 2019-05-21 平安科技(深圳)有限公司 The method and relevant apparatus brushed under a kind of data
CN113625966B (en) * 2021-07-25 2024-02-13 济南浪潮数据技术有限公司 Data brushing method, system, equipment and medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120297143A1 (en) * 2011-05-18 2012-11-22 Canon Kabushiki Kaisha Data supply device, cache device, data supply method, and cache method
CN105630638A (en) * 2014-10-31 2016-06-01 国际商业机器公司 Equipment and method for distributing cache for disk array
CN106649139A (en) * 2016-12-29 2017-05-10 北京奇虎科技有限公司 Data eliminating method and device based on multiple caches

Also Published As

Publication number Publication date
CN107608911A (en) 2018-01-19

Similar Documents

Publication Publication Date Title
CN107608911B (en) Cache data flashing method, device, equipment and storage medium
US20160048345A1 (en) Allocation of read/write channels for storage devices
DE102005028827A1 (en) Flash memory device and method for defect block treatment
CN106233269A (en) Fine granulation bandwidth supply in Memory Controller
CN113051075B (en) Kubernetes intelligent capacity expansion method and device
JP2019200833A5 (en)
CN107450985B (en) Memory management method, mobile terminal and storage medium
CN105404595B (en) Buffer memory management method and device
CN111142803B (en) Metadata disk refreshing method, device, equipment and medium
CN105306385B (en) The control method and device of downlink network bandwidth
CN113391914A (en) Task scheduling method and device
CN106254566A (en) A kind of data download processing method and device
CN111581010A (en) Read operation processing method, device and equipment and readable storage medium
CN109491611B (en) Metadata dropping method, device and equipment
CN111176570B (en) Thick backup roll creating method, device, equipment and medium
CN106557430B (en) A kind of data cached brush method and device
JP6421635B2 (en) Electronic control device and memory rewriting method
CN104751883B (en) A kind of programmed method of nonvolatile memory
CN109450724A (en) A kind of test method and relevant apparatus of NFS internal memory optimization function
CN106952662B (en) The memory device of memory device method for refreshing and adjustable refresh operation frequency
CN108132756B (en) Method and device for refreshing storage array
CN113126923B (en) Additional uploading method, system, equipment and storage medium for distributed object storage
CN105242878B (en) A kind of method and device of QoS controls
CN110908604B (en) Request processing delay adjusting method and device, electronic equipment and storage medium
CN109992213B (en) Data deleting method, system, device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200826

Address after: 215100 No. 1 Guanpu Road, Guoxiang Street, Wuzhong Economic Development Zone, Suzhou City, Jiangsu Province

Applicant after: SUZHOU LANGCHAO INTELLIGENT TECHNOLOGY Co.,Ltd.

Address before: 450018 Henan province Zheng Dong New District of Zhengzhou City Xinyi Road No. 278 16 floor room 1601

Applicant before: ZHENGZHOU YUNHAI INFORMATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant