CN106649139B - Data elimination method and device based on multiple caches - Google Patents


Info

Publication number
CN106649139B
CN106649139B (application CN201611246005.8A)
Authority
CN
China
Prior art keywords
cache
thread pool
data
level
threads
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611246005.8A
Other languages
Chinese (zh)
Other versions
CN106649139A
Inventor
王文铎
陈宗志
彭信东
王康
Current Assignee
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd filed Critical Beijing Qihoo Technology Co Ltd
Priority to CN201611246005.8A
Publication of CN106649139A
Priority to PCT/CN2017/115616 (WO2018121242A1)
Application granted
Publication of CN106649139B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/08: Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0806: Multiuser, multiprocessor or multiprocessing cache systems
    • G06F 12/0811: Multiuser, multiprocessor or multiprocessing cache systems with multilevel cache hierarchies

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a data elimination method and device based on multiple caches, relating to the field of computer technology. The method comprises the following steps: dividing caches into a plurality of cache levels according to a preset level division rule, and creating a matched thread pool for each cache level, each thread pool containing a plurality of threads; scanning each cache with the threads in each thread pool, and determining the cache level of each cache according to the scanning result and the level division rule; and using the threads in each thread pool to eliminate data in the caches whose cache level matches that thread pool. By adopting a multithreaded processing mode, the invention greatly improves the efficiency of eliminating data from the caches.

Description

Data elimination method and device based on multiple caches
Technical Field
The invention relates to the technical field of computers, in particular to a data elimination method and device based on multiple caches.
Background
A cache (Cache) is an important technology for resolving the speed mismatch between high-speed and low-speed devices. It is widely applied in fields such as storage systems, databases, web servers, processors, file systems and disk systems, and can reduce application response time and improve efficiency. However, the storage media used to implement caching, such as RAM and SSD, offer high performance at a high price, so cost-effectiveness limits cache capacity. Cache space therefore needs to be managed effectively, which has given rise to a variety of cache elimination algorithms, for example: the Least Recently Used (LRU) elimination algorithm; the Least Frequently Used (LFU) elimination algorithm; the Most Recently Used (MRU) elimination algorithm; the Adaptive Replacement Cache (ARC) elimination algorithm; and so on.
However, in the process of implementing the present invention, the inventors found that the prior art has at least the following problem: existing elimination algorithms generally use a single-threaded processing mode with low processing efficiency, so that the limited cache space sometimes cannot be emptied in time after it is used up, and subsequent data cannot be stored promptly.
Disclosure of Invention
In view of the above, the present invention is proposed to provide a data eviction method based on multiple caches and a corresponding device, which overcome or at least partially solve the above problems.
According to one aspect of the invention, a data elimination method based on multiple caches is provided, which comprises the following steps: dividing a plurality of cache levels according to a preset level division rule, and respectively creating a matched thread pool for each cache level; each thread pool comprises a plurality of threads; respectively scanning each cache by using a plurality of threads in each thread pool, and determining the cache level of each cache according to the scanning result and a level division rule; and utilizing a plurality of threads in each thread pool to eliminate the data in the cache with the cache level matched with the thread pool.
Preferably, the method further comprises: respectively setting corresponding weight values for each thread pool, and setting the number of threads contained in each thread pool according to the weight values of each thread pool; the larger the weight value of the thread pool is, the larger the number of threads contained in the thread pool is.
Preferably, the step of setting the number of threads contained in each thread pool according to its weight value specifically includes: periodically obtaining the scanning results of each thread pool, and determining the number of caches corresponding to each cache level according to the scanning results; and adjusting the weight value of each thread pool according to the number of caches corresponding to each cache level, then adjusting the number of threads contained in each thread pool according to the adjusted weight values; wherein the more caches correspond to a cache level, the larger the weight value of the thread pool matched with that level.
Preferably, the step of setting the corresponding weight values for the thread pools further includes: setting a weight value corresponding to each thread pool according to the cache level matched with the thread pool; the higher the cache level matched with the thread pool is, the larger the weight value of the thread pool is.
Preferably, the preset level division rule comprises: dividing the cache levels according to the ratio of a cache's remaining storage space to its total storage space, wherein the larger the ratio of remaining storage space to total storage space, the higher the cache level.
Preferably, the step of eliminating the data in the cache whose cache level matches the thread pool by using a plurality of threads in each thread pool specifically includes: and calculating the temperature attribute value of each data in the cache according to the total write-in times of each data in the cache and a preset temperature attribute calculation rule, and determining the elimination sequence of each data in the cache according to the temperature attribute value.
Preferably, the thread pools run in parallel with each other.
According to another aspect of the present invention, there is provided a data elimination apparatus based on multiple caches, including: the system comprises a dividing module, a processing module and a processing module, wherein the dividing module is used for dividing a plurality of cache levels according to a preset level dividing rule and respectively establishing a matched thread pool for each cache level; each thread pool comprises a plurality of threads; the scanning module is used for respectively scanning each cache by utilizing a plurality of threads in each thread pool and determining the cache level of each cache according to the scanning result and the level division rule; and the elimination module is used for eliminating the data in the cache with the cache level matched with the thread pool by utilizing the threads in each thread pool.
Preferably, the apparatus further comprises: the weight module is used for setting corresponding weight values for the thread pools respectively and setting the number of threads contained in each thread pool according to the weight values of the thread pools; the larger the weight value of the thread pool is, the larger the number of threads contained in the thread pool is.
Preferably, the weighting module is specifically configured to: periodically obtain the scanning results of each thread pool, and determine the number of caches corresponding to each cache level according to the scanning results; and adjust the weight value of each thread pool according to the number of caches corresponding to each cache level, then adjust the number of threads contained in each thread pool according to the adjusted weight values; wherein the more caches correspond to a cache level, the larger the weight value of the thread pool matched with that level.
Preferably, the weighting module is further configured to: setting a weight value corresponding to each thread pool according to the cache level matched with the thread pool; the higher the cache level matched with the thread pool is, the larger the weight value of the thread pool is.
Preferably, the preset level division rule comprises: dividing the cache levels according to the ratio of a cache's remaining storage space to its total storage space, wherein the larger the ratio of remaining storage space to total storage space, the higher the cache level.
Preferably, the elimination module is specifically configured to: and calculating the temperature attribute value of each data in the cache according to the total write-in times of each data in the cache and a preset temperature attribute calculation rule, and determining the elimination sequence of each data in the cache according to the temperature attribute value.
Preferably, the thread pools run in parallel with each other.
According to the data elimination method and device based on multiple caches, the multiple cache levels can be divided according to a preset level division rule, and matched thread pools are respectively established for the cache levels; respectively scanning each cache by using a plurality of threads in each thread pool, and determining the cache level of each cache according to the scanning result and a level division rule; and utilizing a plurality of threads in each thread pool to eliminate the data in the cache with the cache level matched with the thread pool. Therefore, the cache is divided into a plurality of cache levels, and the corresponding thread pools are respectively established for the cache levels, so that the number of threads in each thread pool can be better adjusted according to the cache levels; and the data elimination processing efficiency is greatly improved by a parallel processing mode of a plurality of thread pools.
The foregoing description is only an overview of the technical solutions of the present invention. In order that the technical means of the present invention may be understood more clearly, and that the above and other objects, features and advantages of the present invention may become more readily apparent, embodiments of the invention are described below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a schematic flowchart of a data eviction method based on multiple caches according to a first embodiment of the present invention;
fig. 2 is a schematic flowchart of a data elimination method based on multiple caches according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a data elimination apparatus based on multiple caches according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a data elimination apparatus based on multiple caches according to a fourth embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Example one
Fig. 1 is a schematic flowchart illustrating a data eviction method based on multiple caches according to an embodiment of the present invention, as shown in the figure, the method includes:
step S110: and dividing a plurality of cache levels according to a preset level division rule, and respectively establishing a matched thread pool for each cache level.
The preset level division rule is used to divide the caches into different levels according to their usage, with caches at the same level having similar usage. The levels are defined manually by those skilled in the art. The embodiment of the present invention does not specifically limit the content of the preset level division rule; those skilled in the art can set it flexibly according to the actual situation.
In order to improve the processing efficiency of data elimination, matched thread pools are respectively created for each cache level, and each thread pool comprises a plurality of threads. And a plurality of threads in each thread pool are used for the data elimination processing of the cache of the corresponding level. Because the use conditions of the caches in different levels are different, in order to optimize the resource configuration as much as possible, the number of threads in the thread pools corresponding to different levels can also be different.
Thread pool technology is adopted because creating a dedicated thread for each cache would consume a large amount of system resources and would not be practical to operate.
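As a minimal sketch of step S110, assuming three hypothetical level names (HIGH, LOW, IDLE) and arbitrary pool sizes, the level-to-pool mapping might look like this in Python; the patent itself prescribes no language, API, or concrete numbers:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical level names and per-level thread counts; the patent leaves
# the number of levels and the pool sizes to the implementer.
CACHE_LEVELS = ("HIGH", "LOW", "IDLE")

def create_level_pools(threads_per_level):
    """Step S110: create one matched thread pool per cache level."""
    return {
        level: ThreadPoolExecutor(max_workers=threads_per_level[level])
        for level in CACHE_LEVELS
    }

pools = create_level_pools({"HIGH": 4, "LOW": 2, "IDLE": 1})
```

Each pool's threads would then scan and evict only the caches of the matching level, which is what keeps two threads from ever working on the same cache.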
Step S120: and respectively scanning each cache by using a plurality of threads in each thread pool, and determining the cache level of each cache according to the scanning result and the level division rule.
When multiple threads eliminate data from all the caches, if the matching relationship between threads and caches is not constrained, two threads may process the same cache at the same time and conflict with each other, causing a series of problems. The embodiment of the invention therefore divides all caches into levels and specifies the processing relationship between each thread and the caches of different levels, effectively avoiding such conflicts and streamlining the workflow.
Specifically, the multiple threads in each thread pool are used for scanning each cache respectively, and the cache level is determined for each scanned cache according to the scanning result and the level division rule for subsequent targeted processing.
Step S130: and utilizing a plurality of threads in each thread pool to eliminate the data in the cache with the cache level matched with the thread pool.
Specifically, according to the cache levels determined in step S120, data elimination processing is performed on the cache having the corresponding cache levels by using a plurality of threads in the thread pool matched with the respective cache levels. For the specific method of data elimination, the embodiment of the present invention does not specifically limit this, and those skilled in the art can flexibly set the method according to the actual situation.
Therefore, the data elimination method based on multiple caches provided by the embodiment of the invention can divide multiple cache levels according to the preset level division rule, and respectively create a matched thread pool for each cache level; respectively scanning each cache by using a plurality of threads in each thread pool, and determining the cache level of each cache according to the scanning result and a level division rule; and utilizing a plurality of threads in each thread pool to eliminate the data in the cache with the cache level matched with the thread pool. Therefore, the cache is divided into a plurality of cache levels, and the corresponding thread pools are respectively established for the cache levels, so that the number of threads in each thread pool can be better adjusted according to the cache levels; and the data elimination processing efficiency is greatly improved by a parallel processing mode of a plurality of thread pools.
Example two
Fig. 2 is a schematic flowchart illustrating a data eviction method based on multiple caches according to a second embodiment of the present invention, where as shown in the figure, the method includes:
step S210: and dividing a plurality of cache levels according to a preset level division rule, and respectively establishing a matched thread pool for each cache level.
The preset level division rule is used to divide the caches into different levels according to their usage, with caches at the same level having similar usage. In an embodiment of the present invention, the level division rule includes: dividing the cache levels according to the ratio of a cache's remaining storage space to its total storage space, where the larger the ratio of remaining storage space to total storage space, the higher the cache level, and the smaller the ratio, the lower the cache level. For example, assume the caches are divided into three levels: a HIGH level, a LOW level, and an IDLE level. A cache whose ratio of remaining storage space to total storage space is above 60% is determined to be HIGH level; a cache whose ratio is between 30% and 60% is determined to be LOW level; and a cache whose ratio is below 30% is determined to be IDLE level.
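The three-level example above can be sketched as a small classification function. The 60% and 30% thresholds follow the example in the text; the treatment of the exact boundary values is an assumption, since the text does not say which level a cache at exactly 30% or 60% falls into:

```python
def classify_cache(remaining, total):
    """Assign a cache level from the remaining/total storage-space ratio.

    Thresholds follow the example in the text (>60% HIGH, 30-60% LOW,
    <30% IDLE); boundary handling is an assumption.
    """
    ratio = remaining / total
    if ratio > 0.6:
        return "HIGH"
    if ratio >= 0.3:
        return "LOW"
    return "IDLE"
```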
In order to improve the processing efficiency of data elimination, matched thread pools are respectively created for each cache level, and each thread pool comprises a plurality of threads. And a plurality of threads in each thread pool are used for the data elimination processing of the cache of the corresponding level. Because the use conditions of the caches in different levels are different, in order to optimize the resource configuration as much as possible, the number of threads in the thread pools corresponding to different levels is different.
Step S220: and respectively setting corresponding weight values for each thread pool, and setting the number of threads contained in each thread pool according to the weight values of each thread pool.
One way to set the weight values is, for each thread pool, to set the weight value according to the cache level matched with that thread pool: the higher the matched cache level, the larger the weight value; conversely, the lower the matched cache level, the smaller the weight value. The larger a thread pool's weight value, the more threads it contains; the smaller the weight value, the fewer threads it contains. Thus, the number of threads contained in each thread pool is dynamically variable.
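One possible reading of the weight-to-thread-count rule, as a sketch: a fixed thread budget is split in proportion to pool weights, with a floor of one thread per pool. The proportional formula and the budget are assumptions; the patent fixes only the direction (larger weight, more threads):

```python
def pool_sizes(weights, total_threads):
    """Give each pool a share of the thread budget proportional to its
    weight, with at least one thread per pool. Proportional rounding is
    an assumption; only the monotone relationship comes from the text."""
    total_weight = sum(weights.values())
    return {
        pool: max(1, round(total_threads * w / total_weight))
        for pool, w in weights.items()
    }

sizes = pool_sizes({"HIGH": 3, "LOW": 2, "IDLE": 1}, total_threads=12)
```

Note that rounding can make the allocated counts differ slightly from the budget; a production scheme would reconcile the remainder.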
Step S230: and respectively scanning each cache by using a plurality of threads in each thread pool, and determining the cache level of each cache according to the scanning result and the level division rule.
When multiple threads eliminate data from all the caches, if the matching relationship between threads and caches is not constrained, two threads may process the same cache at the same time and conflict with each other, leading to a series of problems. The embodiment of the invention therefore divides all caches into levels and specifies the processing relationship between each thread and the caches of different levels, effectively avoiding such conflicts and streamlining the workflow.
Specifically, the multiple threads in each thread pool are used for scanning each cache respectively, and the cache level is determined for each scanned cache according to the scanning result and the level division rule for subsequent targeted processing.
Correspondingly, the method for setting the weight values of the thread pools may further comprise: periodically obtaining the scanning results of each thread pool, and determining the number of caches corresponding to each cache level according to the scanning results; then adjusting the weight value of each thread pool according to the number of caches corresponding to each cache level, and adjusting the number of threads contained in each thread pool according to the adjusted weight values. The more caches correspond to a cache level, the larger the weight value of the thread pool matched with that level; conversely, the fewer caches correspond to a cache level, the smaller the weight value of the matched thread pool. Determining the weight value of a thread pool, and hence the number of threads it contains, from the number of caches allows the thread count of each pool to match the processing load of the corresponding cache level accurately, using resources reasonably and saving cost.
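A minimal sketch of the periodic re-weighting step, under the assumption that the weight is simply proportional to the number of caches scanned at each level; the text fixes only the monotone relationship (more caches, larger weight), not a formula:

```python
def adjust_weights(caches_per_level):
    """Periodic re-weighting: the more caches scanned at a level, the
    larger the weight of the matched pool. The proportional rule is an
    assumption; the patent fixes only the direction."""
    total = sum(caches_per_level.values())
    return {level: n / total for level, n in caches_per_level.items()}

weights = adjust_weights({"HIGH": 6, "LOW": 3, "IDLE": 1})
```

In a running system these weights would then be fed back into the thread-count setting of each pool on every scan cycle.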
In other embodiments, the weight value setting methods of the thread pools provided in step S220 and step S230 may be adopted in combination, so as to set a more reasonable weight value of the thread pool. In addition, the weight value of the thread pool can be further determined according to various factors such as the type and the importance degree of the cache of the corresponding level.
Step S240: and utilizing a plurality of threads in each thread pool to eliminate the data in the cache with the cache level matched with the thread pool.
Specifically, according to the cache levels determined in the above steps, data elimination processing is performed on the cache with the corresponding cache levels by using a plurality of threads in the thread pool matched with each cache level. Each thread pool may only process the cache of one cache level, for example, in step S210, the cache is divided into three levels, i.e., a HIGH level, a LOW level and an IDLE level, so only three thread pools are needed to correspond to the cache. Specifically, thread pool 1 corresponds to a HIGH level, thread pool 2 corresponds to a LOW level, and thread pool 3 corresponds to an IDLE level, in which case all threads in thread pool 1 only process all caches in the HIGH level, all threads in thread pool 2 only process all caches in the LOW level, and all threads in thread pool 3 only process all caches in the IDLE level. Of course, each thread pool may also be used to handle multiple cache levels of cache when there are more cache levels. For example, when the cache level includes six levels, the processing may be performed by three thread pools, each thread pool processing two levels of cache.
In short, cache level division and the application of thread pool technology make cache scanning and data elimination more flexible. In addition, steps S230 and S240 may be executed repeatedly: for example, step S230 may be executed once every preset first time interval, and step S240 once every preset second time interval. The first and second time intervals may be equal or different, and each may be a fixed value or a dynamically changing one. For example, the first time interval may be adjusted dynamically according to the scanning results: when the scan finds many HIGH-level caches, the first time interval is reduced; when it finds few, the first time interval is increased. Furthermore, in each execution of step S240, the thread pools may evict from the caches of their corresponding levels on the same execution cycle or on different ones. For example, the thread pool handling HIGH-level caches may perform data eviction on a shorter execution cycle, to prevent the available space of HIGH-level caches from running short, while the thread pool handling IDLE-level caches may evict on a longer cycle to save system overhead. In summary, those skilled in the art can flexibly determine the number and timing of executions of steps S230 and S240 in various ways according to actual needs, and the invention is not limited in this respect. Thus, cache level division and thread pool technology give the data elimination operation more flexibility and controllability, meeting the requirements of a variety of scenarios.
In the embodiment of the present invention, the specific method for performing data elimination may be flexibly set by a person skilled in the art, and the present invention is not limited thereto. For example, the elimination may be performed according to various factors such as data writing time, data writing times, data temperature attributes, data types, and the like. In this embodiment, the data elimination method may be: and calculating the temperature attribute value of each data in the cache according to the total write-in times of each data in the cache and a preset temperature attribute calculation rule, and determining the elimination sequence of each data in the cache according to the temperature attribute value.
The preset temperature attribute calculation rule is a rule set by a person skilled in the art according to actual conditions to calculate the hot degree of each cache data. Here, the hot degree of the cache data may be determined by the total number of times the cache data is written, and/or the storage period of the cache data, and the like. Specifically, when the temperature attribute value of each cache data is calculated, the temperature attribute value of each cache data can be calculated independently according to the total write-in times of each cache data; the temperature attribute value of each cache data may be further calculated in combination with other factors. The specific calculation rule of the temperature attribute value is not limited, and the actual requirement of a user can be met.
After the temperature attribute values of the cache data are calculated, the cache data with the lowest temperature attribute values are sequentially eliminated according to the sequence from low to high of the calculated temperature attribute values, so that the effect of eliminating the data according to the hot degree of the cache data is realized, and the cache space is timely and effectively released.
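Taking the total write count itself as the temperature attribute value, the only factor the text names explicitly, the coldest-first eviction ordering can be sketched as:

```python
def eviction_order(write_counts):
    """Order cache keys for eviction, lowest temperature attribute value
    first. Here the temperature is simply the total write count; real
    rules may mix in storage period and other factors."""
    return sorted(write_counts, key=write_counts.get)
```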
In addition, those skilled in the art may practice various modifications and alterations to the arrangements described above. For example, when determining the temperature attribute from the total number of writes, instead of using the numerical value of the total write count directly, the write-count range may be divided in advance into a plurality of numerical intervals, a corresponding interval score may be set for each interval, and the temperature attribute value determined from the interval score. For example, when the total number of writes falls in the interval [0, 10), the interval score is 1; in [10, 50), the interval score is 5; and in [50, 100), the interval score is 10. Through interval scores, data whose total write count lies within a certain interval can be more flexibly identified as hot data. Moreover, to make data elimination more flexible, the preset temperature attribute calculation rule may further include: dividing the cache duration of a cache in advance into a plurality of cache periods, and setting a corresponding period weight value for each cache period; then, for each piece of cached data, determining its temperature attribute value from the period weight values of the cache periods in which it was written each time. The cache duration may be the length of time bounded by a first write time, corresponding to the earliest-written data in the cache, and a second write time, corresponding to the latest-written data.
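The interval-score example can be sketched as follows. The intervals follow the example in the text, treated as half-open so the boundaries do not overlap, and the behavior above the last interval is not specified, so clamping it is an assumption:

```python
def interval_score(total_writes):
    """Map a total write count to the interval scores of the example:
    [0, 10) -> 1, [10, 50) -> 5, [50, 100) -> 10."""
    if total_writes < 10:
        return 1
    if total_writes < 50:
        return 5
    # Counts of 100 or more are not covered by the example; clamping to
    # the top score is an assumption.
    return 10
```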
Moreover, the cache duration may also be a preset length of time. For example, suppose a cache is dedicated to storing the latest data written within three hours: once a piece of cached data has been in the cache for more than three hours it is automatically deleted, so the cache duration of that cache is 3 hours. When dividing the cache duration into a plurality of cache periods, the whole duration may be divided into several equal periods, or into several unequal ones. To facilitate calculating the temperature attribute of cached data by cache period, after this division a period data table may optionally be set for each cache period, each table recording the cached data written during the corresponding period. To determine the elimination order of cached data by cache period, this embodiment also sets corresponding period weight values for the divided cache periods, and there are likewise various ways to do so. Specifically, the period weight values may be set equal, so that the temperature attribute values are calculated with emphasis on how many times each piece of cached data occurs; alternatively, increasing (or decreasing) period weight values may be set in the chronological order of the cache periods, so that the temperature attribute values are calculated with emphasis on combining occurrence counts with write times. The weight value of each period is determined by those skilled in the art according to the actual situation, and the present invention is not limited in this respect.
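One hedged reading of the period-weight rule, assuming a 3-hour cache duration split into three equal 1-hour periods with increasing weights so that later writes count more; all concrete numbers here are illustrative, not from the patent:

```python
# Hypothetical: a 3-hour cache duration split into three 1-hour periods
# with increasing weights, so more recent writes contribute more.
PERIOD_WEIGHTS = [1, 2, 3]  # oldest period -> newest period
PERIOD_LEN = 3600           # seconds per cache period

def temperature(write_offsets):
    """Temperature attribute value: sum of the weight of the period each
    write falls into (offsets are seconds since the cache's first write)."""
    total = 0
    for t in write_offsets:
        idx = min(int(t // PERIOD_LEN), len(PERIOD_WEIGHTS) - 1)
        total += PERIOD_WEIGHTS[idx]
    return total
```

With equal weights this degenerates to counting writes; with increasing weights it blends write frequency with recency, matching the two options described above.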
In short, through this manner of setting cache periods and period weight values, a user can preferentially eliminate data in unimportant periods according to actual requirements, making the elimination scheme more flexible.
In addition, the preset level division rule may divide the caches according to storage space, or further according to other factors such as the type of data stored in the cache; in short, the present invention does not limit the manner of dividing cache levels or of setting thread pool weight values.
In the embodiment of the invention, the thread pools run in parallel, so that the data processing efficiency can be further improved.
Therefore, with the multiple-cache-based data elimination method provided by the second embodiment of the present invention, a plurality of cache levels can be divided according to a preset level division rule, and a matching thread pool can be created for each cache level; each cache is scanned by a plurality of threads in each thread pool, and the cache level of each cache is determined according to the scanning result and the level division rule; and the plurality of threads in each thread pool eliminate the data in the caches whose cache level matches that thread pool. This solves the problem of low single-thread processing efficiency in the prior art and realizes multi-threaded, parallel data elimination on cache sets with different weights, greatly improving data elimination efficiency while ensuring consistency; meanwhile, by improving the thread pools, the elimination priority of the cache sets can be guaranteed while operating in parallel.
EXAMPLE III
Fig. 3 is a schematic structural diagram illustrating a data elimination apparatus based on multiple caches according to a third embodiment of the present invention, as shown in the figure, the apparatus includes: a partitioning module 310, a scanning module 320, and an elimination module 330.
The dividing module 310 is configured to divide multiple cache levels according to a preset level division rule, and create a matched thread pool for each cache level.
The preset level division rule is used for dividing the caches into different levels according to their different usage conditions, with caches in the same level having similar usage conditions. The levels are defined by those skilled in the art. The third embodiment of the present invention does not specifically limit the content of the preset level division rule, and those skilled in the art can set it flexibly according to the actual situation.
In order to improve the processing efficiency of data elimination, the partitioning module 310 creates a matching thread pool for each cache level, where each thread pool includes a plurality of threads. And a plurality of threads in each thread pool are used for the data elimination processing of the cache of the corresponding level. Because the use conditions of the caches in different levels are different, in order to optimize the resource configuration as much as possible, the number of threads in the thread pools corresponding to different levels can also be different.
The thread pool technique is adopted because setting up a dedicated thread for each cache would consume a large amount of system resources and is not practically operable.
The scanning module 320 is configured to scan each cache by using multiple threads in each thread pool, and determine a cache level of each cache according to a scanning result and a level division rule.
When a plurality of threads eliminate data of all caches, if the matching relationship between the threads and the caches is not limited, the two threads may process the same cache at the same time, and at this time, the two threads may conflict with each other, resulting in a series of problems. Therefore, the embodiment of the invention effectively avoids the conflict situation and optimizes the working process by carrying out grade division on all the caches and specifying the corresponding processing relation between each thread and the caches in different grades.
Specifically, the scanning module 320 scans each cache by using a plurality of threads in each thread pool, and determines a cache level for each scanned cache according to a scanning result and a level division rule, so as to be used for subsequent processing with pertinence.
And the elimination module 330 is configured to eliminate, by using multiple threads in each thread pool, data in the cache whose cache level matches the thread pool.
Specifically, according to the cache levels determined by the scanning module 320, the elimination module 330 performs data elimination on the cache with corresponding cache levels by using a plurality of threads in the thread pool matched with each cache level. For the specific method of data elimination, the third embodiment of the present invention does not specifically limit this, and those skilled in the art can flexibly set the method according to the actual situation.
The specific structure and operation principle of each module described above may refer to the description of the corresponding part in the method embodiment, and are not described herein again.
Therefore, the data elimination device based on multiple caches provided by the embodiment of the invention can divide multiple cache levels according to a preset level division rule and respectively create a matched thread pool for each cache level; respectively scanning each cache by using a plurality of threads in each thread pool, and determining the cache level of each cache according to the scanning result and a level division rule; and utilizing a plurality of threads in each thread pool to eliminate the data in the cache with the cache level matched with the thread pool. Therefore, the cache is divided into a plurality of cache levels, and the corresponding thread pools are respectively established for the cache levels, so that the number of threads in each thread pool can be better adjusted according to the cache levels; and the data elimination processing efficiency is greatly improved by a parallel processing mode of a plurality of thread pools.
Example four
Fig. 4 is a schematic structural diagram illustrating a data elimination apparatus based on multiple caches according to a fourth embodiment of the present invention, as shown in the drawing, the apparatus includes: a partitioning module 410, a weighting module 420, a scanning module 430, and an elimination module 440.
The dividing module 410 is configured to divide multiple cache levels according to a preset level division rule, and create a matched thread pool for each cache level.
The preset level division rule is used for dividing the caches into different levels according to their different usage conditions, with caches in the same level having similar usage conditions. In an embodiment of the present invention, the level division rule includes: dividing the cache level according to the ratio between the remaining storage space and the total storage space of the cache, where the larger the ratio, the higher the cache level, and the smaller the ratio, the lower the cache level. For example, assume the cache levels are divided into three levels: HIGH, LOW, and IDLE. A cache whose ratio of remaining storage space to total storage space is above 60% is determined as the HIGH level; a cache whose ratio is between 30% and 60% is determined as the LOW level; and a cache whose ratio is below 30% is determined as the IDLE level.
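The example thresholds above can be sketched as a classification function. This is illustrative only: the function name is an assumption, and how the exact 30% and 60% boundaries are assigned is not specified in the embodiment, so the boundary handling here is an assumption.

```python
def cache_level(remaining, total):
    """Classify a cache by its remaining/total storage ratio.

    Example thresholds from the embodiment: above 60% -> HIGH,
    between 30% and 60% -> LOW, below 30% -> IDLE.
    Assigning the exact 30%/60% boundaries to LOW is an assumption.
    """
    ratio = remaining / total
    if ratio > 0.6:
        return "HIGH"
    if ratio >= 0.3:
        return "LOW"
    return "IDLE"

assert cache_level(70, 100) == "HIGH"   # mostly empty cache
assert cache_level(45, 100) == "LOW"
assert cache_level(10, 100) == "IDLE"   # nearly full cache
```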
In order to improve the processing efficiency of data elimination, the partitioning module 410 creates a matching thread pool for each cache level, where each thread pool includes a plurality of threads. And a plurality of threads in each thread pool are used for the data elimination processing of the cache of the corresponding level. Because the use conditions of the caches in different levels are different, in order to optimize the resource configuration as much as possible, the number of threads in the thread pools corresponding to different levels is different.
The weight module 420 is configured to set a corresponding weight value for each thread pool, and set the number of threads included in each thread pool according to the weight value of each thread pool.
For a specific setting method of the weight value, for each thread pool, the weight module 420 sets the weight value corresponding to the thread pool according to the cache level matched with the thread pool, wherein the higher the cache level matched with the thread pool is, the larger the weight value of the thread pool is; conversely, the lower the cache level matching the thread pool, the smaller the weight value of the thread pool. The larger the weight value of the thread pool is, the more the number of threads contained in the thread pool is; the smaller the weight value of the thread pool, the smaller the number of threads contained in the thread pool. Thus, the number of threads contained within each thread pool is dynamically variable.
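One way to realize "larger weight value, more threads" is to split a total thread budget across the pools in proportion to their weight values. This is a sketch under stated assumptions: the budget, the proportional scheme, the one-thread minimum, and the rounding strategy are all choices of this illustration, not part of the embodiment (note that rounding may make the counts not sum exactly to the budget).

```python
def threads_per_pool(weights, total_threads):
    """Allocate total_threads across pools in proportion to their weights.

    weights: {pool_name: weight_value}. Each pool gets at least one thread.
    """
    total_w = sum(weights.values())
    return {pool: max(1, round(total_threads * w / total_w))
            for pool, w in weights.items()}

# HIGH-level pool has the largest weight, hence the most threads.
counts = threads_per_pool({"HIGH": 3, "LOW": 2, "IDLE": 1}, 12)
assert counts == {"HIGH": 6, "LOW": 4, "IDLE": 2}
```

Recomputing this allocation whenever the weight values change makes the number of threads in each pool dynamically variable, as the embodiment describes.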
The scanning module 430 is configured to scan each cache by using multiple threads in each thread pool, and determine a cache level of each cache according to a scanning result and a level division rule.
When a plurality of threads perform elimination on all caches, if the matching relationship between threads and caches is not restricted, two threads may process the same cache at the same time, in which case the two threads may conflict with each other, leading to a series of problems. Therefore, the embodiment of the present invention effectively avoids such conflicts and optimizes the workflow by dividing all caches into levels and specifying the processing relationship between each thread and the caches of different levels.
Specifically, the scanning module 430 scans each cache by using a plurality of threads in each thread pool, and determines a cache level for each scanned cache according to a scanning result and a level division rule, so as to be used for subsequent processing with pertinence.
Correspondingly, the method for setting the weight value of the thread pool may further include: periodically obtaining the scanning result of each thread pool, and determining the cache quantity corresponding to each cache level according to the scanning result; and then, adjusting the weight value of each thread pool according to the cache quantity corresponding to each cache level, and adjusting the quantity of threads contained in each thread pool according to the adjusted weight value of each thread pool. The more the cache quantity corresponding to the cache level is, the larger the weight value of the thread pool matched with the cache level is; conversely, the smaller the cache number corresponding to a cache level, the smaller the weight value of the thread pool matching the cache level. The weighted value of the thread pool is determined according to the number of the caches, so that the number of the threads contained in each thread pool is determined, the number of the threads in each thread pool can accurately meet the processing operation of each cache in the corresponding cache level, resources are reasonably used, and the cost is saved.
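The periodic adjustment step can be sketched as deriving new weight values directly from the per-level cache counts found by the latest scan ("more caches at a level, larger weight for its pool"). The function name and the one-unit floor for empty levels are assumptions of this illustration.

```python
def adjust_weights(scan_counts):
    """Derive pool weight values from the latest scan result.

    scan_counts: {level: number of caches found at that level}.
    A level with more caches yields a larger weight for its matching pool;
    the floor of 1 for empty levels is an assumption.
    """
    return {level: max(1, count) for level, count in scan_counts.items()}

weights = adjust_weights({"HIGH": 8, "LOW": 3, "IDLE": 0})
assert weights["HIGH"] > weights["LOW"] > weights["IDLE"]
assert weights["IDLE"] == 1  # empty level still keeps a minimal weight
```

The resulting weight values would then drive a recomputation of each pool's thread count, so thread resources track the actual distribution of caches across levels.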
In other embodiments, the weight value setting method of the thread pool provided in the weight module 420 and the scanning module 430 may be adopted in combination, so as to set a more reasonable weight value of the thread pool. In addition, the weight value of the thread pool can be further determined according to various factors such as the type and the importance degree of the cache of the corresponding level.
The elimination module 440 is configured to eliminate, by using a plurality of threads in each thread pool, the data in the caches whose cache level matches the thread pool.
Specifically, according to the cache levels determined by the above modules, the elimination module 440 performs data elimination on the cache having the corresponding cache level by using a plurality of threads in the thread pool matched with each cache level. Each thread pool may only process the cache of one cache level, for example, the cache is divided into three levels, i.e., HIGH level, LOW level and IDLE level in the dividing module 410, so only three thread pools are needed to correspond to the cache. Specifically, thread pool 1 corresponds to a HIGH level, thread pool 2 corresponds to a LOW level, and thread pool 3 corresponds to an IDLE level, in which case all threads in thread pool 1 only process all caches in the HIGH level, all threads in thread pool 2 only process all caches in the LOW level, and all threads in thread pool 3 only process all caches in the IDLE level. Of course, each thread pool may also be used to handle multiple cache levels of cache when there are more cache levels. For example, when the cache level includes six levels, the processing may be performed by three thread pools, each thread pool processing two levels of cache.
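The pool-to-level assignment above can be sketched as a dispatch table: each pool only ever sees caches of its assigned level(s), so no two pools touch the same cache. The pool names and data shapes are assumptions of this illustration.

```python
# Each pool is assigned one or more cache levels (one each here, as in the
# three-level example; a pool could equally be given two levels of six).
POOL_LEVELS = {
    "pool1": {"HIGH"},
    "pool2": {"LOW"},
    "pool3": {"IDLE"},
}

def caches_for_pool(pool, caches_by_level):
    """Return the caches a given pool is allowed to process."""
    result = []
    for level in POOL_LEVELS[pool]:
        result.extend(caches_by_level.get(level, []))
    return result

caches = {"HIGH": ["c1", "c2"], "LOW": ["c3"], "IDLE": []}
assert sorted(caches_for_pool("pool1", caches)) == ["c1", "c2"]
assert caches_for_pool("pool2", caches) == ["c3"]
assert caches_for_pool("pool3", caches) == []
```

Because the level sets are disjoint, the threads of different pools can run in parallel without ever contending for the same cache.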
In short, cache scanning and data elimination can be realized more flexibly through cache level division and the application of the thread pool technique. In addition, the scanning module 430 and the elimination module 440 may both run repeatedly; for example, the scanning module 430 may run once every preset first time interval, and the elimination module 440 once every preset second time interval. The first and second time intervals may be equal or different, and may be fixed values or dynamically changing values. For example, the first time interval may be dynamically adjusted according to the scanning result: when the number of HIGH-level caches in the scanning result is large, the first time interval is reduced; when it is small, the first time interval is increased. In addition, during each run of the elimination module 440, the thread pools may eliminate the caches of their corresponding levels according to the same execution cycle or according to different execution cycles. For example, the thread pool that processes HIGH-level caches may perform data elimination on a shorter execution cycle, to prevent the available space of HIGH-level caches from running out; the thread pool that processes IDLE-level caches may perform data elimination on a longer execution cycle, to save system overhead. In summary, those skilled in the art can flexibly determine the run counts and run timing of the scanning module 430 and the elimination module 440 according to actual needs in various ways, and the invention is not limited in this respect.
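The dynamic adjustment of the first time interval can be sketched as follows. The thresholds, step size, and clamping bounds are all assumptions of this illustration; the embodiment only fixes the direction of adjustment (many HIGH-level caches shorten the interval, few lengthen it).

```python
def adjust_scan_interval(interval, high_count,
                         many=10, few=2, step=5, lo=5, hi=120):
    """Shorten the scan interval when many HIGH-level caches were found,
    lengthen it when few were found; clamp to [lo, hi] seconds.

    All numeric parameters are assumed values for illustration.
    """
    if high_count >= many:
        return max(lo, interval - step)
    if high_count <= few:
        return min(hi, interval + step)
    return interval

assert adjust_scan_interval(30, 12) == 25  # many HIGH caches: scan sooner
assert adjust_scan_interval(30, 1) == 35   # few HIGH caches: scan later
assert adjust_scan_interval(30, 5) == 30   # in between: unchanged
```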
Therefore, by the aid of the cache level division and the application of the thread pool technology, more flexibility and controllability are provided for data elimination operation, and requirements of various scenes can be met.
In the embodiment of the present invention, the specific method for performing data elimination by the elimination module 440 may be flexibly set by a person skilled in the art, which is not limited by the present invention. For example, the elimination may be performed according to various factors such as data writing time, data writing times, data temperature attributes, data types, and the like. In this embodiment, the data elimination method may be: and calculating the temperature attribute value of each data in the cache according to the total write-in times of each data in the cache and a preset temperature attribute calculation rule, and determining the elimination sequence of each data in the cache according to the temperature attribute value.
The preset temperature attribute calculation rule is a rule set by a person skilled in the art according to actual conditions to calculate the hot degree of each cache data. Here, the hot degree of the cache data may be determined by the total number of times the cache data is written, and/or the storage period of the cache data, and the like. Specifically, when the temperature attribute value of each cache data is calculated, the temperature attribute value of each cache data can be calculated independently according to the total write-in times of each cache data; the temperature attribute value of each cache data may be further calculated in combination with other factors. The specific calculation rule of the temperature attribute value is not limited, and the actual requirement of a user can be met.
After the temperature attribute values of the cache data are calculated, the cache data with the lowest temperature attribute values are sequentially eliminated according to the sequence from low to high of the calculated temperature attribute values, so that the effect of eliminating the data according to the hot degree of the cache data is realized, and the cache space is timely and effectively released.
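The elimination order described above can be sketched as a simple sort: data are evicted from the lowest temperature attribute value upward. The function name and data shape are assumptions; how many items are actually removed (e.g. until enough space is freed) is left to the implementation, as in the embodiment.

```python
def eviction_order(heat_values):
    """Return cache keys ordered from lowest to highest temperature
    attribute value, i.e. the order in which they would be eliminated.

    heat_values: {key: temperature attribute value}.
    """
    return sorted(heat_values, key=heat_values.get)

order = eviction_order({"a": 5, "b": 1, "c": 3})
assert order == ["b", "c", "a"]  # coldest data ("b") is eliminated first
```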
In addition, those skilled in the art may make various modifications and alterations to the arrangements described above. For example, when determining the temperature attribute according to the total number of writes, instead of determining it directly from the numerical value of the total number of writes, the total number of writes may be divided in advance into a plurality of numerical intervals, a corresponding interval score may be set for each numerical interval, and the temperature attribute value may be determined according to the interval score. For example, when the total number of writes falls within the interval [0, 10), the interval score is 1; when it falls within [10, 50), the interval score is 5; and when it falls within [50, 100), the interval score is 10. Through the interval scores, data whose total number of writes falls within a certain interval can be determined as hot data more flexibly. Moreover, to make the data elimination manner more flexible, the preset temperature attribute calculation rule may further include: dividing the cache duration corresponding to the cache into a plurality of cache periods in advance, and setting a corresponding period weight value for each cache period; and, for each piece of cached data, determining its temperature attribute value according to the period weight values of the cache periods in which it was written. The cache duration may be the length of time bounded by a first write time corresponding to the earliest-written data in the cache and a second write time corresponding to the latest-written data.
Moreover, the cache duration may also be a preset duration. For example, assuming a cache is dedicated to storing the latest cached data within three hours, then once the write time of any cached data is more than three hours old, that data is automatically deleted, and the cache duration of the cache is three hours. When the cache duration is divided into a plurality of cache periods, the entire cache duration may be divided into a plurality of equal cache periods, or into a plurality of unequal cache periods. To facilitate calculating the temperature attribute of cached data according to cache periods, after the above division a period data table may optionally be set for each cache period, where each period data table records the cached data written during the corresponding cache period. To determine the elimination order of cached data according to cache periods, in this embodiment a corresponding period weight value needs to be set for each divided cache period, and there are various ways of setting them. Specifically, the period weight values may be set equal, so that the temperature attribute values of the cached data are calculated in a manner that emphasizes the number of occurrences of each piece of cached data; alternatively, increasing (or decreasing) period weight values may be set in the chronological order of the cache periods, so that the temperature attribute values are calculated in a manner that combines the number of occurrences of the cached data with the write time. The period weight values are determined by those skilled in the art according to the actual situation, and the present invention is not limited in this respect.
In short, through this manner of setting cache periods and period weight values, a user can preferentially eliminate data in unimportant periods according to actual requirements, making the elimination scheme more flexible.
In addition, the preset level division rule may divide the caches according to storage space, or further according to other factors such as the type of data stored in the cache; in short, the present invention does not limit the manner of dividing cache levels or of setting thread pool weight values.
In the embodiment of the invention, the thread pools run in parallel, so that the data processing efficiency can be further improved.
The specific structure and operation principle of each module described above may refer to the description of the corresponding part in the method embodiment, and are not described herein again.
Therefore, with the multiple-cache-based data elimination apparatus provided by the fourth embodiment of the present invention, a plurality of cache levels can be divided according to a preset level division rule, and a matching thread pool can be created for each cache level; each cache is scanned by a plurality of threads in each thread pool, and the cache level of each cache is determined according to the scanning result and the level division rule; and the plurality of threads in each thread pool eliminate the data in the caches whose cache level matches that thread pool. This solves the problem of low single-thread processing efficiency in the prior art and realizes multi-threaded, parallel data elimination on cache sets with different weights, greatly improving data elimination efficiency while ensuring consistency; meanwhile, by improving the thread pools, the elimination priority of the cache sets can be guaranteed while operating in parallel.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of a multiple buffer based data elimination apparatus according to embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.

Claims (14)

1. A data elimination method based on multiple caches comprises the following steps:
dividing a plurality of cache levels according to a preset level division rule, and respectively creating a matched thread pool for each cache level; each thread pool comprises a plurality of threads;
respectively scanning each cache by using a plurality of threads in each thread pool, and determining the cache level of each cache according to the scanning result and the level division rule;
and utilizing a plurality of threads in each thread pool to eliminate the data in the cache with the cache level matched with the thread pool.
2. The method of claim 1, further comprising: respectively setting corresponding weight values for each thread pool, and setting the number of threads contained in each thread pool according to the weight values of each thread pool; the larger the weight value of the thread pool is, the larger the number of threads contained in the thread pool is.
3. The method according to claim 2, wherein the step of setting the corresponding weight values for the thread pools respectively and the step of setting the number of threads included in each thread pool according to the weight values of each thread pool specifically includes:
periodically obtaining the scanning result of each thread pool, and determining the cache quantity corresponding to each cache level according to the scanning result;
adjusting the weight value of each thread pool according to the cache quantity corresponding to each cache level, and adjusting the quantity of threads contained in each thread pool according to the adjusted weight value of each thread pool;
the more the cache number corresponding to the cache level is, the larger the weight value of the thread pool matched with the cache level is.
4. The method according to claim 2 or 3, wherein the step of setting the corresponding weight values for the respective thread pools further comprises:
setting a weight value corresponding to each thread pool according to the cache level matched with the thread pool; the higher the cache level matched with the thread pool is, the larger the weight value of the thread pool is.
5. The method according to any one of claims 1-3, wherein the preset ranking rule comprises: and dividing the cache level according to the ratio of the residual storage space to the total storage space of the cache, wherein the larger the ratio of the residual storage space to the total storage space is, the higher the cache level is.
6. The method according to any one of claims 1 to 3, wherein the step of using the plurality of threads in each thread pool to evict the data in the cache whose cache level matches the thread pool specifically comprises:
calculating the temperature attribute value of each data in the cache according to the total write-in times of each data in the cache and a preset temperature attribute calculation rule, and determining the elimination sequence of each data in the cache according to the temperature attribute value;
the temperature attribute calculation rule is used for calculating the hot degree of each cache data, and the hot degree of the cache data is determined by the total number of times of writing the cache data and/or the storage period of the cache data; and calculating the temperature attribute value of each cache data according to the total writing times of each cache data.
7. A method according to any one of claims 1 to 3 wherein the thread pools run in parallel with each other.
8. A data elimination device based on multiple caches, comprising:
a dividing module, configured to divide a plurality of cache levels according to a preset level division rule and to establish a matched thread pool for each cache level, wherein each thread pool comprises a plurality of threads;
a scanning module, configured to scan each cache using the plurality of threads in each thread pool, and to determine the cache level of each cache according to the scanning result and the level division rule; and
an elimination module, configured to eliminate, using the threads in each thread pool, the data in the caches whose cache level matches that thread pool.
9. The apparatus of claim 8, further comprising: a weight module, configured to set a corresponding weight value for each thread pool and to set the number of threads contained in each thread pool according to its weight value; wherein the larger the weight value of a thread pool, the greater the number of threads it contains.
10. The apparatus of claim 9, wherein the weighting module is specifically configured to:
periodically obtain the scanning results of the thread pools, and determine the number of caches corresponding to each cache level according to the scanning results;
adjust the weight value of each thread pool according to the number of caches corresponding to each cache level, and adjust the number of threads contained in each thread pool according to the adjusted weight value of that thread pool;
wherein the greater the number of caches corresponding to a cache level, the larger the weight value of the thread pool matched with that cache level.
11. The apparatus of claim 9 or 10, wherein the weight module is further configured to: set a weight value for each thread pool according to the cache level matched with that thread pool; wherein the higher the cache level matched with a thread pool, the larger its weight value.
12. The apparatus according to any one of claims 8-10, wherein the preset level division rule comprises: dividing cache levels according to the ratio of a cache's remaining storage space to its total storage space, wherein the larger the ratio of remaining storage space to total storage space, the higher the cache level.
13. The apparatus of any one of claims 8-10, wherein the elimination module is specifically configured to:
calculate a temperature attribute value for each piece of data in the cache according to the total number of times that data has been written and a preset temperature attribute calculation rule, and determine the elimination order of the data in the cache according to the temperature attribute values;
wherein the temperature attribute calculation rule is used to calculate the hotness of each piece of cached data; the hotness of a piece of cached data is determined by its total number of writes and/or its storage period, and the temperature attribute value of each piece of cached data is calculated from its total number of writes.
14. The apparatus of any of claims 8-10, wherein the thread pools run in parallel with each other.
CN201611246005.8A 2016-12-29 2016-12-29 Data elimination method and device based on multiple caches Active CN106649139B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201611246005.8A CN106649139B (en) 2016-12-29 2016-12-29 Data elimination method and device based on multiple caches
PCT/CN2017/115616 WO2018121242A1 (en) 2016-12-29 2017-12-12 Multiple buffer-based data elimination method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611246005.8A CN106649139B (en) 2016-12-29 2016-12-29 Data elimination method and device based on multiple caches

Publications (2)

Publication Number Publication Date
CN106649139A CN106649139A (en) 2017-05-10
CN106649139B true CN106649139B (en) 2020-01-10

Family

ID=58836170

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611246005.8A Active CN106649139B (en) 2016-12-29 2016-12-29 Data elimination method and device based on multiple caches

Country Status (2)

Country Link
CN (1) CN106649139B (en)
WO (1) WO2018121242A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106649139B (en) * 2016-12-29 2020-01-10 北京奇虎科技有限公司 Data elimination method and device based on multiple caches
CN107301215B (en) * 2017-06-09 2020-12-18 北京奇艺世纪科技有限公司 Search result caching method and device and search method and device
CN107608911B (en) * 2017-09-12 2020-09-22 苏州浪潮智能科技有限公司 Cache data flashing method, device, equipment and storage medium
CN110795632B (en) * 2019-10-30 2022-10-04 北京达佳互联信息技术有限公司 State query method and device and electronic equipment
CN111078585B (en) * 2019-11-29 2022-03-29 智器云南京信息科技有限公司 Memory cache management method, system, storage medium and electronic equipment
CN111552652B (en) * 2020-07-13 2020-11-17 深圳鲲云信息科技有限公司 Data processing method and device based on artificial intelligence chip and storage medium
CN115729767A (en) * 2021-08-30 2023-03-03 华为技术有限公司 Temperature detection method and device for memory

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1334938A (1998-12-08, 2002-02-06, Intel Corporation) * Buffer memory management in a system having multiple execution entities
CN101561783A (en) * 2008-04-14 2009-10-21 阿里巴巴集团控股有限公司 Method and device for Cache asynchronous elimination
CN101609432A (en) * 2009-07-13 2009-12-23 中国科学院计算技术研究所 Shared buffer memory management system and method
CN103279429A (en) * 2013-05-24 2013-09-04 浪潮电子信息产业股份有限公司 Application-aware distributed global shared cache partition method
CN103399856A (en) * 2013-07-01 2013-11-20 北京科东电力控制系统有限责任公司 Explosive type data caching and processing system for SCADA system and method thereof
CN104881492A (en) * 2015-06-12 2015-09-02 北京京东尚科信息技术有限公司 Cache fragmentation technology based data filtering method and device
CN105404595A (en) * 2014-09-10 2016-03-16 阿里巴巴集团控股有限公司 Cache management method and apparatus

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9822550D0 (en) * 1998-10-15 1998-12-09 British Telecomm Computer communications
US6990557B2 (en) * 2002-06-04 2006-01-24 Sandbridge Technologies, Inc. Method and apparatus for multithreaded cache with cache eviction based on thread identifier
US7200713B2 (en) * 2004-03-29 2007-04-03 Intel Corporation Method of implementing off-chip cache memory in dual-use SRAM memory for network processors
CN102541460B (en) * 2010-12-20 2014-10-08 中国移动通信集团公司 Multiple disc management method and equipment
CN103345452B (en) * 2013-07-18 2015-06-10 福建瑞聚信息技术股份有限公司 Data caching method in multiple buffer storages according to weight information
CN106649139B (en) * 2016-12-29 2020-01-10 北京奇虎科技有限公司 Data elimination method and device based on multiple caches


Also Published As

Publication number Publication date
WO2018121242A1 (en) 2018-07-05
CN106649139A (en) 2017-05-10

Similar Documents

Publication Publication Date Title
CN106649139B (en) Data elimination method and device based on multiple caches
US10762000B2 (en) Techniques to reduce read-modify-write overhead in hybrid DRAM/NAND memory
DE69816044T2 (en) TIMELINE BASED CACHE STORAGE AND REPLACEMENT TECHNIQUES
CN107526546B (en) Spark distributed computing data processing method and system
US10089014B2 (en) Memory-sampling based migrating page cache
US8898435B2 (en) Optimizing system throughput by automatically altering thread co-execution based on operating system directives
KR102490908B1 (en) Resource scheduling method and terminal device
US9304920B2 (en) System and method for providing cache-aware lightweight producer consumer queues
US20110238919A1 (en) Control of processor cache memory occupancy
CN107479860A (en) A kind of forecasting method of processor chips and instruction buffer
WO2023050712A1 (en) Task scheduling method for deep learning service, and related apparatus
CN108984130A (en) A kind of the caching read method and its device of distributed storage
CN103885728A (en) Magnetic disk cache system based on solid-state disk
US8522245B2 (en) Thread criticality predictor
US20130275649A1 (en) Access Optimization Method for Main Memory Database Based on Page-Coloring
KR20160055273A (en) Memory resource optimization method and apparatus
CN106681665B (en) Persistent storage method and device for cache data
CN101201933A (en) Plot treatment unit and method
Daly et al. Cache restoration for highly partitioned virtualized systems
CN105512051A (en) Self-learning type intelligent solid-state hard disk cache management method and device
US8732404B2 (en) Method and apparatus for managing buffer cache to perform page replacement by using reference time information regarding time at which page is referred to
CN113672169A (en) Data reading and writing method of stream processing system and stream processing system
CN106681837B (en) Data elimination method and device based on data table
CN106970998B (en) News data updating method and device
US9141543B1 (en) Systems and methods for writing data from a caching agent to main memory according to a pre-clean criterion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant