WO2013086689A1 - Method and apparatus for replacing a cache object - Google Patents

Method and apparatus for replacing a cache object

Info

Publication number
WO2013086689A1
WO2013086689A1 · PCT/CN2011/083896 · CN2011083896W
Authority
WO
WIPO (PCT)
Prior art keywords
cache object
cache
weight value
cached
accesses
Prior art date
Application number
PCT/CN2011/083896
Other languages
English (en)
French (fr)
Inventor
郑辉
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to PCT/CN2011/083896 priority Critical patent/WO2013086689A1/zh
Priority to CN201180003186.0A priority patent/CN103548005B/zh
Publication of WO2013086689A1 publication Critical patent/WO2013086689A1/zh

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/12Replacement control
    • G06F12/121Replacement control using replacement algorithms
    • G06F12/126Replacement control using replacement algorithms with special data handling, e.g. priority of data or instructions, handling errors or pinning

Definitions

  • The present invention relates to the field of information technology, and in particular to a method and apparatus for replacing a cached object. Background Art
  • A key issue in caching technology is how cached objects are replaced. Because the storage capacity of a cache is limited, a new cache object cannot be stored once the storage area is full, and some cache objects whose current storage value is low must then be evicted. The replacement method therefore largely determines metrics such as the cache hit rate.
  • Most existing caches rely on traditional replacement policies, the most widely used being the Least Recently Used (LRU) method and the Least Frequently Used (LFU) method.
  • The LRU method evicts from the cache the object that has gone unaccessed the longest. Its basic idea is the principle of locality: an object accessed recently is likely to be accessed again soon, and conversely, an object that has not been accessed for a long time is unlikely to be accessed for a fairly long time in the future. Because it relies on the most recent access time, LRU adapts well to changes in data access, but it considers neither the long-term access history of the data nor the relationships between objects, so the cache hit rate is low.
  • The LFU method evicts from the cache the object with the fewest accesses, so data used frequently in the past is kept in the cache preferentially. Its basic idea is that the object accessed the most times recently is the most likely to be accessed again. LFU uses the access frequency of the data, which benefits the overall optimized use of the data, but it may cause objects that were frequently accessed at the beginning to reside in the cache long after they stop being useful, which also hurts the hit rate. A technical solution is therefore needed that can replace cache objects and improve the cache hit rate.
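  • For reference, the conventional LRU baseline discussed above can be sketched in a few lines of Python. This is only an illustration of the traditional policy that the embodiments improve upon, not part of the disclosed method; the class and parameter names are illustrative.

```python
from collections import OrderedDict

class LRUCache:
    """Conventional LRU policy: evict the entry that has gone unaccessed the longest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # iteration order tracks recency, oldest first

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)          # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # evict the least recently used entry
        self.entries[key] = value
```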
  • Embodiments of the invention provide a method and an apparatus for replacing a cache object, which can replace a cache object when the number of cache objects stored in the cache, or the memory they occupy, reaches its maximum, and which improve the cache hit rate.
  • In one aspect, an embodiment of the present invention provides a method for replacing a cache object, the method including: receiving a first cache object that needs to be stored in a cache; when the number of at least one cache object held in the cache, or the memory it occupies, reaches a maximum value, obtaining a weight value of each cache object among the at least one cache object; determining, according to the weight value of each cache object, a second cache object having the smallest weight value among the at least one cache object; and replacing the second cache object held in the cache with the first cache object.
  • In another aspect, an embodiment of the present invention provides an apparatus for replacing a cache object, the apparatus including: a receiving module, configured to receive a first cache object that needs to be stored in a cache; an obtaining module, configured to obtain a weight value of each cache object among at least one cache object held in the cache when the number of the at least one cache object, or the memory it occupies, reaches a maximum value; a first determining module, configured to determine, according to the weight values obtained by the obtaining module, a second cache object having the smallest weight value among the at least one cache object; and a replacement module, configured to replace the second cache object determined by the first determining module and held in the cache with the first cache object received by the receiving module.
  • With the method and apparatus for replacing a cache object according to the embodiments, the cache object with the smallest weight value is replaced based on the weight value of each cache object. When the number of cache objects held in the cache, or the memory they occupy, reaches its maximum, the cache object least likely to be accessed is removed, which raises the cache hit rate and thereby improves the performance of the system.
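  • The described flow can be illustrated with a minimal Python sketch. The weight computation is abstracted behind a callable, and all names (WeightedCache, weight_of, max_objects) are hypothetical; this is a sketch of the flow stated above, not the patented implementation.

```python
class WeightedCache:
    """Sketch of the described flow: receive a first cache object and, when the cache
    is full, replace the stored object whose weight value is smallest."""

    def __init__(self, max_objects, weight_of):
        self.max_objects = max_objects
        self.weight_of = weight_of        # callable: key -> current weight value
        self.objects = {}

    def put(self, key, value):
        if key in self.objects or len(self.objects) < self.max_objects:
            self.objects[key] = value     # room left: store the new object directly
            return None
        # Cache full: obtain the weight of every stored object, determine the one
        # with the smallest weight (the "second cache object"), and replace it.
        victim = min(self.objects, key=self.weight_of)
        del self.objects[victim]
        self.objects[key] = value
        return victim
```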
  • FIG. 1 is a schematic flowchart of a method of replacing a cache object according to an embodiment of the present invention.
  • FIG. 2 is another schematic flowchart of a method of replacing a cache object according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram showing a hierarchical structure between cached objects according to an embodiment of the present invention.
  • FIG. 4 is a schematic flow diagram of a method of determining a weight value of a cached object, in accordance with an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of a method of determining the number of accesses to a cached object, in accordance with an embodiment of the present invention.
  • FIG. 6 is a schematic block diagram of an apparatus for replacing a cache object in accordance with an embodiment of the present invention.
  • FIG. 7 is another schematic block diagram of an apparatus for replacing a cache object according to an embodiment of the present invention.
  • FIG. 8 is a schematic block diagram of a second determining module according to an embodiment of the present invention.
  • FIG. 9 is a schematic block diagram of a second determining unit according to an embodiment of the present invention. Detailed Description
  • FIG. 1 shows a schematic flowchart of a method 100 for replacing a cache object according to an embodiment of the present invention. As shown in FIG. 1, the method 100 includes:
  • S110: Receive a first cache object that needs to be stored in the cache;
  • S120: When the number of at least one cache object held in the cache, or the memory it occupies, reaches a maximum value, obtain a weight value of each cache object among the at least one cache object;
  • S130: Determine, according to the weight value of each cache object, a second cache object having the smallest weight value among the at least one cache object;
  • S140: Replace the second cache object held in the cache with the first cache object.
  • In an application system such as a Web application, when a received first cache object needs to be stored in the cache and the number of at least one cache object already held in the cache, or the memory it occupies, has reached its maximum, the application system may obtain the weight value of each of those cache objects and, according to those weight values, determine the second cache object with the smallest weight value among them. The application system can then replace the second cache object held in the cache with the first cache object. In this way the cache evicts objects whose storage value is relatively low and stores the newly received cache object.
  • The method for replacing a cache object according to this embodiment therefore replaces, based on the weight value of each cache object, the cache object with the smallest weight value. When the number of cache objects held in the cache, or the memory they occupy, reaches its maximum, the object least likely to be accessed is removed, which raises the cache hit rate and thus improves system performance.
  • It should be understood that when neither the number of cache objects held in the cache nor the memory they occupy has reached its maximum, a new cache object that needs to be stored can simply be stored in the cache directly; of course, even in that case the cache object with the smallest weight value may still be determined according to an embodiment of the present invention, evicted, and the new cache object stored.
  • In an embodiment of the present invention, as shown in FIG. 2, the method 100 may further include:
  • S150: Determine, according to the hierarchical relationships and/or operation types of the cache objects among the at least one cache object, the weight value of each cache object among the at least one cache object.
  • Optionally, determining the weight value of each cache object among the at least one cache object includes: determining the weight value of each cache object periodically, and/or determining the weight value of each cache object when the number of the at least one cache object, or the memory it occupies, reaches a maximum value. Correspondingly, obtaining the weight value of each cache object includes: obtaining the periodically determined weight value of each cache object, or obtaining the weight value of each cache object determined when the number of the at least one cache object, or the memory it occupies, reaches a maximum value.
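  • The two triggers described above (periodic determination, and determination when the cache reaches its maximum) might be combined as in the following sketch; the names and the 10-minute default are assumptions for illustration, not part of the disclosure.

```python
import time

class WeightRefresher:
    """Recomputes the weight values either periodically or when the cache is full."""

    def __init__(self, compute_weights, period_seconds=600):
        self.compute_weights = compute_weights   # callable returning {key: weight}
        self.period_seconds = period_seconds
        self.weights = {}
        self._last_refresh = time.monotonic()

    def on_timer_tick(self):
        # Periodic determination of each cache object's weight value.
        if time.monotonic() - self._last_refresh >= self.period_seconds:
            self.weights = self.compute_weights()
            self._last_refresh = time.monotonic()

    def on_cache_full(self):
        # Determination triggered when the object count or occupied memory reaches its maximum.
        self.weights = self.compute_weights()
        return self.weights
```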
  • It should be understood that the application system may also determine the weight value of every cache object in the cache at the moment a cache object needs to be stored, so that the cache object with the smallest weight value can be determined and replaced. It should also be understood that the weight value of each cache object may be determined from other attributes of the cache object in order to evict objects whose storage value is lower.
  • In an application system, cached objects usually do not exist in isolation; they can be organized into a hierarchy. For example, the objects may form a tree-shaped hierarchy, or a one-to-many hierarchy, and in general an access to a lower-level object is likely to be accompanied by an access to the objects above it. For example, the cache objects U1, T1, T2, M1, R1, R2, R3, D1, D2, D3 and D4 may have the hierarchical structure shown in FIG. 3, where U1 may represent a user, T1 and T2 may represent table information, R1, R2 and R3 may represent the row information of a table, and D1, D2 and D3 may represent the data information within a row. When D1 is accessed, U1, T1 and R1 are also very likely to be accessed. By taking the hierarchical relationships between cache objects into account when determining their weight values, the embodiments can therefore raise the hit rate of upper-level cache objects and further improve the performance of the application system.
  • In addition, the probability that a cache object will be accessed in the future generally differs with the type of operation performed on it. For example, after an update operation on a cache object, a query operation on that object is likely to follow. By considering the operation type of each cache object when determining its weight value, the embodiments can therefore predict future accesses, raise the hit rate for cache objects, and further improve the performance of the application system.
  • Consequently, the method for replacing a cache object according to the embodiments determines the weight value of each cache object from its hierarchical relationships and/or operation types and replaces the cache object with the smallest weight value. When the number of cache objects held in the cache, or the memory they occupy, reaches its maximum, the object least likely to be accessed is removed, the cache hit rate is further improved, and system performance improves accordingly.
  • The application system may determine the weight value of each cache object from the hierarchical relationships alone: for example, when a lower-level cache object is accessed, the weight of its upper-level cache objects may be increased. The application system may also determine the weight values from the operation types alone: for example, different weight values may be set for a cache object for different types of operations on it.
  • Preferably, the application system determines the weight value of each cache object from both its hierarchical relationships and the operation types. A method 150 for determining the weight value of each cache object among the at least one cache object is described in detail below with reference to FIG. 4.
  • As shown in FIG. 4, the method 150 includes:
  • S151: Obtain an initial weight value of each cache object among the at least one cache object;
  • S152: Determine, according to the hierarchical relationship and operation type of each cache object, the number of accesses of each cache object within a period of time;
  • S153: Determine the weight value of each cache object according to its initial weight value and its number of accesses.
  • In S151, the application system obtains the initial weight value of each cache object among the at least one cache object.
  • The initial weight value may be set to a constant, for example 0 for every cache object; alternatively, the historical weight value of a cache object may be used as its initial weight value for the current period. For example, when the application system determines the weight values periodically, the weight value of each cache object determined in the Nth period (N being a natural number) may be set as that cache object's weight value for the (N+1)th period. The weight value thus also takes the historical access characteristics of the cache object into account, which allows the storage value of each cache object to be assessed more completely, further raising the cache hit rate and improving the performance of the application system.
  • In S152, the application system determines, according to the hierarchical relationship and operation type of each cache object, the number of accesses of each cache object within a period of time. For example, when a lower-level cache object is accessed, the application system may increase the access count of its upper-level cache objects; the application system may also set different access counts for a cache object for different types of operations on it.
  • Specifically, the method 152 for determining the number of accesses of a cache object may include: S1521: obtaining the hierarchical relationship contained in the key (KEY) or value (VALUE) of each cache object; S1522: determining, according to the hierarchical relationship and operation types of each cache object within a period of time, the number of accesses of that cache object within the period as the sum of the access counts corresponding to the operations on the cache object itself and the access counts corresponding to the operations on its lower-level cache objects held in the cache, where different operation types correspond to different access counts.
  • In S1521, the application system may obtain the hierarchical relationship of a cache object from its key (KEY) or value (VALUE). For example, in the KEY or VALUE of the cache object D1, "U1.T1.R1.D1" indicates that the upper-level cache object of D1 is R1, that the upper-level cache object of R1 is T1, and that the upper-level cache object of T1 is U1.
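  • The dotted-key convention described above can be expanded mechanically; the helper below is a hypothetical sketch, not an API of the disclosed apparatus.

```python
def upper_level_keys(key):
    """Expand a dotted cache key such as "U1.T1.R1.D1" into the chain of
    upper-level keys it implies: ['U1', 'U1.T1', 'U1.T1.R1']."""
    parts = key.split(".")
    return [".".join(parts[:i]) for i in range(1, len(parts))]

# upper_level_keys("U1.T1.R1.D1") -> ['U1', 'U1.T1', 'U1.T1.R1']
```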
  • In S1522, the application system determines, according to the hierarchical relationship and operation types of each cache object within the period, the number of accesses of that cache object within the period as the sum of the access counts corresponding to the operations performed on the cache object itself and the access counts corresponding to the operations performed on its lower-level cache objects held in the cache, where different operation types correspond to different access counts.
  • Optionally, the different access counts for different operation types are as follows: the access count corresponding to an add (PUT) or update (UPDATE) operation on a cache object or one of its lower-level cache objects is greater than the access count corresponding to a get (GET) or delete (DELETE) operation on that cache object or lower-level cache object; and the access count corresponding to a get operation on a cache object or one of its lower-level cache objects is greater than the access count corresponding to a delete operation on that cache object or lower-level cache object.
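  • One increment table satisfying the ordering above (add/update greater than get, and get greater than delete) looks as follows; the concrete values 2, 1 and 0 are those used in the worked example later in the description.

```python
# Access-count increments per operation type, ordered PUT/UPDATE > GET > DELETE.
ACCESS_INCREMENT = {
    "PUT": 2,
    "UPDATE": 2,
    "GET": 1,
    "DELETE": 0,
}
```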
  • For example, assuming the cache objects D1, R1, T1 and U1 have the hierarchical structure shown in FIG. 3, when a GET or PUT operation is performed on the cache object D1, the changes in the access counts of D1, R1, T1 and U1 are as shown in Table 1.
  • In S153, the application system determines the weight value of each cache object according to its initial weight value and its number of accesses.
  • Optionally, the weight value of each cache object is determined by the following equation (1):
  • C_i(T1) = W_i(T1) + C_i(T0)/F    (1)
  • where C_i(T1) is the weight value of the i-th cache object among the at least one cache object within the time period T1, i being a natural number with i >= 1; W_i(T1) is the number of accesses of the i-th cache object within the time period T1; C_i(T0) is the initial weight value of the i-th cache object; and F is a balance constant.
  • It should be understood that the balance constant F is used to adjust dynamically, according to the application scenario, the relative weight of the historical access characteristics of a cache object versus its number of accesses within the current period, so as to suit different scenarios. For example, when the scenario is better suited to evaluating the storage value of cache objects through their historical access characteristics, F can be set to a larger value; when the scenario is better suited to evaluating the storage value through the number of accesses within a period of time, F can be set to a relatively small value.
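  • Equation (1) is straightforward to compute; the sketch below simply restates the formula under hypothetical names, with F dividing the historical weight so that history and the current-period access count can be balanced against each other.

```python
def weight_value(accesses_t1, initial_weight_t0, balance_constant):
    """Equation (1): C_i(T1) = W_i(T1) + C_i(T0) / F, where W_i(T1) is the access
    count of object i in period T1, C_i(T0) its initial (historical) weight and
    F the balance constant scaling the historical contribution."""
    return accesses_t1 + initial_weight_t0 / balance_constant

# With F = 2, an object accessed 3 times this period whose previous weight was 4
# gets weight_value(3, 4, 2) == 5.0.
```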
  • It should be understood that, in the various embodiments of the present invention, the sequence numbers of the above processes do not imply an order of execution; the order in which the processes are executed should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present invention.
  • Therefore, the method for replacing a cache object according to the embodiments uses, for each cache object, a weight value that takes into account the hierarchical relationships, the operation types and the historical access characteristics. By replacing the cache object with the smallest weight value when the number of cache objects held in the cache, or the memory they occupy, reaches its maximum, it removes the object least likely to be accessed, that is, the object with the smallest storage value, thereby raising the cache hit rate and improving system performance.
  • In one example, the cache can hold only eight objects and the application system determines the weight value of every cache object periodically, for example once every 10 minutes, using the weight value determined in the previous period as the initial weight value; the initial weight value of the first period is 0 and F is set to 2. Within 20 minutes the objects are accessed in the following order: GET U1, GET U1.T1, GET U1.T2, UPDATE U1.T1.R1, PUT U1.T1.R1.D1, GET U1.T1.R2.D2, GET U1.T2.R3, UPDATE U1.T2, GET U1.T1.R2 and GET U1.T2.R3.D3. When a PUT or UPDATE operation is performed on a cache object, the corresponding access count increases by 2; when a GET operation is performed, it increases by 1; when a DELETE operation is performed, it increases by 0.
  • Within the first 10 minutes, the access counts of the cache objects are as shown in Table 2. After 10 minutes the application system periodically determines the weight value of each cache object; the resulting weight values are as shown in Table 3. Within the second 10 minutes the access counts are recomputed, as shown in Table 4. When the application system receives the cache object D3, which needs to be stored in the cache, the number of cache objects held in the cache has reached its maximum; the weight value of each cache object determined by the application system at that point is as shown in Table 5.
  • From Table 5, the cache object with the smallest weight value is D2, so the application system replaces the cache object D2 held in the cache with D3. Therefore, the method for replacing a cache object according to the embodiments, which uses weight values that take into account the hierarchical relationships, operation types and historical access characteristics of each cache object, removes the cache object least likely to be accessed, i.e. the one with the smallest storage value, when the number of cache objects held in the cache or the memory they occupy reaches its maximum, thereby raising the cache hit rate and improving system performance.
  • FIG. 6 shows a schematic block diagram of an apparatus 600 for replacing a cache object according to an embodiment of the present invention. As shown in FIG. 6, the apparatus 600 includes:
  • a receiving module 610, configured to receive a first cache object that needs to be stored in a cache;
  • an obtaining module 620, configured to obtain a weight value of each cache object among at least one cache object held in the cache when the number of the at least one cache object, or the memory it occupies, reaches a maximum value;
  • a first determining module 630, configured to determine, according to the weight value of each cache object obtained by the obtaining module 620, a second cache object having the smallest weight value among the at least one cache object;
  • a replacement module 640, configured to replace the second cache object determined by the first determining module 630 and held in the cache with the first cache object received by the receiving module 610.
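  • The module decomposition of FIG. 6 can be mirrored in code as below. This is only a sketch under assumed names: the receiving module 610 is modelled as the put() entry point, and the weight computation performed elsewhere in the apparatus is abstracted behind a callable.

```python
class ObtainingModule:
    """Obtains the current weight value of every cache object (module 620)."""
    def __init__(self, weight_of):
        self.weight_of = weight_of                 # callable: key -> weight value

    def obtain_weights(self, keys):
        return {key: self.weight_of(key) for key in keys}


class FirstDeterminingModule:
    """Determines the cache object with the smallest weight value (module 630)."""
    def determine_second_object(self, weights):
        return min(weights, key=weights.get)


class ReplacementModule:
    """Replaces the determined object with the newly received one (module 640)."""
    def replace(self, store, victim_key, new_key, new_value):
        del store[victim_key]
        store[new_key] = new_value


class Apparatus600:
    """Wiring of the modules shown in FIG. 6."""
    def __init__(self, max_objects, weight_of):
        self.max_objects = max_objects
        self.store = {}
        self.obtaining = ObtainingModule(weight_of)
        self.first_determining = FirstDeterminingModule()
        self.replacement = ReplacementModule()

    def put(self, key, value):                     # receiving module 610
        if key in self.store or len(self.store) < self.max_objects:
            self.store[key] = value
            return None
        weights = self.obtaining.obtain_weights(self.store.keys())
        victim = self.first_determining.determine_second_object(weights)
        self.replacement.replace(self.store, victim, key, value)
        return victim
```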
  • The apparatus for replacing a cache object according to this embodiment, based on the weight value of each cache object, replaces the cache object with the smallest weight value and can, when the number of cache objects held in the cache, or the memory they occupy, reaches its maximum, remove the object least likely to be accessed, raising the cache hit rate and thereby improving system performance.
  • Optionally, as shown in FIG. 7, the apparatus 600 further includes a second determining module 650, configured to determine the weight value of each cache object among the at least one cache object according to the hierarchical relationships and/or operation types of the cache objects among the at least one cache object.
  • Optionally, as shown in FIG. 8, the second determining module 650 includes:
  • an obtaining unit 651, configured to obtain an initial weight value of each cache object among the at least one cache object;
  • a first determining unit 652, configured to determine, according to the hierarchical relationship and operation type of each cache object, the number of accesses of each cache object within a period of time;
  • a second determining unit 653, configured to determine the weight value of each cache object according to the initial weight value obtained by the obtaining unit 651 and the number of accesses determined by the first determining unit 652.
  • Optionally, as shown in FIG. 9, the first determining unit 652 includes:
  • an obtaining sub-unit 6521, configured to obtain the hierarchical relationship contained in the key (KEY) or value (VALUE) of each cache object;
  • a determining sub-unit 6522, configured to determine, according to the hierarchical relationship and operation types of each cache object within a period of time, the number of accesses of that cache object within the period as the sum of the access counts corresponding to the operations on the cache object itself and the access counts corresponding to the operations on its lower-level cache objects held in the cache, where different operation types correspond to different access counts.
  • Optionally, the access count corresponding to an add or update operation on a cache object or one of its lower-level cache objects is greater than the access count corresponding to a get or delete operation on that cache object or lower-level cache object; and the access count corresponding to a get operation on a cache object or one of its lower-level cache objects is greater than the access count corresponding to a delete operation on that cache object or lower-level cache object.
  • Optionally, the second determining unit 653 is configured to determine the weight value of each cache object according to the following equation:
  • C_i(T1) = W_i(T1) + C_i(T0)/F,
  • where C_i(T1) is the weight value of the i-th cache object among the at least one cache object within the time period T1, i being a natural number with i >= 1; W_i(T1) is the number of accesses of the i-th cache object within the time period T1; C_i(T0) is the initial weight value of the i-th cache object; and F is a balance constant.
  • Optionally, the second determining module 650 is configured to determine the weight value of each cache object periodically, and/or to determine it when the number of the at least one cache object, or the memory it occupies, reaches a maximum value; and the obtaining module 620 is configured to obtain the weight value of each cache object determined periodically by the second determining module 650, or to obtain the weight value of each cache object determined by the second determining module 650 when the number of the at least one cache object, or the memory it occupies, reaches a maximum value.
  • It should be understood that the apparatus 600 for replacing a cache object may correspond to the application system in the embodiments of the present invention, and that the above and other operations and/or functions of the modules in the apparatus 600 implement the corresponding procedures of the methods in FIG. 1 to FIG. 5; for brevity, they are not described again here.
  • Therefore, the apparatus for replacing a cache object according to the embodiments, using weight values that take into account the hierarchical relationships, operation types and historical access characteristics of each cache object, replaces the cache object with the smallest weight value and removes the object least likely to be accessed, i.e. the one with the smallest storage value, when the number of cache objects held in the cache or the memory they occupy reaches its maximum, thereby raising the cache hit rate and improving system performance.
  • In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses or units, and may be electrical, mechanical or in other forms. Components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the embodiments of the present invention.
  • In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes a number of instructions that cause a computer device (which may be a personal computer, a server, a network device or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention discloses a method and an apparatus for replacing a cache object. The method includes: receiving a first cache object that needs to be stored in a cache; when the number of at least one cache object held in the cache, or the memory it occupies, reaches a maximum value, obtaining a weight value of each cache object among the at least one cache object; determining, according to the weight value of each cache object, a second cache object having the smallest weight value among the at least one cache object; and replacing the second cache object held in the cache with the first cache object. The apparatus includes a receiving module, an obtaining module, a first determining module and a replacement module. With the method and apparatus for replacing a cache object according to the embodiments of the present invention, based on the weight value of each cache object, replacing the cache object with the smallest weight value removes the cache object least likely to be accessed, raises the cache hit rate, and thereby improves system performance.

Description

Method and Apparatus for Replacing a Cache Object
Technical Field
The present invention relates to the field of information technology, and in particular to a method and apparatus for replacing a cache object in the field of information technology.
Background Art
We have now entered the Internet era of rapid Web 2.0 development, and Web applications of every kind are springing up like bamboo shoots after rain. Many Web applications choose lower-cost databases for their system of record in order to save database licensing fees while also improving application performance. Using caching technology in a Web application is a good way to address this problem: it keeps the data as close to the application as possible without greatly increasing disk overhead, and can thereby improve the performance of the Web application.
A key issue in caching technology is how cached objects are replaced. Because the storage capacity of a cache is limited, a new cache object cannot be stored once the storage area is full, and some cache objects whose current storage value is low must then be evicted. The quality of the replacement method therefore largely determines metrics such as the cache hit rate.
At present, most cache implementations are based on traditional replacement methods, the most widely used being the Least Recently Used (LRU) method and the Least Frequently Used (LFU) method. The LRU method evicts from the cache the object that has gone unaccessed the longest within a period of time. Its basic idea is the principle of locality: an object accessed recently is likely to be accessed again soon, and conversely, an object that has not been accessed for a long time is unlikely to be accessed for a fairly long time in the future. Relying on the most recent access time, LRU adapts well to changes in data access, but it considers neither the long-term access history of the data nor the relationships between objects, so the cache hit rate is low.
The LFU method evicts from the cache the object with the fewest accesses, so data used frequently in the past is kept in the cache preferentially. Its basic idea is that the object accessed the most times recently is the most likely to be accessed again. LFU uses the access frequency of the data, which benefits the overall optimized use of the data, but it may cause objects that were frequently accessed at the beginning to reside in the cache for a long time, hurting the cache hit rate. A technical solution is therefore needed that can replace cache objects and improve the cache hit rate.
Summary of the Invention
Embodiments of the present invention provide a method and an apparatus for replacing a cache object, which can replace a cache object when the number of cache objects held in the cache, or the memory they occupy, reaches a maximum value, and which improve the cache hit rate.
In one aspect, an embodiment of the present invention provides a method for replacing a cache object, the method including: receiving a first cache object that needs to be stored in a cache; when the number of at least one cache object held in the cache, or the memory it occupies, reaches a maximum value, obtaining a weight value of each cache object among the at least one cache object; determining, according to the weight value of each cache object, a second cache object having the smallest weight value among the at least one cache object; and replacing the second cache object held in the cache with the first cache object.
In another aspect, an embodiment of the present invention provides an apparatus for replacing a cache object, the apparatus including: a receiving module, configured to receive a first cache object that needs to be stored in a cache; an obtaining module, configured to obtain a weight value of each cache object among at least one cache object held in the cache when the number of the at least one cache object, or the memory it occupies, reaches a maximum value; a first determining module, configured to determine, according to the weight values obtained by the obtaining module, a second cache object having the smallest weight value among the at least one cache object; and a replacement module, configured to replace the second cache object determined by the first determining module and held in the cache with the first cache object received by the receiving module.
Based on the above technical solutions, the method and apparatus for replacing a cache object according to the embodiments of the present invention, by replacing the cache object with the smallest weight value based on the weight value of each cache object, can remove the cache object least likely to be accessed when the number of cache objects held in the cache, or the memory they occupy, reaches a maximum value, thereby raising the cache hit rate and improving system performance.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings required by the embodiments are briefly introduced below. Evidently, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a method for replacing a cache object according to an embodiment of the present invention.
FIG. 2 is another schematic flowchart of a method for replacing a cache object according to an embodiment of the present invention.
FIG. 3 is a schematic diagram of the hierarchical structure between cache objects according to an embodiment of the present invention.
FIG. 4 is a schematic flowchart of a method for determining the weight value of a cache object according to an embodiment of the present invention.
FIG. 5 is a schematic flowchart of a method for determining the number of accesses of a cache object according to an embodiment of the present invention.
FIG. 6 is a schematic block diagram of an apparatus for replacing a cache object according to an embodiment of the present invention.
FIG. 7 is another schematic block diagram of an apparatus for replacing a cache object according to an embodiment of the present invention.
FIG. 8 is a schematic block diagram of a second determining module according to an embodiment of the present invention.
FIG. 9 is a schematic block diagram of a second determining unit according to an embodiment of the present invention.
Detailed Description of Embodiments
The technical solutions of the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
FIG. 1 shows a schematic flowchart of a method 100 for replacing a cache object according to an embodiment of the present invention. As shown in FIG. 1, the method 100 includes:
S110: Receive a first cache object that needs to be stored in a cache;
S120: When the number of at least one cache object held in the cache, or the memory it occupies, reaches a maximum value, obtain a weight value of each cache object among the at least one cache object;
S130: Determine, according to the weight value of each cache object, a second cache object having the smallest weight value among the at least one cache object;
S140: Replace the second cache object held in the cache with the first cache object.
In an application system such as a Web application, when a received first cache object needs to be stored in the cache and the number of at least one cache object held in the cache, or the memory it occupies, has reached a maximum value, the application system may obtain the weight value of each of the at least one cache object and may determine, according to those weight values, the second cache object with the smallest weight value among them, so that the application system can replace the second cache object held in the cache with the first cache object. The cache can thus evict objects whose storage value is relatively low and store the newly received cache object.
Therefore, the method for replacing a cache object according to this embodiment of the present invention, based on the weight value of each cache object, replaces the cache object with the smallest weight value and, when the number of cache objects held in the cache, or the memory they occupy, reaches a maximum value, removes the cache object least likely to be accessed, raising the cache hit rate and thereby improving system performance.
It should be understood that when the number of the at least one cache object held in the cache has not reached the maximum value and the memory occupied by the at least one cache object has not reached the maximum value, a new cache object that needs to be stored can simply be stored in the cache directly; of course, even when the number of the at least one cache object or the memory it occupies has not reached the maximum value, the cache object with the smallest weight value may also be determined according to an embodiment of the present invention, that object may be evicted, and the new cache object stored.
In an embodiment of the present invention, as shown in FIG. 2, the method 100 may further include:
S150: Determine, according to the hierarchical relationships and/or operation types of the cache objects among the at least one cache object, the weight value of each cache object among the at least one cache object.
Optionally, determining the weight value of each cache object among the at least one cache object includes: determining the weight value of each cache object periodically, and/or determining the weight value of each cache object when the number of the at least one cache object, or the memory it occupies, reaches a maximum value; and obtaining the weight value of each cache object among the at least one cache object includes: obtaining the periodically determined weight value of each cache object, or obtaining the weight value of each cache object determined when the number of the at least one cache object, or the memory it occupies, reaches a maximum value.
It should be understood that, in this embodiment of the present invention, the application system may also determine the weight value of every cache object in the cache at the moment a cache object needs to be stored, so that the cache object with the smallest weight value can be determined and replaced. It should also be understood that the weight value of each cache object may be determined from other attributes of the cache object in order to evict objects whose storage value is lower.
In this embodiment of the present invention, the cached objects in an application system usually do not exist in isolation; they can be organized into a hierarchy. For example, the objects may form a tree-shaped hierarchy, or a one-to-many hierarchy, and in general an access to a lower-level object may be accompanied by an access to the upper-level objects. For example, the cache objects U1, T1, T2, M1, R1, R2, R3, D1, D2, D3 and D4 may have the hierarchical structure shown in FIG. 3, where U1 may represent a user, T1 and T2 may represent table information, R1, R2 and R3 may represent the row information of a table, and D1, D2 and D3 may represent the data information within a row. When D1 is accessed, U1, T1 and R1 are also very likely to be accessed. Therefore, by taking the hierarchical relationships between the cache objects in the cache into account when determining the weight values, this embodiment of the present invention can raise the hit rate of upper-level cache objects and thus further improve the performance of the application system.
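As a sketch of how such an access might be propagated upward, the following hypothetical helper adds the increment for an operation to the accessed object and to every upper-level object implied by a dotted key, provided that object is held in the cache; the increment values and all names are assumptions used for illustration only.

```python
ACCESS_INCREMENT = {"PUT": 2, "UPDATE": 2, "GET": 1, "DELETE": 0}  # illustrative values

def record_access(counts, cached_keys, key, operation):
    """Add the increment for `operation` to `key` and to every upper-level key
    implied by the dotted hierarchy, as long as that key is held in the cache."""
    increment = ACCESS_INCREMENT[operation]
    parts = key.split(".")
    for depth in range(1, len(parts) + 1):
        level_key = ".".join(parts[:depth])
        if level_key in cached_keys:
            counts[level_key] = counts.get(level_key, 0) + increment

# A GET on "U1.T1.R1.D1" then also counts toward R1, T1 and U1 if they are cached.
```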
In addition, in this embodiment of the present invention, the probability or likelihood that a cache object will be accessed in the future generally differs with the type of operation performed on it. For example, after an update operation on a cache object, a query operation on that cache object is likely to follow. Therefore, by considering the operation types of the cache objects in the cache when determining the weight values, this embodiment of the present invention can predict future accesses, thereby raising the hit rate for cache objects and further improving the performance of the application system.
Therefore, the method for replacing a cache object according to this embodiment determines the weight value of each cache object from the hierarchical relationships and/or operation types of the cache objects and replaces the cache object with the smallest weight value; when the number of cache objects held in the cache, or the memory they occupy, reaches a maximum value, the cache object least likely to be accessed is removed, the hit rate of the cache for cache objects is further improved, and system performance is further improved.
In this embodiment of the present invention, the application system may determine the weight value of each cache object from the hierarchical relationships of the cache objects; for example, when a lower-level cache object is accessed, the weight of the upper-level cache objects may be increased. The application system may also determine the weight value of each cache object from the operation types of the cache objects; for example, different weight values may be set for a cache object for different types of operations on it.
Preferably, the application system may determine the weight value of each cache object from both the hierarchical relationships and the operation types of the cache objects. A method 150 for determining the weight value of each cache object among the at least one cache object is described in detail below with reference to FIG. 4.
As shown in FIG. 4, the method 150 includes:
S151: Obtain an initial weight value of each cache object among the at least one cache object;
S152: Determine, according to the hierarchical relationship and operation type of each cache object, the number of accesses of each cache object within a period of time;
S153: Determine the weight value of each cache object according to the initial weight value of each cache object and the number of accesses of each cache object.
In S151, the application system obtains the initial weight value of each cache object among the at least one cache object. The initial weight value may be set to a constant, for example 0 for every cache object; alternatively, the historical weight value of a cache object may be used as its initial weight value for the current period of time. For example, when the application system determines the weight values of the cache objects periodically, the weight value of each cache object determined in the Nth period (N being a natural number) may be set as that cache object's weight value for the (N+1)th period. The weight value thus also takes the historical access characteristics of the cache object into account, allowing the storage value of each cache object to be determined more comprehensively, which further raises the hit rate for cache objects and improves the performance of the application system.
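A minimal sketch of this periodic carry-over, under assumed names: at the end of a period each object's weight is recomputed from its access count and its previous weight, the counts are cleared, and the new weights become the initial weights C_i(T0) of the next period.

```python
def close_period(counts, previous_weights, balance_constant):
    """Compute C_i(T1) = W_i(T1) + C_i(T0)/F for every cached object (objects not
    accessed this period get W_i(T1) = 0), then reset the counts so that the new
    weights serve as the initial weights of the following period."""
    keys = set(counts) | set(previous_weights)
    new_weights = {key: counts.get(key, 0) + previous_weights.get(key, 0) / balance_constant
                   for key in keys}
    counts.clear()
    return new_weights
```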
In S152, the application system determines, according to the hierarchical relationship and operation type of each cache object, the number of accesses of each cache object within a period of time.
For example, when a lower-level cache object is accessed, the application system may increase the access count of the upper-level cache objects; the application system may also set different access counts for a cache object for different types of operations on it.
Specifically, as shown in FIG. 5, the method 152 for determining the number of accesses of a cache object may include: S1521: obtaining the hierarchical relationship contained in the key (KEY) or value (VALUE) of each cache object; S1522: determining, according to the hierarchical relationship and operation types of each cache object within a period of time, the number of accesses of that cache object within the period as the sum of the access counts corresponding to the operations on the cache object itself and the access counts corresponding to the operations on its lower-level cache objects held in the cache, where different operation types correspond to different access counts.
In S1521, the application system may obtain the hierarchical relationship of a cache object from its key (KEY) or value (VALUE). For example, in the KEY or VALUE of the cache object D1, "U1.T1.R1.D1" indicates that the upper-level cache object of D1 is R1, that the upper-level cache object of R1 is T1, and that the upper-level cache object of T1 is U1.
In S1522, the application system determines, according to the hierarchical relationship and operation types of each cache object within the period of time, the number of accesses of that cache object within the period as the sum of the access counts corresponding to the operations performed on the cache object itself and the access counts corresponding to the operations performed on its lower-level cache objects held in the cache, where different operation types correspond to different access counts.
Optionally, the different access counts corresponding to different operation types are as follows: the access count corresponding to an add (PUT) operation or update (UPDATE) operation on the cache object or one of its lower-level cache objects is greater than the access count corresponding to a get (GET) operation or delete (DELETE) operation on the cache object or that lower-level cache object; and the access count corresponding to a get operation on the cache object or one of its lower-level cache objects is greater than the access count corresponding to a delete operation on the cache object or that lower-level cache object.
For example, assuming the cache objects D1, R1, T1 and U1 have the hierarchical structure shown in FIG. 3, when a GET operation or a PUT operation is performed on the cache object D1, the changes in the access counts of the cache objects D1, R1, T1 and U1 are as shown in Table 1.
In S153, the application system determines the weight value of each cache object according to the initial weight value of each cache object and the number of accesses of each cache object. Optionally, the weight value of each cache object is determined by the following equation (1):
C_i(T1) = W_i(T1) + C_i(T0)/F    (1)
where C_i(T1) is the weight value of the i-th cache object among the at least one cache object within the time period T1, i being a natural number with i >= 1; W_i(T1) is the number of accesses of the i-th cache object within the time period T1; C_i(T0) is the initial weight value of the i-th cache object; and F is a balance constant.
It should be understood that the balance constant F is used to adjust dynamically, according to the particular application scenario, the relative weight of the historical access characteristics of a cache object and of its number of accesses within a period of time, so as to suit different application scenarios. For example, when the application scenario is better suited to evaluating the storage value of cache objects through their historical access characteristics, F can be set to a larger value; when the application scenario is better suited to evaluating the storage value of cache objects through the number of accesses within a period of time, F can be set to a relatively small value.
It should be understood that, in the various embodiments of the present invention, the sequence numbers of the above processes do not imply an order of execution; the order in which the processes are executed should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present invention.
Therefore, the method for replacing a cache object according to the embodiments of the present invention uses, for each cache object, a weight value that takes into account the hierarchical relationships, the operation types and the historical access characteristics; by replacing the cache object with the smallest weight value when the number of cache objects held in the cache, or the memory they occupy, reaches a maximum value, it removes the cache object least likely to be accessed, that is, the cache object with the smallest storage value, thereby raising the cache hit rate and improving system performance.
The embodiments of the present invention are described in detail below with reference to the hierarchy of cache objects shown in FIG. 3, taking the weight determination of equation (1) above as an example.
Assume the cache can hold only eight objects and that the application system determines the weight value of each cache object periodically, for example once every 10 minutes, with the initial weight value being the weight value determined in the previous period; the initial weight value of the first period is 0 and F is set to 2. Within 20 minutes the objects are accessed in the following order: GET U1, GET U1.T1, GET U1.T2, UPDATE U1.T1.R1, PUT U1.T1.R1.D1, GET U1.T1.R2.D2, GET U1.T2.R3, UPDATE U1.T2, GET U1.T1.R2 and GET U1.T2.R3.D3. When a PUT (add) or UPDATE operation is performed on a cache object, the corresponding access count increases by 2; when a GET operation is performed on a cache object, the corresponding access count increases by 1; when a DELETE operation is performed on a cache object, the corresponding access count increases by 0.
Within the first 10 minutes, the access counts of the cache objects in the cache are as shown in Table 2.
After 10 minutes, the application system periodically determines the weight value of each cache object; the weight values of the cache objects in the cache at that point are as shown in Table 3 (reproduced here in part):

Table 3 (excerpt)
Cache object | Weight value
U1.T1.R1.D1 | C(T10) = 2
U1.T1.R1 | C(T10) = 4
U1.T1.R2.D2 | C(T10) = 1

Within the second 10 minutes, the access counts of the cache objects in the cache are recomputed, and are as shown in Table 4.
When the application system receives the cache object D3, which needs to be stored in the cache, the number of cache objects held in the cache has reached its maximum value. At this point, the weight value of each cache object determined by the application system is as shown in Table 5.
From Table 5 it can be determined that the cache object with the smallest weight value in the cache is D2, so the application system can replace the cache object D2 held in the cache with the cache object D3. Subsequently, in the next period, the weight values and access counts of the cache objects in the cache are as shown in Table 6.
Table 6
Cache object | Weight value | Number of accesses
U1 | C(Tm) = 8 | W(U1) = 1
U1.T1 | C(Tm) = 4 | W(U1.T1) = 0
U1.T2 | C(Tm) = 3.5 | W(U1.T2) = 1
U1.T1.R1.D1 | C(Tm) = 1 | W(U1.T1.R1.D1) = 0
U1.T1.R1 | C(Tm) = 2 | W(U1.T1.R1) = 0
U1.T2.R3 | C(Tm) = 1 | W(U1.T2.R3) = 1
U1.T1.R2 | C(Tm) = 1.5 | W(U1.T1.R2) = 0
U1.T2.R3.D3 | C(Tm) = 0 | W(U1.T2.R3.D3) = 1

Therefore, the method for replacing a cache object according to the embodiments of the present invention uses, for each cache object, a weight value that takes into account the hierarchical relationships, the operation types and the historical access characteristics; by replacing the cache object with the smallest weight value when the number of cache objects held in the cache, or the memory they occupy, reaches a maximum value, it removes the cache object least likely to be accessed, that is, the cache object with the smallest storage value, thereby raising the cache hit rate and improving system performance.
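The worked example can be replayed with a short script. The sketch below makes several assumptions beyond what the text states explicitly: all helper names are hypothetical, every level of a dotted key is cached and counted when the key is accessed, the first six accesses fall in the first 10-minute period (consistent with the rows of Table 3 reproduced above), and the increments are PUT/UPDATE = 2, GET = 1, DELETE = 0. Under those assumptions it reproduces the choice of D2 as the object replaced when D3 arrives.

```python
ACCESS_INCREMENT = {"PUT": 2, "UPDATE": 2, "GET": 1, "DELETE": 0}
F = 2            # balance constant
CAPACITY = 8     # the cache holds at most eight objects

store, counts, carried = set(), {}, {}   # cached keys, W(T) this period, C(T0)

def levels(key):
    parts = key.split(".")
    return [".".join(parts[:i]) for i in range(1, len(parts) + 1)]

def weight(key):                          # C(T1) = W(T1) + C(T0)/F, evaluated on demand
    return counts.get(key, 0) + carried.get(key, 0) / F

def access(op, key):
    for level in levels(key):
        if level not in store:
            if len(store) >= CAPACITY:    # cache full: replace the minimum-weight object
                victim = min(store, key=weight)
                store.discard(victim)
                counts.pop(victim, None)
                carried.pop(victim, None)
                print("replaced:", victim)          # prints: replaced: U1.T1.R2.D2
            store.add(level)
        counts[level] = counts.get(level, 0) + ACCESS_INCREMENT[op]

def close_period():
    global counts, carried
    carried = {k: counts.get(k, 0) + carried.get(k, 0) / F for k in store}
    counts = {}

# First 10 minutes (assumed to cover the first six accesses):
for op, key in [("GET", "U1"), ("GET", "U1.T1"), ("GET", "U1.T2"),
                ("UPDATE", "U1.T1.R1"), ("PUT", "U1.T1.R1.D1"), ("GET", "U1.T1.R2.D2")]:
    access(op, key)
close_period()   # carried now includes U1.T1.R1 -> 4.0 and U1.T1.R2.D2 -> 1.0, as in Table 3

# Second 10 minutes:
for op, key in [("GET", "U1.T2.R3"), ("UPDATE", "U1.T2"),
                ("GET", "U1.T1.R2"), ("GET", "U1.T2.R3.D3")]:
    access(op, key)
```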
The method for replacing a cache object according to the embodiments of the present invention has been described in detail above with reference to FIG. 1 to FIG. 5; the apparatus for replacing a cache object according to the embodiments of the present invention is described in detail below with reference to FIG. 6 to FIG. 9.
FIG. 6 shows a schematic block diagram of an apparatus 600 for replacing a cache object according to an embodiment of the present invention. As shown in FIG. 6, the apparatus 600 includes:
a receiving module 610, configured to receive a first cache object that needs to be stored in a cache;
an obtaining module 620, configured to obtain a weight value of each cache object among at least one cache object held in the cache when the number of the at least one cache object, or the memory it occupies, reaches a maximum value;
a first determining module 630, configured to determine, according to the weight value of each cache object obtained by the obtaining module 620, a second cache object having the smallest weight value among the at least one cache object; and
a replacement module 640, configured to replace the second cache object determined by the first determining module 630 and held in the cache with the first cache object received by the receiving module 610.
The apparatus for replacing a cache object according to this embodiment of the present invention, based on the weight value of each cache object, replaces the cache object with the smallest weight value and can, when the number of cache objects held in the cache, or the memory they occupy, reaches a maximum value, remove the cache object least likely to be accessed, raising the cache hit rate and thereby improving system performance.
In an embodiment of the present invention, optionally, as shown in FIG. 7, the apparatus 600 further includes: a second determining module 650, configured to determine the weight value of each cache object among the at least one cache object according to the hierarchical relationships and/or operation types of the cache objects among the at least one cache object.
Optionally, as shown in FIG. 8, the second determining module 650 includes:
an obtaining unit 651, configured to obtain an initial weight value of each cache object among the at least one cache object;
a first determining unit 652, configured to determine, according to the hierarchical relationship and operation type of each cache object, the number of accesses of each cache object within a period of time; and
a second determining unit 653, configured to determine the weight value of each cache object according to the initial weight value of each cache object obtained by the obtaining unit 651 and the number of accesses of each cache object determined by the first determining unit 652.
Optionally, as shown in FIG. 9, the first determining unit 652 includes:
an obtaining sub-unit 6521, configured to obtain the hierarchical relationship contained in the key (KEY) or value (VALUE) of each cache object; and
a determining sub-unit 6522, configured to determine, according to the hierarchical relationship and operation types of each cache object within a period of time, the number of accesses of that cache object within the period as the sum of the access counts corresponding to the operations on the cache object itself and the access counts corresponding to the operations on its lower-level cache objects held in the cache, where different operation types correspond to different access counts.
In an embodiment of the present invention, optionally, the access count corresponding to an add operation or update operation on the cache object or one of its lower-level cache objects is greater than the access count corresponding to a get operation or delete operation on the cache object or that lower-level cache object; and the access count corresponding to a get operation on the cache object or one of its lower-level cache objects is greater than the access count corresponding to a delete operation on the cache object or that lower-level cache object.
In an embodiment of the present invention, optionally, the second determining unit 653 is configured to determine the weight value of each cache object according to the following equation:
C_i(T1) = W_i(T1) + C_i(T0)/F,
where C_i(T1) is the weight value of the i-th cache object among the at least one cache object within the time period T1, i being a natural number with i >= 1; W_i(T1) is the number of accesses of the i-th cache object within the time period T1; C_i(T0) is the initial weight value of the i-th cache object; and F is a balance constant.
In an embodiment of the present invention, optionally, the second determining module 650 is configured to determine the weight value of each cache object periodically, and/or to determine the weight value of each cache object when the number of the at least one cache object, or the memory it occupies, reaches a maximum value; and the obtaining module 620 is configured to obtain the weight value of each cache object determined periodically by the second determining module 650, or to obtain the weight value of each cache object determined by the second determining module 650 when the number of the at least one cache object, or the memory it occupies, reaches a maximum value.
It should be understood that the apparatus 600 for replacing a cache object according to the embodiment of the present invention may correspond to the application system in the embodiments of the present invention, and that the above and other operations and/or functions of the modules in the apparatus 600 are respectively intended to implement the corresponding procedures of the methods in FIG. 1 to FIG. 5; for brevity, they are not described again here.
Therefore, the apparatus for replacing a cache object according to the embodiments of the present invention uses, for each cache object, a weight value that takes into account the hierarchical relationships, the operation types and the historical access characteristics; by replacing the cache object with the smallest weight value when the number of cache objects held in the cache, or the memory they occupy, reaches a maximum value, it removes the cache object least likely to be accessed, that is, the cache object with the smallest storage value, thereby raising the cache hit rate and improving system performance.
A person of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, in computer software, or in a combination of the two. To illustrate the interchangeability of hardware and software clearly, the composition and steps of the examples have been described above generally in terms of their functions. Whether these functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
A person skilled in the art will clearly understand that, for convenience and brevity of description, reference may be made, for the specific working processes of the systems, apparatuses and units described above, to the corresponding processes in the foregoing method embodiments; they are not described again here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for example, the division into units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses or units, and may be electrical, mechanical or in other forms. Components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present invention.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions that cause a computer device (which may be a personal computer, a server, a network device or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The foregoing is merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any modification or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims

1. A method for replacing a cache object, comprising:
receiving a first cache object that needs to be stored in a cache;
when the number of at least one cache object held in the cache, or the memory it occupies, reaches a maximum value, obtaining a weight value of each cache object among the at least one cache object;
determining, according to the weight value of each cache object, a second cache object having the smallest weight value among the at least one cache object; and
replacing the second cache object held in the cache with the first cache object.
2. The method according to claim 1, wherein the method further comprises: determining the weight value of each cache object among the at least one cache object according to the hierarchical relationships and/or operation types of the cache objects among the at least one cache object.
3. The method according to claim 2, wherein determining the weight value of each cache object among the at least one cache object comprises:
obtaining an initial weight value of each cache object among the at least one cache object;
determining, according to the hierarchical relationship and operation type of each cache object, the number of accesses of each cache object within a period of time; and
determining the weight value of each cache object according to the initial weight value of each cache object and the number of accesses of each cache object.
4. The method according to claim 3, wherein determining the number of accesses of each cache object within a period of time comprises:
obtaining the hierarchical relationship contained in the key (KEY) or value (VALUE) of each cache object; and determining, according to the hierarchical relationship and operation types of each cache object within the period of time, the number of accesses of that cache object within the period as the sum of the access counts corresponding to the operations on the cache object itself and the access counts corresponding to the operations on its lower-level cache objects held in the cache, wherein different operation types correspond to different access counts.
5. The method according to claim 4, wherein the different access counts corresponding to different operation types comprise:
the access count corresponding to an add operation or update operation on the cache object or one of its lower-level cache objects being greater than the access count corresponding to a get operation or delete operation on the cache object or that lower-level cache object; and the access count corresponding to a get operation on the cache object or one of its lower-level cache objects being greater than the access count corresponding to a delete operation on the cache object or that lower-level cache object.
6. The method according to claim 3, wherein the weight value of each cache object is determined by the following equation:
C_i(T1) = W_i(T1) + C_i(T0)/F,
where C_i(T1) is the weight value of the i-th cache object among the at least one cache object within the time period T1, i being a natural number with i >= 1; W_i(T1) is the number of accesses of the i-th cache object within the time period T1; C_i(T0) is the initial weight value of the i-th cache object; and F is a balance constant.
7. The method according to any one of claims 2 to 6, wherein determining the weight value of each cache object among the at least one cache object comprises:
determining the weight value of each cache object periodically, and/or determining the weight value of each cache object when the number of the at least one cache object, or the memory it occupies, reaches a maximum value; and
obtaining the weight value of each cache object among the at least one cache object comprises:
obtaining the periodically determined weight value of each cache object, or obtaining the weight value of each cache object determined when the number of the at least one cache object, or the memory it occupies, reaches a maximum value.
8. An apparatus for replacing a cache object, comprising:
a receiving module, configured to receive a first cache object that needs to be stored in a cache;
an obtaining module, configured to obtain a weight value of each cache object among at least one cache object held in the cache when the number of the at least one cache object, or the memory it occupies, reaches a maximum value;
a first determining module, configured to determine, according to the weight value of each cache object obtained by the obtaining module, a second cache object having the smallest weight value among the at least one cache object; and
a replacement module, configured to replace the second cache object determined by the first determining module and held in the cache with the first cache object received by the receiving module.
9. The apparatus according to claim 8, wherein the apparatus further comprises: a second determining module, configured to determine the weight value of each cache object among the at least one cache object according to the hierarchical relationships and/or operation types of the cache objects among the at least one cache object.
10. The apparatus according to claim 9, wherein the second determining module comprises: an obtaining unit, configured to obtain an initial weight value of each cache object among the at least one cache object; a first determining unit, configured to determine, according to the hierarchical relationship and operation type of each cache object, the number of accesses of each cache object within a period of time; and
a second determining unit, configured to determine the weight value of each cache object according to the initial weight value of each cache object obtained by the obtaining unit and the number of accesses of each cache object determined by the first determining unit.
11. The apparatus according to claim 10, wherein the first determining unit comprises:
an obtaining sub-unit, configured to obtain the hierarchical relationship contained in the key (KEY) or value (VALUE) of each cache object; and
a determining sub-unit, configured to determine, according to the hierarchical relationship and operation types of each cache object within a period of time, the number of accesses of that cache object within the period as the sum of the access counts corresponding to the operations on the cache object itself and the access counts corresponding to the operations on its lower-level cache objects held in the cache, wherein different operation types correspond to different access counts.
12. The apparatus according to claim 10, wherein the second determining unit is configured to determine the weight value of each cache object according to the following equation:
C_i(T1) = W_i(T1) + C_i(T0)/F,
where C_i(T1) is the weight value of the i-th cache object among the at least one cache object within the time period T1, i being a natural number with i >= 1; W_i(T1) is the number of accesses of the i-th cache object within the time period T1; C_i(T0) is the initial weight value of the i-th cache object; and F is a balance constant.
13. The apparatus according to any one of claims 9 to 12, wherein the second determining module is configured to determine the weight value of each cache object periodically, and/or to determine the weight value of each cache object when the number of the at least one cache object, or the memory it occupies, reaches a maximum value; and the obtaining module is configured to obtain the weight value of each cache object determined periodically by the second determining module, or to obtain the weight value of each cache object determined by the second determining module when the number of the at least one cache object, or the memory it occupies, reaches a maximum value.
PCT/CN2011/083896 2011-12-13 2011-12-13 Method and apparatus for replacing a cache object WO2013086689A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2011/083896 WO2013086689A1 (zh) 2011-12-13 2011-12-13 Method and apparatus for replacing a cache object
CN201180003186.0A CN103548005B (zh) 2011-12-13 2011-12-13 Method and apparatus for replacing a cache object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2011/083896 WO2013086689A1 (zh) 2011-12-13 2011-12-13 Method and apparatus for replacing a cache object

Publications (1)

Publication Number Publication Date
WO2013086689A1 true WO2013086689A1 (zh) 2013-06-20

Family

ID=48611801

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2011/083896 WO2013086689A1 (zh) 2011-12-13 2011-12-13 Method and apparatus for replacing a cache object

Country Status (2)

Country Link
CN (1) CN103548005B (zh)
WO (1) WO2013086689A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220188A (zh) * 2017-05-31 2017-09-29 莫倩 一种自适应缓冲块替换方法
CN109101580A (zh) * 2018-07-20 2018-12-28 北京北信源信息安全技术有限公司 一种基于Redis的热点数据缓存方法和装置

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105094686B (zh) * 2014-05-09 2018-04-10 华为技术有限公司 数据缓存方法、缓存和计算机系统
CN104333594B (zh) * 2014-11-05 2018-07-27 无锡成电科大科技发展有限公司 基于光传输网络的云平台资源采集加速方法和系统
US11687244B2 (en) * 2019-10-24 2023-06-27 Micron Technology, Inc. Quality of service for memory devices using weighted memory access operation types
  • CN111552652B (zh) * 2020-07-13 2020-11-17 深圳鲲云信息科技有限公司 Data processing method and apparatus based on an artificial intelligence chip, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6266742B1 (en) * 1997-10-27 2001-07-24 International Business Machines Corporation Algorithm for cache replacement
US20030110357A1 (en) * 2001-11-14 2003-06-12 Nguyen Phillip V. Weight based disk cache replacement method
US20060282620A1 (en) * 2005-06-14 2006-12-14 Sujatha Kashyap Weighted LRU for associative caches
US20070198779A1 (en) * 2005-12-16 2007-08-23 Qufei Wang System and method for cache management
CN101184209A (zh) * 2007-12-12 2008-05-21 中山大学 一种数字家庭中vod客户端代理缓存服务器
CN101232464A (zh) * 2008-02-28 2008-07-30 清华大学 基于时间权参数的p2p实时流媒体缓存替换方法

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1322430C (zh) * 2003-11-24 2007-06-20 佛山市顺德区顺达电脑厂有限公司 高速缓存代换方法
CN100419715C (zh) * 2005-11-25 2008-09-17 华为技术有限公司 嵌入式处理器系统及其数据操作方法
CN101551781B (zh) * 2009-05-22 2011-03-30 中国科学院计算技术研究所 一种p2p视频点播系统中的硬盘缓存替换方法


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220188A (zh) * 2017-05-31 2017-09-29 莫倩 一种自适应缓冲块替换方法
CN107220188B (zh) * 2017-05-31 2020-10-27 中山大学 一种自适应缓冲块替换方法
CN109101580A (zh) * 2018-07-20 2018-12-28 北京北信源信息安全技术有限公司 一种基于Redis的热点数据缓存方法和装置

Also Published As

Publication number Publication date
CN103548005B (zh) 2016-03-30
CN103548005A (zh) 2014-01-29

Similar Documents

Publication Publication Date Title
TWI684099B (zh) Profiling cache replacement
EP3507694B1 (en) Message cache management for message queues
WO2013086689A1 (zh) Method and apparatus for replacing a cache object
JP2018077868A (ja) System, method for executing a database query, and computer-readable recording medium
US20110167239A1 (en) Methods and apparatuses for usage based allocation block size tuning
US9501419B2 (en) Apparatus, systems, and methods for providing a memory efficient cache
WO2015112249A1 (en) Methods for combining access history and sequentiality for intelligent prefetching and devices thereof
CN103366016A (zh) HDFS-based centralized electronic file storage and optimization method
CN107888687B (zh) Proxy client storage acceleration method and system based on a distributed storage system
JP2021039585A (ja) Method for controlling a connection with a client or a server
CN105915619A (zh) High-performance in-memory caching method for cyberspace information services considering access popularity
EP3274844B1 (en) Hierarchical cost based caching for online media
US7529891B2 (en) Balanced prefetching exploiting structured data
US10067678B1 (en) Probabilistic eviction of partial aggregation results from constrained results storage
EP3207457B1 (en) Hierarchical caching for online media
CN112506875B (zh) File storage method, related apparatus and file storage system
Bžoch et al. Towards caching algorithm applicable to mobile clients
WO2017049488A1 (zh) Cache management method and apparatus
Bžoch et al. Design and implementation of a caching algorithm applicable to mobile clients
CN105740167B (zh) Method and system for file system cache eviction
CN112445794A (zh) Caching method for a big data system
CN113254366B (zh) Server-side tile cache replacement method based on a spatio-temporal aging model
US20230342300A1 (en) Data eviction method and apparatus, cache node, and cache system
Tracey et al. CacheL-A Cache Algorithm using Leases for Node Data in the Internet of Things
CN118035292A (zh) Data storage system, and cache management method, apparatus, device and system

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 201180003186.0

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11877440

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11877440

Country of ref document: EP

Kind code of ref document: A1