CN103168293A - Method and system for inserting cache blocks - Google Patents

Method and system for inserting cache blocks

Info

Publication number
CN103168293A
CN103168293A
Authority
CN
China
Prior art keywords
cache blocks
cache
segment
estimation
probationary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011800498929A
Other languages
Chinese (zh)
Other versions
CN103168293B (en)
Inventor
G. F. Swart
D. Vengerov
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oracle International Corp
Original Assignee
Oracle International Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oracle International Corp filed Critical Oracle International Corp
Publication of CN103168293A publication Critical patent/CN103168293A/en
Application granted granted Critical
Publication of CN103168293B publication Critical patent/CN103168293B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0888 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches using selective caching, e.g. bypass
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/12 Replacement control
    • G06F 12/121 Replacement control using replacement algorithms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0866 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F 12/0871 Allocation or management of cache space
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/12 Replacement control
    • G06F 12/121 Replacement control using replacement algorithms
    • G06F 12/123 Replacement control using replacement algorithms with age lists, e.g. queue, most recently used [MRU] list or least recently used [LRU] list
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/46 Caching storage objects of specific type in disk cache
    • G06F 2212/461 Sector or disk block
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/46 Caching storage objects of specific type in disk cache
    • G06F 2212/465 Structured object, e.g. database record

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A method of inserting cache blocks into a cache queue includes detecting a first cache miss for the cache queue, identifying a storage block receiving an access in response to the cache miss, calculating a first estimated cache miss cost for a first storage container that includes the storage block, calculating an insertion probability for the first storage container based on a mathematical formula of the first estimated cache miss cost, randomly selecting an insertion probability number from a uniform distribution, and inserting, in response to the insertion probability exceeding the insertion probability number, a new cache block corresponding to the storage block into the cache queue.
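As a rough illustration of the claimed flow, the following Python sketch walks through the miss-triggered insertion decision. It is not the patented implementation: the scaling of the insertion probability by the largest candidate cost and the helper insert_new_block are assumptions made for the example.

```python
import random

def maybe_insert(cache_queue, storage_block, est_miss_cost, max_miss_cost):
    # est_miss_cost: estimated cache miss cost C_j of the storage
    # container holding storage_block; max_miss_cost: the largest C_j
    # among candidate containers (this scaling choice is an assumption).
    insertion_probability = est_miss_cost / max_miss_cost
    drawn = random.uniform(0.0, 1.0)  # the "insertion probability number"
    if insertion_probability > drawn:
        cache_queue.insert_new_block(storage_block)  # hypothetical helper
        return True
    return False
```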

Description

Method and system for deleting cache blocks
Background
As computer processing power grows, the demands of technical users and applications grow as well. For many industries, this can cause rapid shifts in resource priorities. For example, in many relational database applications, the relative importance and cost of nonvolatile storage capacity have fallen rapidly. For system administrators, concern about storage capacity has given way to concern about performance and reliability, because the transaction latency of storage technology limits the potential benefit of faster and more powerful microprocessors.
A similar phenomenon exists in the semiconductor industry. The theoretical gains in processing power and computing speed that follow Moore's Law are broadly limited by non-CPU bottlenecks such as memory access speed. While researchers explore the next generation of memory technologies, intermediate techniques such as improved caching methods help to bridge the gap. By exploiting multiple types of caching devices across a range of different applications, the access-latency bottleneck can be reduced for some applications.
Research into cache design and cache algorithms has led to increasingly complex caches and cache management devices. From CPU caches to disk caches to database caches, caching systems are becoming more and more important to overall system performance at every layer of the computing spectrum. Cache algorithms mainly handle the insertion, deletion, and modification of cached data. The relevance and prioritization of the cached data govern the effective operation of the cache. By keeping frequently used data items in the cache and removing those unlikely to be used in the future, traditional cache algorithms aim to improve cache hit rate and performance.
Summary of the invention
In general, in one aspect, the invention relates to a method for deleting a cache block from a cache queue. The method includes: detecting, by a processor, a first cache miss for the cache queue; identifying a new cache block in the cache queue storing a value of a storage block; calculating, by the processor, an estimated cache miss cost for a storage container that includes the storage block; calculating, by the processor, a deletion probability for the storage container based on a mathematical formula of the estimated cache miss cost; randomly selecting a probability number from a uniform distribution, where the deletion probability exceeds the probability number; and removing, in response to the deletion probability exceeding the probability number, the new cache block from the cache queue.
In general, in one aspect, the invention relates to a computer-readable storage medium storing instructions for deleting a cache block from a cache queue. The instructions include functionality to: detect a first cache miss for the cache queue; identify a new cache block in the cache queue storing a value of a storage block; calculate an estimated cache miss cost for a storage container that includes the storage block; calculate a deletion probability for the storage container based on a mathematical formula of the estimated cache miss cost; randomly select a probability number from a uniform distribution, where the deletion probability exceeds the probability number; and remove, in response to the deletion probability exceeding the probability number, the new cache block from the cache queue.
In general, in one aspect, the invention relates to a system for deleting cache blocks. The system includes a cache queue having a probationary segment at the end of the cache queue. The probationary segment includes a new cache block storing a value of a storage block, where the new cache block has accumulated zero cache hits since being inserted into the cache queue. The cache queue also has a protected segment adjacent to the probationary segment. The system further includes a cache manager executing on a processor with functionality to: detect a first cache miss for the cache queue; identify the new cache block in the cache queue; calculate an estimated cache miss cost for a storage container that includes the storage block; calculate a deletion probability for the storage container based on a mathematical formula of the estimated cache miss cost; randomly select a probability number from a uniform distribution, where the deletion probability exceeds the probability number; and remove, in response to the deletion probability exceeding the probability number, the new cache block from the cache queue.
Other aspects of the invention will be apparent from the following description and the appended claims.
Brief description of the drawings
Figures 1A and 1B show schematic block diagrams of a system in accordance with one or more embodiments of the invention.
Figures 2, 3, 4A, and 4B show flowcharts in accordance with one or more embodiments of the invention.
Figures 5A, 5B, and 5C show examples of a cache queue in accordance with one or more embodiments of the invention.
Figure 6 shows a computer system in accordance with one or more embodiments of the invention.
Detailed description
Specific embodiments of the invention will now be described in detail with reference to the accompanying figures. For consistency, like elements in the various figures are denoted by like reference numerals.
In the following detailed description of embodiments of the invention, numerous specific details are set forth in order to provide a more thorough understanding of the invention. However, it will be apparent to one of ordinary skill in the art that the invention may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.
In general, embodiments of the invention provide a method and system for managing a cache. Specifically, embodiments of the invention assign an estimated cache miss cost to one or more cache blocks in a cache queue. The estimated cache miss cost is an estimate of the cost of a cache miss for a cache block. For a new cache block, the estimated cache miss cost is based on the storage container, on a storage device, that corresponds to the cache block. The estimated cache miss cost is used to probabilistically select a cache block for removal from the cache queue.
For purposes of this disclosure, a cache operation may refer to any access to and/or modification of a cache. Examples of cache operations include, but are not limited to: read operations, write operations, write-back operations, any type of cache hit, any type of cache miss, and/or any number of other cache operations. In one or more embodiments of the invention, a cache operation may refer to any cache request that causes one or more cache blocks in the cache queue to be recycled. Recycling may refer to any backward movement of one or more cache blocks within the cache queue. A cache operation on, and/or an access to, a storage container may refer to an access to a storage block within that storage container.
For purposes of this disclosure, a cache miss may refer to a cache operation that reads or writes a storage block that is not present in the cache (and/or the associated cache queue, if applicable). Thus, in one or more embodiments of the invention, the storage block is read directly from the corresponding storage device and subsequently inserted into the cache. In one or more embodiments of the invention, a cache miss may refer to a write miss, a read miss, and/or some combination of read and write requests requiring access to a storage block that is not currently stored in the cache.
For purposes of this disclosure, a cache hit may refer to a cache operation that accesses a storage block currently stored in the cache (and the associated cache queue, if applicable). In accordance with embodiments of the invention, a cache hit may include a modification of the cache queue corresponding to the cache. A "read" cache hit may refer to a request to read the contents of a storage unit in the cache. A "write" cache hit may refer to a request to write a value from a storage unit in the cache to the corresponding storage block on the storage device. In one or more embodiments of the invention, a write operation may be performed by writing the value to the storage unit without modifying the storage block (e.g., in a write-back cache). The value may then be written back to the storage block at some predetermined time or after an event trigger.
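A minimal sketch of the write-back behavior just described, with invented names: a write hit updates only the cached value and marks it dirty; a later flush propagates it to the storage device.

```python
class StorageUnit:
    def __init__(self, block_address, value):
        self.block_address = block_address  # which storage block is cached
        self.value = value
        self.dirty = False  # True when the cached value differs from the block

def write_hit(unit, new_value):
    # Write-back: update only the cached value now; do not touch the device.
    unit.value = new_value
    unit.dirty = True

def flush(unit, storage_device):
    # Later, on a timer or event trigger, propagate the value back.
    if unit.dirty:
        storage_device.write(unit.block_address, unit.value)
        unit.dirty = False
```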
For purposes of this disclosure, an old cache block is a cache block that has received at least one cache hit since being inserted into the cache queue. A new cache block is a cache block that has not received a cache hit since being inserted into the cache queue.
Figure 1A shows a system (199) in accordance with one embodiment of the invention. As shown in Figure 1A, the system (199) has multiple components, including a cache (100), a storage device (110), a set of storage containers (e.g., storage container 1 (120), storage container Z (130)), a set of storage blocks (e.g., storage block A (122), storage block B (124), storage block C (126), storage block D (128), storage block E (132), storage block F (134), storage block G (136), storage block H (138)), a cache manager (140), a cache queue (142), and a management module (144). The components of the system (199) may be located on the same device (e.g., a server, mainframe, desktop personal computer (PC), laptop, personal digital assistant (PDA), telephone, mobile phone, kiosk, cable box, or any other device) or may be located on separate devices connected by a network (e.g., the Internet) with wired and/or wireless segments. Those skilled in the art will appreciate that there may be more than one of each separate component running on a device, as well as any combination of these components within a given embodiment of the invention.
In one or more embodiments of the invention, the cache (100) is a memory module with one or more storage units. Each storage unit (not shown) in the cache (100) may store one or more values of a referenced storage block (e.g., storage block A (122) through storage block H (138)) stored on the storage device (110). A storage unit is called "dirty" if its value differs from the value of the referenced storage block. Accordingly, a storage block is said to be "cached" and/or "stored" in the cache (100) if a storage unit in the cache (100) references it and/or a cache block referencing the storage block is stored in the corresponding cache queue.
The cache (100) may include a cache address space with one or more cache addresses for each storage unit. Thus, in one or more embodiments of the invention, each storage unit may have a cache address, a reference field storing the address of a storage block, and/or a value field storing a value of the storage block. The cache (100) may be part of a memory device and/or of one or more storage devices. In one or more embodiments of the invention, the cache may be implemented as an abstract intermediate layer between a storage device and one or more applications and/or devices (hereinafter "requestors"). In this way, values requested from the storage device can be stored in the cache (100) as an intermediary and provided to the requestor. Later accesses by the requestor to the values in the storage block can then be performed without accessing the storage device.
Continuing with Figure 1A, the cache (100) may consist of part of the memory on one or more hard disk drives and/or any other form of volatile and/or nonvolatile memory. One example of a cache stored in volatile memory is a designated portion or amount of the random access memory (RAM) in a computer system. The designated RAM may be used to store one or more values from a hard disk drive or other storage device for faster access. In one or more embodiments of the invention, the cache (100) is a distributed cache spread across one or more physical storage devices connected by a network. The memory devices may be dynamically modified so that the size of the cache grows or shrinks as one or more storage units are added and/or removed.
In one or more embodiments of the invention, the cache (100) has a lower access latency (e.g., read and/or write latency) than the one or more corresponding storage devices. The number of storage units in the cache may also be smaller than the number of storage blocks on the storage device. Thus, in one or more embodiments of the invention, storage units in the cache are deleted, inserted, and/or modified according to one or more cache algorithms. A cache algorithm may include synchronous and/or asynchronous steps for any operation involving the cache. Synchronous operations may coincide with one or more periodic events and/or instructions (e.g., tied to a system clock), while asynchronous operations may refer to operations performed on demand and/or outside a synchronization time window.
Examples of the cache (100) include, but are not limited to: a CPU cache, disk cache, database cache, victim cache, web cache, write-back cache, no-write cache, database buffer pool, DRAM cache, flash cache, storage cache (for example, as part of the Oracle® Exadata storage server product line; Oracle® is a registered trademark of Oracle Corporation of Redwood City, California), operating system buffer pool, and/or an object cache corresponding to a mid-tier cache. In one example, the cache (100) resides on a hard disk drive and is used by a virtual memory management module to store a page table of virtual addresses corresponding to physical addresses on one or more other storage devices (e.g., RAM). In this example, the storage units are virtual addresses storing values from one or more storage blocks of real (i.e., physical) memory.
In another example, the cache (100) is a data structure residing on a storage device. Thus, the cache (100) may itself be a virtual cache designed to store content from a physical or virtual storage device based on one or more cache algorithms. In another example, a CPU cache is a memory device mounted on a motherboard (i.e., a printed circuit board) and operatively connected to a central processing unit (CPU) by a bus. In this example, the cache is implemented using static random access memory (SRAM) on a memory chip.
In another example, an Enterprise Resource Planning (ERP) system using a company database is implemented with a three-tier architecture. The company database is implemented on a host (i.e., the data tier) separate from the ERP application. To improve database performance by reducing network traffic, a lightweight database is installed on the application-tier host and configured to cache data from the company database. Thus, the cache is implemented on a set of local hard disk drives storing the lightweight database on the application-tier host. In this example, a storage unit may correspond to a database table, row, or field.
In one or more embodiments of the invention, the storage device (110) is a memory device. Examples of storage devices include, but are not limited to: a hard disk drive, random access memory (RAM), a flash memory module, a tape drive, an optical drive, and/or any combination of storage devices. In one or more embodiments of the invention, the storage device (110) includes storage blocks (e.g., storage block A (122) through storage block H (138)).
Continuing with Figure 1A, in one or more embodiments of the invention, a storage block may be any logical and/or physical segment of memory within the storage device. Each storage block may be addressable, meaning it can be accessed based on some predefined addressing method or mechanism. Examples of storage blocks include, but are not limited to: a bit, a memory byte, a memory word, a register, a slab, a database record, a database field, a Hypertext Markup Language (HTML) page, a database reference, a file, and/or any addressable segment of data within the storage device. Depending on the embodiment, storage block sizes within a storage device may be fixed (i.e., uniform across all storage blocks) or variable (e.g., depending on the contents of the storage block).
In one or more embodiments of the invention, storage blocks may be grouped into storage containers (e.g., storage container 1 (120), storage container Z (130)). In one or more embodiments of the invention, a storage container may refer to a logical and/or physical grouping of storage blocks within the storage device. Examples of storage containers include, but are not limited to: a file, a database record, a database field, an HTML page, a database reference, a memory byte, a memory word, a register, a slab, and/or any grouping of one or more storage blocks within a storage device. In one example, the storage container is a file residing on a hard disk drive, and the storage blocks are memory bytes on that hard disk drive. In another example, the storage container is a database row and its corresponding storage blocks are the database fields within that row. As these examples show, a storage container may be the set of any and all storage blocks on a particular hardware device, any and all storage blocks in a particular table or database, or any other logical or physical grouping.
Depending on the embodiment, storage container sizes within a storage device may be fixed (i.e., uniform across all storage containers) or variable (e.g., depending on the contents of the storage container). Further, the number of storage blocks within a storage container may be fixed or variable. In one or more embodiments of the invention, storage containers are addressable. Data may be stored in one or more storage blocks across one or more storage containers based on any storage mechanism and/or algorithm. Thus, the storage blocks within a storage container may correspond to the same logical unit and/or be associated according to their use within a software program. The contents of the storage device (110) may be usable by any type of computer and/or device capable of reading the storage device (110), and may be segmented or stored in any logical order.
In one or more embodiments of the invention, the cache manager (140) includes functionality to manage the cache (100) and the cache queue (142). The cache manager (140) may control the insertion, deletion, and/or modification of cache blocks in the cache queue (142). The cache manager (140) may also perform operations such as insertion, deletion, and/or modification of storage units in the cache (100), and/or request that such operations be performed by another entity (e.g., a cache controller). In one or more embodiments of the invention, the cache manager (140) may implement one or more cache algorithms, such as the methods disclosed herein. Examples of cache algorithms include, but are not limited to: least recently used (LRU), most recently used (MRU), and/or any combination of one or more methods describing steps for insertion into, deletion from, and/or modification of the cache and/or the cache queue (142).
Continuing with Figure 1A, in one or more embodiments of the invention, the cache manager (140) may correspond to hardware, software, or a combination thereof. For example, the cache manager (140) may be implemented as part of a database buffer pool manager (e.g., a database kernel) managing DRAM and flash caches, as a memory management unit operatively connected to a hardware cache, as part of a storage server managing a storage cache (e.g., as part of the Oracle EXADATA storage server product line), as part of a ZFS appliance cache manager (readzilla) managing both a DRAM cache and a flash cache, as part of an operating system managing an operating system buffer pool, and/or as part of an object cache managing which objects are to be maintained in a mid-tier cache. The foregoing components are merely examples of components in which the cache manager (140) may be implemented. Other hardware or software components may be used without departing from the scope of the invention.
In one or more embodiments of the invention, the cache manager (140) controls synchronization of cache operations with one or more periodic events (e.g., a system clock). The cache manager (140) may also control periodic and/or asynchronous operations, such as write-backs to the storage device (110), based on one or more periodic events and/or triggers (e.g., lazy writes). The cache manager (140) may be an intermediary between the storage device (110) and a requesting entity. Examples of requesting entities include, but are not limited to: a software program, a CPU, and/or any entity capable of requesting data from and/or writing data to the storage device (110). Thus, the cache manager (140) may receive instructions (e.g., read and/or write instructions) from a requesting entity, and may retrieve data from and/or write data to the cache (100), the cache queue (142), and/or the storage device.
Figure 1B shows a cache queue (142) in accordance with one embodiment of the invention. As shown in Figure 1B, the system has multiple components, including a number of cache blocks (e.g., cache block 1 (156), cache block i (158), cache block i+1 (160), cache block j (162), cache block j+k (164)), a protected segment (152), a probationary segment (154), and a victim segment (170). The components of the system may be located on the same device (e.g., a hard disk drive, RAM, storage device, memory management unit (MMU), server, mainframe, desktop personal computer (PC), laptop, personal digital assistant (PDA), telephone, mobile phone, kiosk, cable box, or any other device) or may be located on separate devices connected by a network (e.g., the Internet) with wired and/or wireless segments. Those skilled in the art will appreciate that there may be more than one of each separate component running on a device, as well as any combination of these components within a given embodiment of the invention.
In one or more embodiments of the invention, the cache queue (142) is a queue of cache blocks (e.g., cache block 1 (156) through cache block j+k (164)). Each cache block in the cache queue (142) may reference one or more storage units in the cache. The cache queue (142) may be a virtual structure (e.g., a data structure in memory), a physical structure implemented on a storage device (e.g., a static random access memory device), and/or any combination thereof.
In one or more embodiments of the invention, the value of a cache block references the location of a corresponding storage unit in the cache and/or a copy thereof. Thus, a cache block may be a logical entity referencing a physical storage unit that stores a value of a storage block. The reference may take the form of the memory location of the storage unit, a memory location storing the physical storage unit, or another direct or indirect technique for identifying the referenced storage unit. In accordance with one or more embodiments of the invention, insertion of a cache block into the cache queue coincides with insertion of a value of a storage block into a storage unit of the cache, such that the cache block references the storage unit.
In one or more embodiments of the invention, when one or more cache blocks are repositioned within the cache queue (142), their corresponding storage units do not move within the cache. Thus, the order of the cache blocks in the cache queue (142) need not reflect the order of the storage units in the cache. In one or more embodiments of the invention, when a storage block is selected for insertion into the cache, a value corresponding to a different storage block is removed from the cache. In one or more embodiments of the invention, for a dynamically sized cache, the size of the cache queue (142) grows in proportion to the cache.
Continuing with Figure 1B, in one or more embodiments of the invention, the cache queue (142) includes a victim segment (170) located at the end of the cache queue (142). The victim segment (170) is a contiguous set of cache blocks forming a subset of the cache queue (142). Cache blocks in the victim segment (170) may be candidates for removal from the cache queue (142). In one or more embodiments of the invention, cache blocks that are not in the victim segment (170) are not candidates for removal from the cache queue (142). Thus, in one or more embodiments of the invention, when there is insufficient space in the cache queue for a new cache block about to be inserted into the cache, a cache block is removed from the victim segment (170).
In one or more embodiments of the invention, the cache queue (142) includes a probationary segment (154) located at the end of the cache queue (142). The probationary segment (154) is a contiguous set of cache blocks forming a subset of the cache queue (142). In one or more embodiments of the invention, the probationary segment (154) includes the victim segment (170), such that the victim segment (170) is a subset of the probationary segment (154). The probationary segment (154) may include one or more new cache blocks and/or one or more old cache blocks. In one or more embodiments of the invention, new cache blocks are inserted at the beginning of the probationary segment (154) of the cache queue (142).
In one or more embodiments of the invention, the cache queue (142) includes a protected segment (152) located at the beginning of the cache queue (142). The protected segment (152) is a contiguous set of cache blocks forming a subset of the cache queue (142). In one or more embodiments of the invention, the protected segment (152) is adjacent to the probationary segment (154).
Continuing with Figure 1B, in one or more embodiments of the invention, a cache block completes a pass through the cache queue (142) when it enters the victim segment (170) of the cache queue (142). Thus, a cache block may traverse the entire cache queue (142), or only the probationary segment (154), in order to complete a pass. Specifically, a pass begins either at the beginning of the protected segment (e.g., cache block 1 (156)) or at the beginning of the probationary segment (e.g., cache block i+1 (160)). As cache blocks are removed from the cache queue (142) and/or recycled within the cache queue (142), any remaining cache blocks may be shifted by one or more positions in the cache queue (e.g., shifted to the right in the figure shown in Figure 1B). For example, if cache block j+k (164) is recycled to the beginning of the probationary segment (i.e., to position i+1 (160)) and cache block j+k-1 (not shown) is removed, then each of the remaining cache blocks in the probationary segment (154) moves two positions to the right in the figure of Figure 1B. As another example, if cache block j+k (164) is recycled to the beginning of the protected segment (i.e., to position 1 (156)) and cache block j+k-1 (not shown) is removed, then each of the remaining cache blocks in the cache queue (142) moves to the right in the figure of Figure 1B. When a cache block enters the victim segment (170), its pass through the cache queue is complete.
In one or more embodiments of the invention, a cache block is said to be in its Nth pass through the cache queue (142), for any positive integer N, if it has been recycled N-1 times. Thus, a cache block in its first pass through the cache queue (142) is any cache block that has never been recycled, and a cache block in its third pass through the cache queue is a cache block that has been recycled twice.
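To make the segment layout and the recycling moves concrete, here is a toy Python model of the queue under the conventions above (head of the protected segment at index 0, victim segment at the tail of the probationary segment). Sizes, attribute names, and helpers are illustrative assumptions, not the patent's data structures.

```python
class CacheQueue:
    """Toy model: index 0 of `protected` is the head of the queue, and
    the victim segment is the tail of `probationary`."""

    def __init__(self, victim_size=4):
        self.protected = []     # blocks recycled here after a cache hit
        self.probationary = []  # new blocks and second-chance blocks
        self.victim_size = victim_size

    def insert_new(self, block):
        block.n_b = 0           # hits received during the current pass
        block.passes = 1        # first pass begins
        self.probationary.insert(0, block)  # top of the probationary segment

    def recycle(self, block):
        # Hit blocks move to the top of the protected segment; a selected
        # zero-hit block is recycled for a second probationary pass.
        for segment in (self.protected, self.probationary):
            if block in segment:
                segment.remove(block)
        target = self.protected if block.n_b > 0 else self.probationary
        block.n_b = 0
        block.passes += 1
        target.insert(0, block)

    def victim_candidates(self):
        # The last `victim_size` blocks of the probationary segment,
        # i.e., the end of the cache queue.
        return self.probationary[-self.victim_size:]
```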
Container statistics
Referring back to Figure 1A, in one or more embodiments of the invention, the cache manager (140) stores a set of container statistics objects (not shown). Each container statistics object stores data for a storage container on the storage device. In one or more embodiments of the invention, a container statistics object is created for each storage container corresponding to one or more cache blocks in the cache queue (142). The container statistics object may be created when a first cache block for the storage container is inserted into the cache queue (142). In one or more embodiments of the invention, a container statistics object is deleted when its corresponding storage container has no cache blocks remaining in the cache queue (142). Thus, the container statistics object may be deleted when the last cache block of the storage container is removed from the cache queue (142).
In one or more embodiments of the invention, the container statistics object includes a number of old cache blocks and a number of new cache blocks in the cache queue corresponding to the storage container. The number of old cache blocks of a storage container is a count of the storage blocks in that container stored as old cache blocks in the cache queue (142). The number of new cache blocks of a storage container is a count of the storage blocks in that container stored as new cache blocks in the cache queue (142). A storage block "stored" as a cache block refers to a storage block having a corresponding cache block in the cache queue (142). The cache block references a storage unit in the cache (100) storing a value (whether dirty or clean) of the storage block.
Continuing with Figure 1A, in one or more embodiments of the invention, after cache startup the cache manager (140) operates the cache queue like a segmented least recently used (SLRU) queue (i.e., without probabilistic insertion and/or deletion). Accordingly, in one or more embodiments of the invention, the cache manager (140) is configured to activate probabilistic insertion and/or probabilistic deletion after a predefined warm-up period (defined as a number of warm-up transactions and/or a time period). In one or more embodiments of the invention, the cache manager (140) is configured to delay probabilistic insertion and/or probabilistic deletion until the cache has collected container statistics object data over a specified number of transactions (T). During and/or after this period, the cache manager (140) may collect one or more of the following container statistics for each container statistics object (see the sketch after this list):
A. The number of first-pass cache blocks ("num_first_pass_blocks"). In one or more embodiments of the invention, first-pass cache blocks are those that have completed a first pass through the probationary segment (i.e., those inserted at the beginning (i.e., top) of the probationary segment and subsequently recycled to the beginning (i.e., top) of the probationary segment or the protected segment).
B. The number of first-pass hits ("num_first_pass_hits"). In one or more embodiments of the invention, this is a count of the total number of cache hits received by cache blocks that have completed a first pass through the probationary segment.
C. The number of second-chance blocks ("num_second_chance_blocks"). In one or more embodiments of the invention, this is the number of cache blocks that completed a first pass through the probationary segment without receiving a cache hit and were recycled to the beginning of the probationary segment.
D. The number of second-pass hit blocks ("num_second_pass_hit_blocks"). In one or more embodiments of the invention, this is the number of cache blocks that were hit during a second pass through the probationary segment.
E. The average number of cache accesses before receiving a first hit ("avg_cache_accesses_before_first_hit"). In one or more embodiments of the invention, this is the average number of cache accesses between the insertion of a cache block into the cache queue and its receiving a cache hit during a second pass through the probationary segment.
F. An "active" status flag tracking whether probabilistic deletion is activated for the corresponding storage container. The active status flag is initially set to FALSE.
G. The number of transactions since the last access ("transactions_since_last_access"). In one or more embodiments of the invention, this tracks the number of transactions performed (i.e., serviced by the cache) since the last access (i.e., cache hit) to a cache block. If this value exceeds a predefined threshold number, cache blocks corresponding to the storage container are subsequently deleted with probability 1 when considered for removal from the cache. The predefined threshold number may be received from any authorized user or entity and/or from a graphical user interface of the cache manager (140).
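Collected into one structure, the per-container statistics named above might look like the following sketch (field defaults and types are assumptions; the two cost fields anticipate the update rules discussed below):

```python
from dataclasses import dataclass

@dataclass
class ContainerStats:
    # Field names follow the list above; defaults are assumptions.
    num_old_blocks: int = 0
    num_new_blocks: int = 0
    num_first_pass_blocks: int = 0
    num_first_pass_hits: int = 0
    num_second_chance_blocks: int = 0
    num_second_pass_hit_blocks: int = 0
    avg_cache_accesses_before_first_hit: float = 0.0
    active: bool = False                    # probabilistic deletion enabled?
    transactions_since_last_access: int = 0
    workload_change_time: float = 0.0       # see "Workload changes" below
    latency: float = 0.0                    # L_j, container access latency
    estimated_new_block_cost: float = 0.0   # C0_j, updated below
    estimated_0hit_miss_cost: float = 0.0   # C_j, updated below
```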
After the specified number of transactions (T) has completed, the cache manager (140) may continue collecting these container statistics. In one or more embodiments of the invention, a container statistics object is updated based on the container statistics every T transactions. Thus, the cache manager (140) may implement a counter to periodically update the container statistics objects, such that every T transactions represents a data collection period. In one or more embodiments of the invention, the container statistics objects are updated after every transaction. Thus, the container statistics of each container statistics object may be calculated using a moving window of transactions. The cache manager (140) may receive the warm-up time period and/or specified number of transactions, and/or delay probabilistic insertion and/or deletion, in combination with any of the elements and/or steps of embodiments of the invention.
Continuing with Figure 1A, in accordance with one or more embodiments of the invention, the cache manager (140) includes a graphical user interface (GUI) and/or an application programming interface (API). The GUI and/or API may include functionality to receive, from a user and/or a software application, the size of the moving window, the specified number of transactions, the warm-up time period, and/or any attribute or property used by the cache manager (140). The GUI may be displayed to a user of a software application within the software application (e.g., a web application, desktop application, mobile application, etc.) in order to receive input and provide feedback. The GUI may be used to provide customizations, report performance statistics, and/or modify system properties. A user of the GUI may be an end user of a computer system, a database administrator, a system administrator, a hardware designer, and/or any entity or person meeting one or more pre-issued security credentials. Alternatively or additionally, the cache manager (140) may be preconfigured or designed with a pre-specified size of the moving window, specified number of transactions, warm-up time period, and/or any attribute or property used by the cache manager (140).
In one or more embodiments of the invention, the cache manager (140) populates and/or modifies the container statistics objects using the data collected over the specified number of transactions. This may be done after every T transactions based on the data collected for those T transactions, after every transaction (based on a moving window of past transactions), and/or based on any sampling of past transaction data. In one or more embodiments of the invention, one or more of the following operations may be performed based on the collected data (a code sketch follows the list):
A. For all container statistics objects with num_first_pass_hits > 1, calculate the estimated new block cost ("estimated_new_block_cost") of the storage container as container_latency * num_first_pass_hits / num_first_pass_blocks (following the formula C0_j = E[N_j] * L_j, where E[N_j] is the expected number of cache hits for a new cache block from storage container j during a first pass through the cache queue, as discussed below). For such container statistics objects, the active status flag may be set to TRUE, and num_first_pass_blocks and num_first_pass_hits may be reset to 0.
B. Container statistics objects without num_first_pass_hits > 1 continue to use the old value of estimated_new_block_cost, and continue to increment num_first_pass_blocks and num_first_pass_hits until the next container statistics update.
C. In one or more embodiments of the invention, for all container statistics objects with num_second_pass_hit_blocks > 1, calculate the estimated zero-hit cache miss cost ("estimated_0hit_miss_cost") as latency * (num_second_pass_hit_blocks / num_second_chance_blocks) / avg_cache_accesses_before_first_hit (following the formula C_j = L_j * R_j = L_j * P(A|B_j) / T_j, as discussed below). In one or more embodiments of the invention, for these container statistics objects, num_second_chance_blocks, num_second_pass_hit_blocks, and avg_cache_accesses_before_first_hit may be reset to zero.
D. In one or more embodiments of the invention, container statistics objects without num_second_pass_hit_blocks > 1 may continue to use the existing estimated_0hit_miss_cost, and/or continue to increment num_second_chance_blocks, num_second_pass_hit_blocks, and/or avg_cache_accesses_before_first_hit.
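The cost updates in items A through D translate directly into code. The following sketch applies the two formulas and the resets described above, using the ContainerStats sketch from earlier; the guard conditions follow the text, while everything else is an assumption of the example.

```python
def update_costs(stats: ContainerStats) -> None:
    # Periodic update per items A-D above.
    if stats.num_first_pass_hits > 1:
        # Item A: C0_j = E[N_j] * L_j, with E[N_j] estimated as
        # num_first_pass_hits / num_first_pass_blocks.
        stats.estimated_new_block_cost = (
            stats.latency * stats.num_first_pass_hits
            / stats.num_first_pass_blocks)
        stats.active = True
        stats.num_first_pass_blocks = 0
        stats.num_first_pass_hits = 0
    # Item B: otherwise the old estimated_new_block_cost stays in place.
    if (stats.num_second_pass_hit_blocks > 1
            and stats.avg_cache_accesses_before_first_hit > 0):
        # Item C: C_j = L_j * P(A|B_j) / T_j, with T_j approximated by
        # the average number of cache accesses before the first hit.
        p_hit = (stats.num_second_pass_hit_blocks
                 / stats.num_second_chance_blocks)
        stats.estimated_0hit_miss_cost = (
            stats.latency * p_hit
            / stats.avg_cache_accesses_before_first_hit)
        stats.num_second_chance_blocks = 0
        stats.num_second_pass_hit_blocks = 0
        stats.avg_cache_accesses_before_first_hit = 0.0
    # Item D: otherwise the existing estimated_0hit_miss_cost stays in place.
```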
In one or more embodiments of the invention, after the warm-up period and/or after a specified number of transactions has been performed, for container statistics objects whose active status flag is set to FALSE, the cache manager (140) inserts new cache blocks corresponding to the storage container at the beginning (i.e., top) of the probationary segment with probability 1. In addition, the cache manager (140) deletes cache blocks corresponding to the storage container with probability 0.5 if they received zero cache hits during a first pass through the probationary segment (at the time they are considered for removal from the cache). In one or more embodiments of the invention, this reinforces the recycling of such cache blocks in order to improve the accuracy of the estimated container statistics.
Analytical cache deletion
Continuing with Figure 1A, in accordance with one or more embodiments of the invention, the cache manager (140) probabilistically deletes one or more new cache blocks from the victim segment of the cache queue (142). Thus, one or more new cache blocks in the victim segment may be assigned a deletion probability. The deletion probability is the probability that the corresponding cache block, if examined, will be removed from the cache. For example, the deletion probability may be a number between zero and one (inclusive). When a new cache block is considered for removal from the cache queue, the cache manager (140) may randomly select a probability number. In one or more embodiments of the invention, the probability number may be selected from a uniform distribution and/or selected over a range of potential values matching the range of potential values of the deletion probability. The probability number is then compared with the deletion probability to determine whether the cache block should be removed. Continuing the example, if the deletion probability is greater than or equal to the probability number, the cache block is removed from the cache queue (and the corresponding storage unit is freed).
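The comparison at the heart of the probabilistic removal can be stated in a few lines; following the example above, the block is removed when the deletion probability is greater than or equal to the drawn probability number.

```python
import random

def should_evict(deletion_probability: float) -> bool:
    # Remove the block when the deletion probability is greater than or
    # equal to a probability number drawn from a uniform distribution.
    return deletion_probability >= random.uniform(0.0, 1.0)
```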
In one or more embodiments of the invention, the cache manager (140) identifies cache hits and/or cache misses in response to requested cache operations. In one or more embodiments of the invention, the cache manager (140) tracks the number of cache hits received by each cache block during each pass through the cache queue (142) (denoted n_b for each cache block b). A pass through the cache queue (142) includes a pass through whichever segment(s) (e.g., the probationary segment and/or the protected segment) after which the cache block is removed or recycled. In one or more embodiments of the invention, if n_b equals zero for a cache block being considered for removal, the cache block is deleted from the cache queue. If n_b > 0, the cache block is recycled to the beginning (i.e., top) of the protected segment of the cache queue (142). n_b may be initialized to any value when the cache block is inserted into the cache queue (142). In one or more embodiments of the invention, n_b is reset to zero when a cache block is recycled.
In one or more embodiments of the invention, whenever a new cache block needs to be inserted into the cache queue (142) (e.g., when a cache miss occurs), the cache manager (140) sequentially considers the cache blocks in the victim segment of the cache queue (142), beginning from the end of the cache queue (142).
In one or more embodiments of the invention, the cache manager (140) calculates the estimated access rate of an old cache block in the cache as r_b = n_b / t_b, where t_b is the time elapsed since old cache block b was inserted into the cache queue (142).
Continuing with Figure 1A, in one or more embodiments of the invention, the cache manager (140) recycles a predefined fraction of the new cache blocks with n_b equal to zero to the beginning (i.e., top) of the probationary segment of the cache queue (142). The cache manager (140) may then observe, for each storage container, the fraction of those cache blocks that are hit during a second pass through the probationary segment. In one embodiment, the predefined fraction may be set and/or modified by any authorized user and/or entity connected to the cache manager (140). In one or more embodiments of the invention, the predefined fraction may be dynamically adjusted during operation of the cache in order to improve cache performance.
In one or more embodiments of the invention, the cache manager (140) calculates the conditional probability that a new cache block with n_b = 0 after a first pass through the probationary segment will receive a cache hit during a second pass through the probationary segment as P(A|B_j) = P(A ∩ B_j) / P(B_j). In this formula, B_j is the event that a new block belonging to storage container j receives no cache hit during its first pass through the probationary segment, and A is the event that the new block receives a cache hit during its second pass through the probationary segment. For each storage container j, this conditional probability may be estimated as the fraction of cache blocks that satisfy event B_j and receive a cache hit after being recycled to the beginning (i.e., top) of the probationary segment of the cache queue.
In one or more embodiments of the invention, the cache manager (140) calculates the estimated access rate of new cache blocks from storage container j as R_j = P(A|B_j) / T_j, where T_j is the average time spent in the cache by a new cache block from storage container j during a second pass through the probationary segment before receiving a cache hit. In one or more embodiments of the invention, the estimated access rate may be calculated using any formula in which R_j is a decreasing function of T_j (including any linear and/or exponential variants of the formula shown).
Continuing with Figure 1A, in one or more embodiments of the invention, the cache manager (140) calculates the estimated cache miss cost of a storage container having one or more new cache blocks in the victim segment as C_j = L_j * R_j, where L_j is the latency of storage container j. The cache manager (140) may calculate the deletion probability P_j of such cache blocks so that, for any two storage containers j and k having new cache blocks in the victim segment, the relative deletion probabilities are inversely proportional to the relative cache miss costs: P_j / P_k = C_k / C_j. In one or more embodiments of the invention, any formula in which the deletion probability of a storage container is inversely related to its estimated cache miss cost, or any variant of the given formula (including any linear and/or exponential variants), may be used. In one or more embodiments of the invention, the cache manager (140) may isolate the deletion probability (P_j) of any storage container having cache blocks in the cache queue from this formula and/or a variant of it. First, the cache manager (140) may identify the lowest estimated cache miss cost, C_jmin, among the storage containers having one or more new cache blocks in the victim segment. The cache manager (140) may then use this minimum as a scaling factor; given P_j / P_k = C_k / C_j, this yields P_j = C_jmin / C_j, where j_min is the index of the least-cost storage container (i.e., the storage container with the lowest estimated cache miss cost) among those storage containers. In one or more embodiments of the invention, the cache manager (140) uses this formula (or a variant of it) to calculate the deletion probabilities of the new cache blocks in the cache queue (142).
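Under the reconstruction above, the deletion probabilities for the containers with new blocks in the victim segment reduce to a one-line scaling, sketched here:

```python
def deletion_probabilities(costs):
    # costs: {container_id: C_j} for containers with new cache blocks in
    # the victim segment (assumed non-empty).  Returns P_j = C_jmin / C_j,
    # so P_j / P_k = C_k / C_j and the least-cost container gets P = 1.
    c_min = min(costs.values())
    return {j: c_min / c_j for j, c_j in costs.items()}
```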
Continuing with Figure 1A, in one or more embodiments of the invention, the cache manager (140) sequentially considers all new cache blocks in the victim segment, starting from the end of the cache queue. If a cache block b under consideration has made two passes through the probationary segment and has n_b = 0, it is selected as the victim to be deleted. In one or more embodiments of the invention, if a new cache block b from storage container j has made a first pass through the probationary segment, has n_b = 0, and has an estimated cache miss cost C_j smaller than the estimated cache miss cost of the least-cost old cache block in the victim segment, it is selected as the victim with probability P_j.
In one or more embodiments of the invention, if no new cache block is selected as the victim to be removed after all new cache blocks in the victim segment have been considered sequentially, the cache manager (140) selects, from the end of the queue, the first new cache block b having n_b = 0 and an estimated cache miss cost smaller than the estimated cache miss cost of the least-cost old cache block in the victim segment. If the victim segment does not contain any new cache blocks, the old cache block with the lowest estimated cache miss cost is selected as the victim (i.e., removed).
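Combining the rules of the last two paragraphs, a victim-selection scan might look like the sketch below. The block attributes (is_new, n_b, passes, container) and the two mappings are assumptions of the example; should_evict() is the helper defined earlier.

```python
def select_victim(victim_segment, deletion_prob, miss_cost):
    # Scan the victim segment from the end of the queue.
    old_blocks = [b for b in victim_segment if not b.is_new]
    min_old_cost = min((miss_cost[b.container] for b in old_blocks),
                       default=float("inf"))
    fallback = None
    for b in reversed(victim_segment):            # end of the queue first
        if not b.is_new or b.n_b != 0:
            continue
        if b.passes >= 2:
            return b              # two probationary passes with zero hits
        if miss_cost[b.container] < min_old_cost:
            if fallback is None:
                fallback = b      # first qualifying new block from the end
            if should_evict(deletion_prob[b.container]):
                return b          # selected with probability P_j
    if fallback is not None:
        return fallback
    # No new block selected: evict the least-cost old block, if any.
    return min(old_blocks, key=lambda b: miss_cost[b.container], default=None)
```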
In one or more embodiments of the invention, the cache manager (140) "forgets" cache hits received by cached blocks beyond a predefined number of past cache operations. In one or more embodiments of the invention, for one or more container statistics objects, forgotten cache hits are removed from consideration by the cache manager (140). For example, in response to a cache hit being forgotten, the container statistics object may adjust the time elapsed since cache block b was inserted into the cache queue (t_b) to begin at the time of the earliest remembered cache hit. In one or more embodiments of the invention, the predefined number may be an integer multiple of the number of transactions used in calculating the container statistics (T, as discussed above).
In one or more embodiments of the invention, when considering a cache block b for removal, the cache manager (140) removes cache block b if the storage container of cache block b (i.e., the storage container of the storage block corresponding to cache block b) has not been accessed within a predefined number of transactions. In one or more embodiments of the invention, the predefined number may be an integer multiple of the number of transactions used in calculating the container statistics (T, as discussed above).
Workload changes
Continuing with FIG. 1A, in one or more embodiments of the invention, the cache manager (140) is configured to detect that a workload change has occurred for a storage container. In one or more embodiments of the invention, the cache manager (140) is configured to detect that a workload change has occurred for storage container j when any two frequently accessed storage blocks in storage container j each receive a predefined number of accesses (N) within a predefined time period (for example, 20 seconds). In one or more embodiments of the invention, a frequently accessed storage block is a storage block whose access rate (i.e., estimated access rate) is not below a predefined access rate threshold. In one or more embodiments of the invention, the cache manager (140) includes functionality to calculate the estimated access rate according to one or more of the processes disclosed herein (for example, the process of FIG. 2, as discussed below).
In one or more embodiments of the invention, the cache manager (140) detects that a workload change has occurred for storage container j if the calculated access rate of container j increases by a predefined change threshold (for example, a percentage increase, a multiplicative increase, a number of accesses per unit time, etc.) over at least a predefined number of accesses (N). In one or more embodiments of the invention, the cache manager (140) is configured to receive the predefined number of accesses (N), the predefined time period, the predefined access rate threshold, and/or the predefined change threshold from a user through a GUI of the cache manager (140). Examples of GUI users may include, but are not limited to: an end user of the computer system, a database administrator, a system administrator, a hardware designer, and/or any entity or person satisfying one or more pre-issued security credentials. Alternatively or additionally, the cache manager (140) may be preconfigured or designed with the predefined number of accesses (N), the predefined time period, the predefined access rate threshold, the predefined change threshold, and/or any attribute used by the cache manager (140).
In one or more embodiments of the invention, the cache manager (140) includes functionality to set a workload change time attribute ("workload_change_time") of the container statistics object corresponding to a storage container. The workload_change_time attribute may be initialized to zero. In one or more embodiments of the invention, the cache manager (140) is configured to update the workload_change_time attribute to store the time when a workload change is detected.
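A minimal sketch of this bookkeeping might look as follows in Python; the ContainerStats class, the recent_access_count() helper on blocks, and the threshold names are illustrative assumptions rather than structures named in the disclosure.

import time

class ContainerStats:
    def __init__(self, n_accesses, period, rate_threshold):
        self.workload_change_time = 0.0       # initialized to zero
        self.n_accesses = n_accesses          # predefined number of accesses N
        self.period = period                  # predefined time period (e.g., 20 s)
        self.rate_threshold = rate_threshold  # "frequently accessed" cutoff

    def record_workload_change(self, blocks):
        # Frequently accessed blocks: estimated access rate at or above threshold.
        hot = [b for b in blocks if b.est_rate >= self.rate_threshold]
        # A workload change is detected when any two of them each received
        # N accesses within the predefined period.
        busy = [b for b in hot
                if b.recent_access_count(self.period) >= self.n_accesses]
        if len(busy) >= 2:
            self.workload_change_time = time.time()  # store detection time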
In one or more embodiments of the invention, the cache manager (140) is configured to select "stale" cache blocks as potential victims for removal from the buffer queue (142). In one or more embodiments of the invention, a stale cache block is any old cache block that has not been accessed within a predefined number of cachings and whose most recent access time precedes the workload_change_time of its corresponding storage container.
Continuing with FIG. 1A, in one or more embodiments of the invention, the management module (144) provides interoperability, format conversion, and/or cross-compatibility among the various components of the system (199), as shown in exemplary form in FIG. 1A. For example, the management module (144) may transfer data between the cache manager (140) and the storage device (110), and/or vice versa. Furthermore, the management module (144) may serve as a seamless integration point between any combination of components within and outside the system (199).
In one or more embodiments of the invention, the various components of the system (199) are optional and/or may reside within other components, or may be located on one or more physical devices. In one or more embodiments of the invention, the cache manager (140) and the management module (144) reside within a software application (for example, an operating system kernel) and/or a memory management unit. Various other arrangements and combinations may also exist.
FIG. 2 shows a flowchart in accordance with one or more embodiments of the invention. The steps of the flowchart shown in FIG. 2 may be used to calculate the estimated cache miss cost of a cache block. Those skilled in the art, after reading this detailed description, will appreciate that the order and number of the steps shown in FIG. 2 may differ among embodiments of the invention. Furthermore, one or more steps in FIG. 2 may be optional and/or may be performed in any combination of different orders.
In step 200, a set of old cache blocks in a victim segment (for example, victim segment 170 of FIG. 1B, as discussed above) of a buffer queue (for example, buffer queue 150 of FIG. 1B, as discussed above) is identified. The set of old cache blocks may include all cache blocks in the victim segment that have been accessed at least once since being inserted into the buffer queue.
In step 205, an estimated access rate is calculated for each of the identified old cache blocks in the victim segment. In one or more embodiments of the invention, the estimated access rate of a cache block b in the set is calculated as r_b = n_b / t_b, where n_b is the number of hits received during the current pass through the buffer queue and t_b is the time elapsed since old cache block b was inserted into the buffer queue.
In step 210, an estimated cache miss cost is calculated for each of the identified old cache blocks in the victim segment. In one or more embodiments of the invention, the estimated cache miss cost is calculated as C_b,j = L_j * r_b, where L_j is the latency (time) of the storage container j of the old cache block and r_b is the estimated access rate of cache block b. In one or more embodiments of the invention, r_b may be the estimated access rate calculated in step 205, or any estimated access rate of cache block b calculated by any method of detecting and/or estimating access rates disclosed herein.
In one or more embodiments of the invention, steps 200, 205, and 210 may be performed for a single old cache block (rather than all old cache blocks in the victim segment), or may be performed iteratively for each old cache block in the victim segment (sequentially, from the end of the buffer queue). Any of these steps may also be performed in response to a cache miss, asynchronously in anticipation of a cache miss, periodically together with one or more data-gathering processes, and/or together with any caching operation.
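As a rough illustration of steps 200 through 210 for a single old cache block, a sketch in Python might look as follows; the argument names reflect assumptions about what the cache manager tracks (per-pass hit count, insertion time, container latency).

def estimated_miss_cost(n_b, t_insert, now, latency_j):
    """Estimated cache miss cost of an old cache block b from container j.

    n_b: hits received during the current pass through the buffer queue
    t_insert: time at which b was inserted into the buffer queue
    latency_j: latency L_j of b's storage container
    """
    r_b = n_b / (now - t_insert)   # step 205: r_b = n_b / t_b
    return latency_j * r_b         # step 210: C_b,j = L_j * r_b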
FIG. 3 shows a flowchart in accordance with one or more embodiments of the invention. The steps of the flowchart shown in FIG. 3 may be used to calculate the removal probability of a cache block. Those skilled in the art, after reading this detailed description, will appreciate that the order and number of the steps shown in FIG. 3 may differ among embodiments of the invention. Furthermore, one or more steps in FIG. 3 may be optional and/or may be performed in any combination of different orders.
In step 300, an approximation is made of the probability (P(A|B_j)) that a new cache block from storage container j will receive at least one cache hit during its second pass through the probationary segment, given that it received zero cache hits during its first pass through the probationary segment. In one or more embodiments of the invention, this probability is calculated for each storage container j having at least one new cache block in the victim segment of the buffer queue, and is equal for all new cache blocks of storage container j in the victim segment. In one or more embodiments of the invention, for storage container j, this probability is estimated as the fraction of recycled new cache blocks from storage container j (having zero cache hits) that subsequently receive at least one cache hit during their second pass through the probationary segment.
In step 305, the estimated access rate of storage container j is calculated. In one or more embodiments of the invention, the estimated access rate is calculated as R_j = P(A|B_j) / T_j, where T_j is the average time spent in the cache by a new cache block from storage container j before receiving a cache hit during its second pass through the probationary segment. In one or more embodiments of the invention, P(A|B_j) may be the output of step 300, or any cache hit probability calculated for storage container j by any method of calculating cache hit probabilities. The estimated access rate may be calculated using any formula in which the estimated access rate is inversely related to the time spent in the cache by a cache block, including any linear and/or exponential variant of the formula shown.
In step 310, the estimated cache miss cost of storage container j is calculated as C_j = L_j * R_j, where L_j is the latency of the storage container. In one or more embodiments of the invention, R_j may be the estimated access rate calculated in step 305, or any estimated access rate of storage container j calculated by any method of detecting and/or estimating access rates. The estimated cache miss cost may be calculated using any formula in which the estimated access rate is related to the latency of the storage container and/or storage device, including any linear and/or exponential variant of the formula shown.
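A corresponding sketch of steps 300 through 310 for one storage container might look as follows in Python; the arguments are assumed to be tracked elsewhere, as described above.

def container_miss_cost(p_hit_second_pass, t_avg, latency_j):
    """Estimated cache miss cost of storage container j (steps 300-310).

    p_hit_second_pass: P(A|B_j), the fraction of recycled new blocks from j
        (zero hits on the first pass) that got a hit on the second pass
    t_avg: T_j, average time such a block spent cached before that hit
    latency_j: latency L_j of storage container j
    """
    r_j = p_hit_second_pass / t_avg   # step 305: R_j = P(A|B_j) / T_j
    return latency_j * r_j            # step 310: C_j = L_j * R_j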
In step 315, the minimum estimated cache miss cost among all storage containers having at least one new cache block in the victim segment, C_jmin, is identified. This may be done by tracking each addition to the victim segment and maintaining a reference to the minimum estimated cache miss cost, by iterating over the estimated cache miss costs of all cache blocks in the victim segment at one or more predefined times (for example, when considering a cache block for removal, before/after considering a cache block for removal, etc.), and/or by tracking the minimum estimated cache miss cost whenever the victim segment is modified.
In step 320, the scale factor is calculated as
P_jmin = ( Σ_{j∈V} C_jmin / C_j )^(-1)
The scale factor may be any constant, estimated or calculated, using a formula that relates the estimated cache miss cost and/or removal probability of storage container j to the estimated cache miss cost and/or removal probability of the storage container j_min having the minimum estimated cache miss cost.
In step 325, the removal probability of each storage container j is calculated. In one or more embodiments of the invention, the removal probability of any storage container j having a new cache block in the victim segment is calculated as P_j = P_jmin * (C_jmin / C_j). In one or more embodiments of the invention, the removal probability may be calculated using any variant of the given formula (including any linear and/or exponential variant of the formula shown) in which the removal probability of storage container j is related to the removal probability of the least-cost storage container (i.e., the storage container with the minimum estimated cache miss cost that has a new cache block in the victim segment).
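Putting steps 315 through 325 together, the following Python sketch derives removal probabilities from per-container estimated cache miss costs; the costs mapping and function name are illustrative assumptions, not structures named by the patent. Applied to the values of the worked example below (C_j = 6, C_m = 10), it yields P_jmin = 0.625 and P_m = 0.375.

def removal_probabilities(costs):
    """costs: dict mapping container id -> estimated cache miss cost C_j.

    Returns a dict mapping container id -> removal probability P_j,
    computed as P_j = P_jmin * (C_jmin / C_j), where the scale factor
    P_jmin is chosen so that the probabilities sum to one.
    """
    c_min = min(costs.values())                                # step 315: C_jmin
    scale = 1.0 / sum(c_min / c for c in costs.values())       # step 320: P_jmin
    return {j: scale * (c_min / c) for j, c in costs.items()}  # step 325: P_j

print(removal_probabilities({"j": 6.0, "m": 10.0}))  # {'j': 0.625, 'm': 0.375}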
FIGS. 4A and 4B show flowcharts in accordance with one or more embodiments of the invention. The steps of the flowcharts shown in FIGS. 4A and 4B may be used to select a cache block for removal from the buffer queue. Those skilled in the art, after reading this detailed description, will appreciate that the order and number of the steps shown in FIGS. 4A and 4B may differ among embodiments of the invention. Furthermore, one or more steps in FIGS. 4A and 4B may be optional and/or may be performed in any combination of different orders. In addition, one or more steps may be performed actively or passively. For example, a determination step may be based on testing a condition, may be based on receiving an interrupt indicating that the condition exists, may be omitted, and/or may be performed in any other manner.
In step 400, a cache miss is detected. This may result from a read request or write request for a non-cached storage block on the storage device. In one or more embodiments of the invention, this may cause an access to the storage device. According to embodiments of the invention, a storage block may then be selected for insertion into the cache (i.e., a copy of the storage block's value may be placed into a memory cell corresponding to the storage block). In one or more embodiments of the invention, the cache miss may be detected by a cache manager operatively connected to the cache (for example, the cache manager (140) of FIG. 1, as discussed above).
In step 401, it is determined whether at least one new cache block exists in the victim segment of the buffer queue. If at least one new cache block exists in the victim segment, the process proceeds to step 402. If not, the process proceeds to step 460 of FIG. 4B. Determining whether at least one new cache block exists in the victim segment may require iterating over the victim segment and/or checking one or more data structures (for example, in the cache manager) that store counts and/or flags corresponding to new cache blocks in the buffer queue.
In step 402, a new cache block b is selected for consideration. In one or more embodiments of the invention, cache block b is the first new cache block counting from the end of the victim segment (considered sequentially).
In step 404, it is determined whether cache block b has made at least two passes through the probationary segment of the buffer queue, and whether the number of cache hits for cache block b during the current pass through the buffer queue is zero. If both conditions are satisfied (i.e., true), cache block b is selected for removal and the process proceeds to step 462 of FIG. 4B. If either condition is not satisfied, the process proceeds to step 406. In one or more embodiments of the invention, b must have completed a number of passes equal to some predefined number obtained from a programmer of the cache manager (for example, the cache manager (140) of FIG. 1, as discussed above) and/or another entity.
In step 406, it is determined whether cache block b is on its first pass through the probationary segment of the buffer queue, and whether it has an estimated cache miss cost (C_j) lower than the minimum estimated cache miss cost among the old cache blocks in the victim segment (C_OldMin). If both conditions are satisfied, the process proceeds to step 408. If not, the process proceeds to step 414. In other embodiments of the invention, step 406 may require determining whether cache block b has completed any predefined number of passes through the buffer queue (not necessarily the first pass).
In one or more embodiments of the invention, if a stale cache block is encountered while searching for the minimum estimated cache miss cost among the old cache blocks in the victim segment, it is selected as a potential victim. Subsequently, according to one or more of the processes for removing new cache blocks described herein, all new cache blocks (if any) between cache block b and the end of the victim segment may be considered sequentially for removal. In one or more embodiments of the invention, if none of the new cache blocks is selected for removal, the stale cache block is removed from the buffer queue.
In one or more embodiments of the invention, since cache blocks may be moved after a cache hit occurs, potential victims may be selected for removal in anticipation of a future cache miss. In one or more embodiments of the invention, potential victims are identified asynchronously by a designated thread. In one or more embodiments of the invention, after a cache block is removed, all cache blocks after the removed cache block (i.e., below it) are recycled to the beginning (i.e., top) of the protected segment if they have n_b > 0, or to the beginning of the probationary segment if they have n_b = 0.
In step 408, the removal probability (P_j) of the storage container of cache block b is calculated. In one or more embodiments of the invention, the storage container of cache block b corresponds to the storage container on the storage device containing the cached storage block referenced by the memory cell in the cache. The memory cell is referenced by the cache block in the buffer queue, and may contain a clean value (i.e., a value matching the storage block) and/or a dirty value (i.e., a value differing from the storage block). In one or more embodiments of the invention, the removal probability of cache block b is calculated as a decreasing function of the estimated cache miss cost of the storage container of cache block b. In one or more embodiments of the invention, the process described by the flowchart of FIG. 3 is used to calculate the removal probability of cache block b. In one or more embodiments of the invention, the removal probability is calculated as
P_j = P_jmin * (C_jmin / C_j)
where P_jmin is the scale factor calculated as P_jmin = ( Σ_{j∈V} C_jmin / C_j )^(-1), j ∈ V, and C_jmin is the minimum estimated cache miss cost among the new cache blocks in the victim segment.
In step 410, a probability number is randomly selected from a uniform distribution. In one or more embodiments of the invention, the range of the uniform distribution is the same as the range of the removal probability calculated in step 408. The probability number may be obtained from any of a number of sufficiently random processes that produce a random distribution (within a given tolerance). Any random number generation method may be used. For the purposes of this invention, random selection may refer to any method of producing a range of possible outcomes suitable for use in probability analysis. Random number generation and random numbers, as used herein, may include pseudo-random number generation and pseudo-random numbers, respectively, without departing from the scope of the invention.
In step 412, it is determined whether the removal probability (P_j) is greater than or equal to the probability number. If the removal probability is greater than or equal to the probability number, the process proceeds to step 462 of FIG. 4B. If not, the process proceeds to step 414. For example, given a possible range of 0 to 100 for both numbers, if the removal probability is 45 and the probability number is 40, the process proceeds to step 462 of FIG. 4B. In one or more embodiments of the invention, steps 408, 410, and 412 may use any method of comparing the removal probability against a random number selected from a uniform distribution. Thus, in one or more embodiments of the invention, a cache block b with a higher removal probability is more likely to be removed.
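The comparison of steps 410 through 412 reduces to a single uniform draw; a minimal sketch, assuming removal probabilities lie in [0, 1]:

import random

def should_remove(p_j):
    # Block b is removed when its removal probability P_j meets or
    # exceeds a number drawn uniformly from the same range.
    return p_j >= random.uniform(0.0, 1.0)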
In step 414, it is determined whether any unconsidered new cache blocks remain in the victim segment. In one or more embodiments of the invention, an unconsidered new cache block must be (1) positioned further from the end of the buffer queue than cache block b, and (2) not yet considered for removal during the new block sequencing of FIG. 4A (401). If such a cache block exists, the process returns to step 402 with that cache block selected. If not, the process proceeds to step 456 of FIG. 4B.
Referring now to FIG. 4B, the steps of the flowchart represent a continuation of the flowchart depicted in FIG. 4A. The connection points between the figures (i.e., A, B, and C) depict the continuation of the process.
In step 456, in one or more embodiments of the invention, the first new cache block matching a set of selection criteria is selected for removal from the buffer queue. In one or more embodiments of the invention, the selection criteria are that the new cache block must have n_b = 0 and an estimated cache miss cost C_j < C_OldMin, where n_b is the number of cache hits received during the current pass through the buffer queue, C_j is the estimated cache miss cost of the new cache block's storage container, and C_OldMin is the minimum estimated cache miss cost among the old cache blocks in the victim segment. In one or more embodiments of the invention, new cache blocks are considered sequentially from the end of the victim segment. If no cache block in the victim segment satisfies the stated criteria, it is possible that no new cache block is selected by this step. In one or more embodiments of the invention, the selected first new cache block is identified by the new block sequencing of FIG. 4A (401). Thus, in one or more embodiments of the invention, it may not be necessary to iterate over the new cache blocks in the victim segment again if such a process has already performed the iteration. In that case, the former process may maintain a reference to the first new cache block in the victim segment that satisfies the criteria of this step.
In step 458, it is determined whether a cache block was selected by step 456. If so, the process proceeds to step 462. If not, the process proceeds to step 460.
In step 460, in one or more embodiments of the invention, the old cache block having the minimum estimated cache miss cost among the old cache blocks in the victim segment (C_OldMin) is selected for removal. The estimated cache miss cost (C_OldMin) may be calculated by any means of estimating the miss cost of a cache block. In one or more embodiments of the invention, the steps of the process described by FIG. 2 are used to calculate the estimated cache miss costs of the old cache blocks in the victim segment (including C_OldMin).
In step 462, in one or more embodiments of the invention, the selected cache block is removed from the victim segment of the buffer queue. As a result, the corresponding memory cell in the cache is freed. In one or more embodiments of the invention, removal of the cache block may trigger a write-back of a dirty value from the memory cell to its corresponding storage block on the storage device. In one or more embodiments of the invention, a new storage block is cached into the freed memory cell, and the corresponding new cache block is inserted at the beginning (i.e., top) of the probationary segment of the cache. To insert the new cache block at the beginning of the probationary segment, in one or more embodiments of the invention, all cache blocks before the position of the removed cache block (i.e., closer to the beginning of the buffer queue) are shifted toward the end of the buffer queue in order to fill the space left by the removed cache block.
In step 464, one or more cache blocks in the buffer queue are recycled. Recycling may refer to the backward movement of cache blocks within the buffer queue. In one or more embodiments of the invention, all cache blocks after the position of the removed cache block (i.e., closer to the end of the buffer queue) are recycled: to the beginning of the protected segment if they have n_b > 0 (i.e., received at least one cache hit during the current pass through the buffer queue), or to the beginning of the probationary segment if they have n_b = 0 (i.e., received zero cache hits during the current pass through the buffer queue).
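A minimal sketch of this recycling step follows, assuming the buffer queue is represented as a Python list ordered from head to tail, each block exposes a hits attribute (n_b for the current pass), and the segment start indices are known; these representation choices are assumptions, not the patent's data structures.

def recycle(queue, victim_pos, protected_start, probation_start):
    """Rebuild the queue after removing the victim at victim_pos.
    Assumes protected_start <= probation_start <= victim_pos."""
    tail = queue[victim_pos + 1:]            # blocks behind the removed victim
    kept = queue[:victim_pos]                # blocks ahead of the victim
    hit = [b for b in tail if b.hits > 0]    # n_b > 0: to the protected segment
    cold = [b for b in tail if b.hits == 0]  # n_b == 0: to the probationary segment
    return (kept[:protected_start] + hit +
            kept[protected_start:probation_start] + cold +
            kept[probation_start:])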
FIG. 5A shows an example buffer queue (599A) having a protected segment (520A), a probationary segment (530A), and a victim segment (540A) in accordance with one or more embodiments of the invention. The buffer queue also includes a set of old cache blocks (500A, 502A, 504A, 508A, 512A, 516A) and a set of new cache blocks (506A, 510A, 514A). In the example shown, the buffer queue (599A) has received a cache miss and must remove a cache block. To do so, a cache manager operatively connected to the cache (for example, the cache manager 140 of FIG. 1, as discussed above) calculates the estimated cache miss cost of each old cache block (512A, 516A) in the victim segment (540A).
The estimated cache miss cost of an old cache block is calculated by first computing the estimated access rate of each old cache block b as r_b = n_b / t_b, where n_b is the number of hits received by old cache block b during the current pass through the buffer queue and t_b is the time elapsed since old cache block b was inserted into the buffer queue. Based on the estimated access rate, the cache manager calculates the estimated cache miss cost as C_b,j = L_j * r_b, where L_j is the latency (in milliseconds) of the storage container j of old cache block b. Other time scales may be used without departing from the scope of the invention. In the victim segment, the estimated access rate of old cache block I (516A) is calculated as r_I = n_I / t_I = 4/1 = 4. The estimated cache miss cost of old cache block I (516A) is calculated as C_I,n = L_n * r_I = 3*4 = 12. A similar calculation is performed for old cache block G (512A), the only other old cache block in the victim segment.
Continuing this example, after detecting that new cache blocks (510A, 514A) exist in the victim segment (540A), the cache manager begins sequentially examining the new cache blocks in the victim segment, working backward from the end of the buffer queue. In this order, the first new cache block identified is new cache block H (514A). First, the cache manager determines whether new cache block H (514A) has made at least two passes through the probationary segment.
Since this condition is not satisfied (i.e., new cache block H (514A) is on its first pass, N_H = 1), the cache manager proceeds to calculate the estimated cache miss cost (C_m) of new cache block H (514A). The storage container (m) of new block H has an estimated access rate of 5 (R_m = 5). The cache manager calculates this number over the last time interval by tracking the fraction of "recycled" new cache blocks in storage container m that received zero cache hits during their first pass through the probationary segment and subsequently received at least one cache hit during their second pass. In this example, the detected fraction is 0.5. Using this fraction, the estimated access rate of new cache blocks from storage container m is calculated as R_m = P(A|B_m) / T_m = 0.5/0.1 = 5, where T_m is the average time (for example, in milliseconds) spent in the cache by a new cache block from storage container m before receiving a cache hit during its second pass through the probationary segment. Finally, the estimated cache miss cost of storage container m of new cache block H is calculated as C_m = L_m * R_m = 2*5 = 10, where L_m is the latency of storage container m, 2 milliseconds.
At this point, in one or more embodiments of the invention, the cache manager identifies the minimum estimated cache miss cost of the old cache blocks in the victim segment as 12 (old cache blocks G (512A) and I (516A) have equal estimated cache miss costs). The cache manager then checks whether new cache block H (514A) is on its first pass through the buffer queue and has an estimated cache miss cost lower than the minimum estimated cache miss cost among the old cache blocks in the victim segment (540A). Since both conditions are satisfied (N_H = 1 and C_m < 12), the cache manager proceeds to calculate the removal probability (P_m) of new cache block H (514A). To do so, the estimated cache miss costs of all storage containers having new cache blocks in the victim segment are calculated. The process described above is therefore performed for new cache block F (510A), the only remaining new cache block in the victim segment (540A); accordingly, the cache manager calculates all corresponding values for new cache block F (510A) and its storage container j (n_F = 0, N_F = 1, R_j = 2, L_j = 3, C_j = 6). Returning to the calculation of the removal probability (P_m) of new cache block H (514A), the scale factor is calculated as
P_jmin = ( Σ_{j∈V} C_jmin / C_j )^(-1)
where C_jmin is the minimum estimated cache miss cost among the storage containers having one or more new cache blocks in the victim segment. The scale factor is therefore calculated as P_jmin = 1 / (6/6 + 6/10) = 0.625. Using the scale factor, the removal probability (P_m) of new cache block H (514A) is calculated as P_m = P_jmin * (C_jmin / C_m) = 0.625 * (6/10) = 0.375.
Next, the cache manager generates a random number between zero and 1 from a uniform distribution. The random number is 0.533. The cache manager determines not to remove new cache block H (514A) from the buffer queue (599A), because the removal probability (P_m = 0.375) is not greater than or equal to the random number (0.533).
The cache manager then continues its sequential analysis of the new cache blocks in the buffer queue (599A) by determining whether new cache block F (510A) has made at least two passes through the probationary segment. Since this condition is not satisfied (i.e., new cache block F (510A) is on its first pass, N_F = 1), the cache manager proceeds to calculate the removal probability of new cache block F (510A) as P_j = P_jmin * (C_jmin / C_j) = 0.625 * (6/6) = 0.625. The cache manager determines that the removal probability of new cache block F (510A) (P_j = 0.625) is greater than the random number (0.533), and therefore removes new cache block F (510A) from the buffer queue (599A).
FIG. 5B shows an example buffer queue (599B) having a protected segment (520B), a probationary segment (530B), and a victim segment (540B) in accordance with one or more embodiments of the invention. This buffer queue includes a set of old cache blocks (500B, 502B, 504B, 508B, 512B, 516B) and a set of new cache blocks (518B, 506B, 514B). Continuing the example described above (with reference to FIG. 5A), FIG. 5B depicts the state of the buffer queue (599B) after new cache block J (518B) has been inserted at the top of the probationary segment (530B). Existing cache blocks in the probationary segment (530B) located before the removed cache block (i.e., closer to the beginning of the buffer queue (599B)) are shifted toward the end of the buffer queue (599B). The shifted cache blocks are new cache block D (506B) and old cache block E (508B). This depiction shows the buffer queue (599B) before the recycling operation, which is performed after the new block is inserted.
FIG. 5C shows an example buffer queue (599C) having a protected segment (520C), a probationary segment (530C), and a victim segment (540C) in accordance with one or more embodiments of the invention. The buffer queue includes a set of old cache blocks (500C, 502C, 504C, 508C, 512C, 516C) and a set of new cache blocks (518C, 506C, 514C). Continuing the example described above (with reference to FIGS. 5A and 5B), FIG. 5C depicts the state of the buffer queue (599C) after the recycling operation has been performed. After new cache block J (518C) is inserted, the existing cache blocks in the victim segment (540C) located after the removed cache block (i.e., closer to the end of the buffer queue (599C)) are recycled based on the number of hits they received during the current pass through the buffer queue (599C). Cache blocks that received zero cache hits during their current pass through the buffer queue (599C) are recycled to the top of the probationary segment (530C), and cache blocks that received at least one cache hit during their current pass are recycled to the top of the protected segment (520C). Accordingly, old cache block I (516C) is first recycled to the top of the protected segment, because n_I = 4 > 0. Next, new cache block H (514C) is recycled to the top of the probationary segment, because n_H = 0. Finally, old cache block G (512C) is recycled to the top of the protected segment, because n_G = 4 > 0. Whenever a cache block is recycled, the cache manager pushes existing cache blocks toward the end of the buffer queue in order to make space for the recycled block at the top of the probationary segment (530C) or the protected segment (520C).
Embodiments of the invention may be implemented on virtually any type of computer, regardless of the platform used. For example, as shown in FIG. 6, a computer system (600) includes one or more processors (602) (such as a central processing unit (CPU), integrated circuit, or hardware processor), associated memory (604) (for example, random access memory (RAM), cache memory, flash memory, etc.), a storage device (606) (for example, a hard disk, an optical drive such as a compact disc (CD) drive or digital video disc (DVD) drive, a flash memory stick, etc.), and numerous other elements and functionality typical of today's computers (not shown). The computer system (600) may also include input means, such as a keyboard (608), a mouse (610), or a microphone (not shown). Further, the computer system (600) may include output means, such as a monitor (612) (for example, a liquid crystal display (LCD), a plasma display, or a cathode ray tube (CRT) monitor). The computer system (600) may be connected to a network (614) (for example, a local area network (LAN), a wide area network (WAN) such as the Internet, or any other type of network) via a network interface connection (not shown). Those skilled in the art will appreciate that many different types of computer systems exist, and the aforementioned input and output means may take other forms. Generally speaking, the computer system (600) includes at least the minimal processing, input, and/or output means necessary to practice embodiments of the invention.
Further, in one or more embodiments of the invention, one or more elements of the aforementioned computer system (600) may be located at a remote location and connected to the other elements over a network. Further, embodiments of the invention may be implemented on a distributed system having a plurality of nodes, where each portion of the invention (for example, the cache manager (140), the cache (100), the storage device (110), etc.) may be located on a different node within the distributed system. In one embodiment of the invention, the node corresponds to a computer system. Alternatively, the node may correspond to a processor with associated physical memory. The node may alternatively correspond to a processor or micro-core of a processor with shared memory and/or resources. Further, software instructions in the form of computer-readable program code to perform embodiments of the invention may be stored, temporarily or permanently, on a non-transitory computer-readable storage medium, such as a compact disc (CD), a diskette, a tape, memory, or any other tangible computer-readable storage device.
One or more embodiments of the invention have one or more of the following advantages. By collecting container statistics for storage containers in a storage device, the cache miss cost may be estimated more accurately based on historical data of nearby storage blocks within a storage container.
One or more embodiments of the invention have one or more of the following advantages. By removing cached items from the cache probabilistically, the number of memory accesses may be reduced, along with the total cost associated with cache misses. Furthermore, probabilistic removal of cache entries may provide greater adaptability to workload changes.
The following sample data illustrates one or more advantages of one or more embodiments of the invention. In each of the following examples, the analytical cache replacement (ANCR) algorithm refers to the processes described by FIGS. 3, 4A, and 4B. Also in each example, the analytical cache replacement with shadow lists (ANCR-S) algorithm refers to the ANCR algorithm augmented with functionality to maintain an old-block shadow list and a new-block shadow list.
In each example, a cache simulator was used to compare the ANCR and ANCR-S algorithms with the least recently used (LRU), segmented least recently used (SLRU), 2Q, and adaptive replacement cache (ARC) algorithms. For a cache size of N blocks, the probationary segment size of SLRU and ANCR is N/2. The statistics collection window T of ANCR is set equal to N, and the victim segment size K is set equal to N/100. The size of the old shadow list of the ANCR-S algorithm is set to 25% of the cache size, and the size of the new-block shadow list is set to 75% of the cache size.
Single-container example
The first example addresses the simple scenario of using only one container. The purpose of this example is to demonstrate that the ANCR algorithm does not require multiple heterogeneous containers to exist in order to achieve a lower cache miss ratio than one or more existing cache replacement algorithms.
Continuing the first example, the first workload in this example consists of simulated TPC-C "New-Order" transactions. TPC-C is an industry-standard online transaction processing (OLTP) benchmark that simulates a complete computing environment in which a population of users executes transactions against a database. According to the TPC-C specification, the number of items accessed by each New-Order transaction is a random integer in the range [5, 15]. There are 100,000 items in the database, and the item number of each access is chosen by the following procedure. First, a random integer A is drawn from the uniform distribution on [1, 8191], and another integer B is drawn from the uniform distribution on [1, 100000]. These integers are then converted into binary format, and a third integer C is obtained by performing a bitwise logical OR on the corresponding bits of A and B. For example, if the first bit of A is 0 and the first bit of B is 1, then the first bit of C is 1. If the second bit of A is 1 and the second bit of B is 1, then the second bit of C is 1. If the third bit of A is 0 and the third bit of B is 0, then the third bit of C is 0, and so on. The final item number equals C modulo 100,000, plus 1. To abstract away details of TPC-C that are unimportant for this example, we assume that each item corresponds to one block of data; we therefore have a table of 100,000 blocks accessed with the probability distribution specified above.
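For concreteness, the item-number draw described above can be sketched as follows in Python (randint is inclusive on both ends):

import random

def draw_item_number():
    a = random.randint(1, 8191)     # uniform on [1, 8191]
    b = random.randint(1, 100000)   # uniform on [1, 100000]
    c = a | b                       # bitwise OR of corresponding bits
    return (c % 100000) + 1         # final item number in [1, 100000]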
Continuing the first example, the total number of transactions processed during a simulation run is 10N. For the first 8N transactions, the cache is warmed up; the last 2N transactions are then treated as the evaluation period, during which the cache miss ratio (the fraction of accessed TPC-C items that resulted in cache misses) is computed. Enough repetitions of each simulation run were performed so that the difference in cache miss ratio between any two algorithms would be statistically significant.
The results of this example for different values of the cache size N are presented in Table 1 (below). Two versions of the 2Q algorithm were evaluated: 2Q(0.5), in which the old queue is set to 0.5 of the cache size, and 2Q(0.95), in which the old queue is set to 0.95 of the cache size. The length of the old queue as a fraction of the cache size is the key parameter of the 2Q algorithm, and the results in Table 1 show that it greatly affects 2Q performance.
Algorithm N=5,000 N=10,000 N=20,000 N=40,000
LRU 0.581 0.407 0.227 0.085
2Q(0.5) 0.533 0.386 0.238 0.113
2Q(0.95) 0.488 0.315 0.165 0.059
SLRU 0.501 0.342 0.187 0.065
ARC 0.482 0.339 0.199 0.075
ANCR 0.453 0.306 0.164 0.057
ANCR-S 0.433 0.294 0.157 0.054
Table 1: Cache miss ratios for simulated TPC-C New-Order transactions
As shown in Table 1, the ANCR and ANCR-S algorithms consistently achieved the lowest cache miss ratios of the algorithms tested.
Multiple-container example
In the second example, the TPC-C item database is partitioned into 5 equal containers holding the following ranges of item numbers: 1-20000, 20001-40000, 40001-60000, 60001-80000, and 80001-100000. Different latencies are assigned to different containers to see how they affect the relative performance of the previously considered cache replacement algorithms. Access latencies in some exemplary storage devices range from 0.1 ms for a flash disk to 62.5 ms for a SATA disk at 84% load (with service rate μ = 100 IOPS, arrival rate λ = 84 IOPS, and latency 1/(μ-λ) = 0.0625 seconds). To cover this range of latencies, the latency of container j in this set of examples is 2^(5-j).
Algorithm N=5000 N=10000 N=20000 N=40000
LRU 11.4 16.2 18.3 12.9
2Q(0.5) 10.5 15.4 19.3 18.3
2Q(0.95) 7.7 10.1 10.8 7.6
SLRU 9.9 13.7 15.3 10.4
ARC 9.6 13.6 16.2 12.1
ANCR 6.5 9.0 10.7 7.2
ANCR-S 6.2 8.3 8.7 4.8
Table 2: Cache miss costs (in millions) for simulated TPC-C New-Order transactions when the item database is partitioned into containers with different latencies
Continuing the second example, the total cache miss cost is used as the metric for evaluating cache replacement algorithms in the presence of different container latencies. It is computed as the sum of the latencies incurred by all cache misses when the missed blocks are accessed on the storage device. The results in Table 2 (above) show that while the ordering of the cache replacement algorithms considered is the same as in Table 1, the differences between their cache miss costs are much larger, because the cost of missing blocks from different containers varies significantly. ANCR and ANCR-S explicitly estimate the miss cost of each block, and hence they can skew the distribution of cached blocks toward containers with higher latencies, while the other algorithms cannot.
Continuing the second example, note that in Table 2, column 2 has a larger cache miss cost than column 1 because the evaluation period equals 2N transactions, and hence more misses occurred during the evaluation period for N=10000 than for N=5000. Eventually, for N=40000, the cache becomes so large that it covers nearly all frequently accessed blocks; even though more transactions were processed during the evaluation period, the actual number of cache misses decreased greatly, which is why column 4 has a smaller cache miss cost than column 3.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having the benefit of this disclosure, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as disclosed herein. Accordingly, the scope of the invention should be limited only by the attached claims.

Claims (20)

1. A method for removing a cache block from a buffer queue, comprising:
detecting, by a processor, a first cache miss of the buffer queue;
identifying a new cache block in the buffer queue storing a value of a storage block;
calculating, by the processor, an estimated cache miss cost of a storage container comprising the storage block;
calculating, by the processor, a removal probability of the storage container based on a mathematical formula of the estimated cache miss cost;
randomly selecting a probability number from a uniform distribution, wherein the removal probability exceeds the probability number; and
removing, in response to the removal probability exceeding the probability number, the new cache block from the buffer queue.
2. The method of claim 1, further comprising:
detecting, after removing the new cache block, a second cache miss of the buffer queue;
calculating a non-removed estimated cache miss cost of a non-removed storage container corresponding to a non-removed new cache block;
calculating an estimated old cache miss cost of an old cache block, wherein the estimated old cache miss cost is less than the non-removed estimated cache miss cost; and
removing the old cache block from the buffer queue.
3. The method of claim 2, wherein calculating the estimated old cache miss cost comprises:
calculating, for the old cache block, an estimated access rate based on a number of cache hits received since being inserted into the buffer queue divided by a time elapsed since being inserted into the buffer queue; and
calculating the estimated old cache miss cost based on a latency of the non-removed storage container multiplied by the estimated access rate.
4. The method of claim 1, wherein calculating the estimated cache miss cost of the storage container comprises:
calculating, for the storage container, a fraction of a plurality of recycled new cache blocks receiving at least one cache hit during a second pass through a probationary segment, wherein the plurality of recycled new cache blocks received zero cache hits during a first pass through the probationary segment;
calculating an estimated access rate based on the fraction; and
calculating the estimated cache miss cost based on a product of the estimated access rate and a latency of the storage container.
5. The method of claim 1, wherein the new cache block is removed based on a minimum estimated cache miss cost of a plurality of old cache blocks in the buffer queue exceeding the estimated cache miss cost.
6. The method of claim 1, further comprising:
identifying, within the buffer queue, a probationary segment at the end of the buffer queue and a protected segment adjacent to the probationary segment, wherein the new cache block resides at a position within the probationary segment;
identifying, after removing the new cache block and within the probationary segment, an old cache block located after the position of the new cache block, wherein the old cache block has at least one accumulated cache hit during a current pass through the probationary segment;
identifying, after removing the new cache block and within the probationary segment, a non-removed new cache block located after the position of the new cache block, wherein the non-removed new cache block has zero accumulated cache hits;
recycling the old cache block to a beginning of the protected segment; and
recycling the non-removed new cache block to a beginning of the probationary segment.
7. The method of claim 6, further comprising:
inserting, after removing the new cache block, a cache block at the beginning of the probationary segment of the buffer queue, wherein the cache block comprises a value of a storage block accessed in response to the first cache miss.
8. The method of claim 1, wherein the mathematical formula represents the removal probability as a decreasing function of the estimated cache miss cost.
9. A computer-readable storage medium storing a plurality of instructions for removing a cache block from a buffer queue, the plurality of instructions comprising functionality to:
detect a first cache miss of the buffer queue;
identify a new cache block in the buffer queue storing a value of a storage block;
calculate an estimated cache miss cost of a storage container comprising the storage block;
calculate a removal probability of the storage container based on a mathematical formula of the estimated cache miss cost;
randomly select a probability number from a uniform distribution, wherein the removal probability exceeds the probability number; and
remove, in response to the removal probability exceeding the probability number, the new cache block from the buffer queue.
10. The computer-readable storage medium of claim 9, wherein the plurality of instructions further comprise functionality to:
detect, after removing the new cache block, a second cache miss of the buffer queue;
calculate a non-removed estimated cache miss cost of a non-removed storage container corresponding to a non-removed new cache block;
calculate an estimated old cache miss cost of an old cache block, wherein the estimated old cache miss cost is less than the non-removed estimated cache miss cost; and
remove the old cache block from the buffer queue.
11. The computer-readable storage medium of claim 10, wherein calculating the estimated old cache miss cost comprises:
calculating, for the old cache block, an estimated access rate based on a number of cache hits received since being inserted into the buffer queue divided by a time elapsed since being inserted into the buffer queue; and
calculating the estimated old cache miss cost based on a latency of the non-removed storage container multiplied by the estimated access rate.
12. The computer-readable storage medium of claim 9, wherein calculating the estimated cache miss cost of the storage container comprises:
calculating, for the storage container, a fraction of a plurality of recycled new cache blocks receiving at least one cache hit during a second pass through a probationary segment, wherein the plurality of recycled new cache blocks received zero cache hits during a first pass through the probationary segment;
calculating an estimated access rate based on the fraction; and
calculating the estimated cache miss cost based on a product of the estimated access rate and a latency of the storage container.
13. The computer-readable storage medium of claim 9, wherein the new cache block is removed based on a minimum estimated cache miss cost of a plurality of old cache blocks in the buffer queue exceeding the cache miss cost.
14. The computer-readable storage medium of claim 9, wherein the plurality of instructions further comprise functionality to:
identify, within the buffer queue, a probationary segment at the end of the buffer queue and a protected segment adjacent to the probationary segment, wherein the new cache block resides at a position within the probationary segment;
identify, after removing the new cache block and within the probationary segment, an old cache block located after the position of the new cache block, wherein the old cache block has at least one accumulated cache hit during a current pass through the probationary segment;
identify, after removing the new cache block and within the probationary segment, a non-removed new cache block located after the position of the new cache block, wherein the non-removed new cache block has zero accumulated cache hits;
recycle the old cache block to a beginning of the protected segment; and
recycle the non-removed new cache block to a beginning of the probationary segment.
15. A system for removing cache blocks, comprising:
a buffer queue, comprising:
a probationary segment at the end of the buffer queue, comprising a new cache block storing a value of a storage block, wherein the new cache block has zero accumulated cache hits since being inserted into the buffer queue, and
a protected segment adjacent to the probationary segment; and
a cache manager executing on a processor and comprising functionality to:
detect a first cache miss of the buffer queue;
identify the new cache block in the buffer queue;
calculate an estimated cache miss cost of a storage container comprising the storage block;
calculate a removal probability of the storage container based on a mathematical formula of the estimated cache miss cost;
randomly select a probability number from a uniform distribution, wherein the removal probability exceeds the probability number; and
remove, in response to the removal probability exceeding the probability number, the new cache block from the buffer queue.
16. The system of claim 15, wherein the cache manager is further configured to:
detect, after removing the new cache block, a second cache miss of the buffer queue;
calculate a non-removed estimated cache miss cost of a non-removed storage container corresponding to a non-removed new cache block;
calculate an estimated old cache miss cost of an old cache block, wherein the estimated old cache miss cost is less than the non-removed estimated cache miss cost; and
remove the old cache block from the buffer queue.
17. The system of claim 16, wherein calculating the estimated old cache miss cost comprises:
calculating, for the old cache block, an estimated access rate based on a number of cache hits received since being inserted into the buffer queue divided by a time elapsed since being inserted into the buffer queue; and
calculating the estimated old cache miss cost based on a latency of the non-removed storage container multiplied by the estimated access rate.
18. The system of claim 15, wherein calculating the estimated cache miss cost of the storage container comprises:
calculating, for the storage container, a fraction of a plurality of recycled new cache blocks receiving at least one cache hit during a second pass through the probationary segment, wherein the plurality of recycled new cache blocks received zero cache hits during a first pass through the probationary segment;
calculating an estimated access rate based on the fraction; and
calculating the estimated cache miss cost based on a product of the estimated access rate and a latency of the storage container.
19. The system of claim 15, wherein the cache queue further comprises:
a victim segment within the probationary segment, wherein the new cache block is located within the victim segment during a first pass.
20. The system of claim 19, wherein the cache manager is further configured to:
identify a position of the new cache block within the victim segment;
identify, after removing the new cache block and within the victim segment, an old cache block located after the position of the new cache block, wherein the old cache block has at least one accumulated cache hit during a current pass through the probationary segment;
identify, after removing the new cache block and within the probationary segment, a non-removed new cache block located after the position of the new cache block, wherein the non-removed new cache block has zero accumulated cache hits;
recycle the old cache block to the beginning of the protected segment; and
recycle the non-removed new cache block to the beginning of the probationary segment.
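Claims 19 and 20 describe the bookkeeping after an eviction from the victim segment: blocks behind the removed position are reshuffled, with once-hit old blocks promoted to the head of the protected segment and still-unhit new blocks recycled to the head of the probationary segment. A minimal sketch that models the segments as plain lists (the Block dataclass and the flat segment layout are simplifying assumptions; in the claims the victim segment is a tail region of the probationary segment):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Block:
    block_id: str
    accumulated_hits: int = 0  # hits during the current pass

def recycle_after_removal(victim_segment: List[Block],
                          probationary: List[Block],
                          protected: List[Block],
                          removed_index: int) -> None:
    """Reshuffle blocks located after the removed block's position."""
    trailing = victim_segment[removed_index:]
    del victim_segment[removed_index:]
    # Old blocks (>= 1 accumulated hit) go to the beginning of the protected segment.
    protected[:0] = [b for b in trailing if b.accumulated_hits >= 1]
    # Non-removed new blocks (zero hits) go to the beginning of the probationary segment.
    probationary[:0] = [b for b in trailing if b.accumulated_hits == 0]
```

Slice assignment preserves the blocks' relative order as they re-enter each segment.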
CN201180049892.9A 2010-08-31 2011-08-31 Method and system for removing cache blocks Active CN103168293B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US37878010P 2010-08-31 2010-08-31
US61/378,780 2010-08-31
US13/007,539 2011-01-14
US13/007,539 US8601216B2 (en) 2010-08-31 2011-01-14 Method and system for removing cache blocks
PCT/US2011/049871 WO2012030900A1 (en) 2010-08-31 2011-08-31 Method and system for removing cache blocks

Publications (2)

Publication Number Publication Date
CN103168293A true CN103168293A (en) 2013-06-19
CN103168293B CN103168293B (en) 2016-06-22

Family

ID=44653549

Family Applications (3)

Application Number Title Priority Date Filing Date
CN201180049892.9A Active CN103168293B (en) Method and system for removing cache blocks
CN201510301980.3A Active CN104850510B (en) Method and system for inserting cache blocks
CN201180049886.3A Active CN103154912B (en) Method and system for inserting cache blocks

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN201510301980.3A Active CN104850510B (en) Method and system for inserting cache blocks
CN201180049886.3A Active CN103154912B (en) Method and system for inserting cache blocks

Country Status (4)

Country Link
US (2) US8601217B2 (en)
EP (3) EP2746954B1 (en)
CN (3) CN103168293B (en)
WO (2) WO2012030903A2 (en)


Families Citing this family (79)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7707224B2 (en) 2006-11-03 2010-04-27 Google Inc. Blocking of unlicensed audio content in video files on a video hosting website
US8839275B1 (en) 2011-06-06 2014-09-16 Proximal Data, Inc. Method for intercepting input/output requests and responses
US8621157B2 (en) 2011-06-13 2013-12-31 Advanced Micro Devices, Inc. Cache prefetching from non-uniform memories
US8930330B1 (en) 2011-06-27 2015-01-06 Amazon Technologies, Inc. Validation of log formats
US8732401B2 (en) 2011-07-07 2014-05-20 Atlantis Computing, Inc. Method and apparatus for cache replacement using a catalog
WO2013114538A1 (en) * 2012-01-30 2013-08-08 富士通株式会社 Data management device, data management method, data management program, and information processing device
WO2013162614A1 (en) * 2012-04-27 2013-10-31 Hewlett-Packard Development Company, L.P. Collaborative caching
US9442859B1 (en) 2012-06-17 2016-09-13 Samsung Electronics Co., Ltd. Method for asynchronous population of data caches used with mass storage devices
US9104552B1 (en) 2012-06-23 2015-08-11 Samsung Electronics Co., Ltd. Method for the use of shadow ghost lists to prevent excessive wear on FLASH based cache devices
US8880806B2 (en) * 2012-07-27 2014-11-04 International Business Machines Corporation Randomized page weights for optimizing buffer pool page reuse
US9424202B2 (en) * 2012-11-19 2016-08-23 Smartfocus Holdings Limited Database search facility
US9317435B1 (en) 2012-12-18 2016-04-19 Netapp, Inc. System and method for an efficient cache warm-up
US9069472B2 (en) 2012-12-21 2015-06-30 Atlantis Computing, Inc. Method for dispersing and collating I/O's from virtual machines for parallelization of I/O access and redundancy of storing virtual machine data
US9277010B2 (en) 2012-12-21 2016-03-01 Atlantis Computing, Inc. Systems and apparatuses for aggregating nodes to form an aggregated virtual storage for a virtualized desktop environment
US10182128B1 (en) * 2013-02-07 2019-01-15 Amazon Technologies, Inc. Optimization of production systems
US9372865B2 (en) 2013-02-12 2016-06-21 Atlantis Computing, Inc. Deduplication metadata access in deduplication file system
US9250946B2 (en) 2013-02-12 2016-02-02 Atlantis Computing, Inc. Efficient provisioning of cloned virtual machine images using deduplication metadata
US9471590B2 (en) 2013-02-12 2016-10-18 Atlantis Computing, Inc. Method and apparatus for replicating virtual machine images using deduplication metadata
US9632944B2 (en) 2013-04-22 2017-04-25 Sap Se Enhanced transactional cache
US9477609B2 (en) 2013-04-22 2016-10-25 Sap Se Enhanced transactional cache with bulk operation
US9251003B1 (en) 2013-08-14 2016-02-02 Amazon Technologies, Inc. Database cache survivability across database failures
US9684686B1 (en) 2013-09-04 2017-06-20 Amazon Technologies, Inc. Database system recovery using non-volatile system memory
US9674087B2 (en) 2013-09-15 2017-06-06 Nicira, Inc. Performing a multi-stage lookup to classify packets
US9602398B2 (en) 2013-09-15 2017-03-21 Nicira, Inc. Dynamically generating flows with wildcard fields
US9552242B1 (en) 2013-09-25 2017-01-24 Amazon Technologies, Inc. Log-structured distributed storage using a single log sequence number space
US9465807B2 (en) 2013-10-18 2016-10-11 International Business Machines Corporation Management of file cache
US10089220B1 (en) 2013-11-01 2018-10-02 Amazon Technologies, Inc. Saving state information resulting from non-idempotent operations in non-volatile system memory
US9767015B1 (en) 2013-11-01 2017-09-19 Amazon Technologies, Inc. Enhanced operating system integrity using non-volatile system memory
US10387399B1 (en) 2013-11-01 2019-08-20 Amazon Technologies, Inc. Efficient database journaling using non-volatile system memory
US9760480B1 (en) 2013-11-01 2017-09-12 Amazon Technologies, Inc. Enhanced logging using non-volatile system memory
US9740606B1 (en) 2013-11-01 2017-08-22 Amazon Technologies, Inc. Reliable distributed messaging using non-volatile system memory
US20150127630A1 (en) * 2013-11-05 2015-05-07 Combustec Co., Ltd System and method for processing virtual interview using division content
US9996467B2 (en) * 2013-12-13 2018-06-12 Nicira, Inc. Dynamically adjusting the number of flows allowed in a flow table cache
US9569368B2 (en) 2013-12-13 2017-02-14 Nicira, Inc. Installing and managing flows in a flow table cache
US9686200B2 (en) 2014-03-31 2017-06-20 Nicira, Inc. Flow cache hierarchy
US9483179B2 (en) 2014-04-15 2016-11-01 International Business Machines Corporation Memory-area property storage including data fetch width indicator
US9513805B2 (en) * 2014-04-15 2016-12-06 International Business Machines Corporation Page table including data fetch width indicator
US9411735B2 (en) * 2014-04-15 2016-08-09 International Business Machines Corporation Counter-based wide fetch management
US10270876B2 (en) * 2014-06-02 2019-04-23 Verizon Digital Media Services Inc. Probability based caching and eviction
US10389697B1 (en) 2014-08-27 2019-08-20 Amazon Technologies, Inc. Software container activation and throttling
US11178051B2 (en) 2014-09-30 2021-11-16 Vmware, Inc. Packet key parser for flow-based forwarding elements
WO2016056217A1 (en) * 2014-10-07 2016-04-14 日本電気株式会社 Measuring apparatus, measuring system, measuring method, and program
US10187488B2 (en) * 2015-02-25 2019-01-22 Netapp, Inc. Methods for managing replacement in a distributed cache environment and devices thereof
US9866647B2 (en) * 2015-03-26 2018-01-09 Alcatel Lucent Hierarchical cost based caching for online media
US9954971B1 (en) * 2015-04-22 2018-04-24 Hazelcast, Inc. Cache eviction in a distributed computing system
CN104955075B * 2015-04-27 2018-10-26 哈尔滨工程大学 Delay-tolerant network cache management system and management method based on message fragmentation and node cooperation
CN104866602A (en) * 2015-06-01 2015-08-26 走遍世界(北京)信息技术有限公司 Queue processing method and device
US9842054B2 (en) * 2015-07-08 2017-12-12 Hon Hai Precision Industry Co., Ltd. Computing device and method for processing data in cache memory of the computing device
US11461010B2 (en) 2015-07-13 2022-10-04 Samsung Electronics Co., Ltd. Data property-based data placement in a nonvolatile memory device
US10509770B2 (en) 2015-07-13 2019-12-17 Samsung Electronics Co., Ltd. Heuristic interface for enabling a computer device to utilize data property-based data placement inside a nonvolatile memory device
US10282324B2 (en) 2015-07-13 2019-05-07 Samsung Electronics Co., Ltd. Smart I/O stream detection based on multiple attributes
GB2540761B (en) * 2015-07-23 2017-12-06 Advanced Risc Mach Ltd Cache usage estimation
CN105549905B * 2015-12-09 2018-06-01 上海理工大学 Method for multiple virtual machines to access a distributed object storage system
US9760493B1 (en) * 2016-03-14 2017-09-12 Vmware, Inc. System and methods of a CPU-efficient cache replacement algorithm
US10185668B2 (en) * 2016-04-08 2019-01-22 Qualcomm Incorporated Cost-aware cache replacement
US10628320B2 (en) * 2016-06-03 2020-04-21 Synopsys, Inc. Modulization of cache structure utilizing independent tag array and data array in microprocessor
US10318302B2 (en) 2016-06-03 2019-06-11 Synopsys, Inc. Thread switching in microprocessor without full save and restore of register file
US10558463B2 (en) 2016-06-03 2020-02-11 Synopsys, Inc. Communication between threads of multi-thread processor
US10613859B2 (en) 2016-08-18 2020-04-07 Synopsys, Inc. Triple-pass execution using a retire queue having a functional unit to independently execute long latency instructions and dependent instructions
US10552158B2 (en) 2016-08-18 2020-02-04 Synopsys, Inc. Reorder buffer scoreboard having multiple valid bits to indicate a location of data
CN106909518B (en) * 2017-01-24 2020-06-26 朗坤智慧科技股份有限公司 Real-time data caching mechanism
US10394719B2 (en) * 2017-01-25 2019-08-27 Samsung Electronics Co., Ltd. Refresh aware replacement policy for volatile memory cache
WO2019047050A1 (en) * 2017-09-06 2019-03-14 南通朗恒通信技术有限公司 Method and apparatus for use in low latency communication user equipment and base station
US10613764B2 (en) 2017-11-20 2020-04-07 Advanced Micro Devices, Inc. Speculative hint-triggered activation of pages in memory
CN110297719A * 2018-03-23 2019-10-01 北京京东尚科信息技术有限公司 Method and system for transmitting data based on a queue
CN109144431B (en) * 2018-09-30 2021-11-02 华中科技大学 Data block caching method, device, equipment and storage medium
CN109522501B (en) * 2018-11-26 2021-10-26 腾讯科技(深圳)有限公司 Page content management method and device
US10747594B1 (en) 2019-01-24 2020-08-18 Vmware, Inc. System and methods of zero-copy data path among user level processes
US11080189B2 2019-01-24 2021-08-03 Vmware, Inc. CPU-efficient cache replacement with two-phase eviction
US11714725B2 (en) * 2019-06-03 2023-08-01 University Of Central Florida Research Foundation, Inc. System and method for ultra-low overhead and recovery time for secure non-volatile memories
US11281594B2 (en) * 2020-02-22 2022-03-22 International Business Machines Corporation Maintaining ghost cache statistics for demoted data elements
US11550732B2 (en) * 2020-02-22 2023-01-10 International Business Machines Corporation Calculating and adjusting ghost cache size based on data access frequency
US11379380B2 (en) 2020-05-07 2022-07-05 Nxp Usa, Inc. Systems and methods for managing cache replacement
US10802762B1 (en) * 2020-06-08 2020-10-13 Open Drives LLC Systems and methods for asynchronous writing of synchronous write requests based on a dynamic write threshold
US11249660B2 (en) 2020-07-17 2022-02-15 Vmware, Inc. Low-latency shared memory channel across address spaces without system call overhead in a computing system
US11513832B2 (en) 2020-07-18 2022-11-29 Vmware, Inc. Low-latency shared memory channel across address spaces in a computing system
US11689545B2 (en) * 2021-01-16 2023-06-27 Vmware, Inc. Performing cybersecurity operations based on impact scores of computing events over a rolling time interval
CN117813594A (en) * 2021-09-14 2024-04-02 华为技术有限公司 Memory controller and method for controlling memory
CN114301851B (en) * 2022-01-20 2023-12-01 燕山大学 Industrial field-oriented time-sensitive network flow hierarchical scheduling method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6378043B1 (en) * 1998-12-31 2002-04-23 Oracle Corporation Reward based cache management
CN1869979A (en) * 2005-12-30 2006-11-29 华为技术有限公司 Buffer store management method
US20100082907A1 (en) * 2008-01-31 2010-04-01 Vinay Deolalikar System For And Method Of Data Cache Managment

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0731820B2 (en) * 1987-08-31 1995-04-10 三菱電機株式会社 Optical disk drive
US5043885A (en) * 1989-08-08 1991-08-27 International Business Machines Corporation Data cache using dynamic frequency based replacement and boundary criteria
US5381539A (en) 1992-06-04 1995-01-10 Emc Corporation System and method for dynamically controlling cache management
US5608890A (en) 1992-07-02 1997-03-04 International Business Machines Corporation Data set level cache optimization
US6385699B1 (en) * 1998-04-10 2002-05-07 International Business Machines Corporation Managing an object store based on object replacement penalties and reference probabilities
US6609177B1 (en) 1999-11-12 2003-08-19 Maxtor Corporation Method and apparatus for extending cache history
US6826599B1 (en) * 2000-06-15 2004-11-30 Cisco Technology, Inc. Method and apparatus for optimizing memory use in network caching
US6418510B1 (en) 2000-09-14 2002-07-09 International Business Machines Corporation Cooperative cache and rotational positioning optimization (RPO) scheme for a direct access storage device (DASD)
US6760812B1 (en) 2000-10-05 2004-07-06 International Business Machines Corporation System and method for coordinating state between networked caches
US6728837B2 (en) * 2001-11-02 2004-04-27 Hewlett-Packard Development Company, L.P. Adaptive data insertion for caching
US7143240B2 (en) * 2003-10-31 2006-11-28 International Business Machines Corporation System and method for providing a cost-adaptive cache
CN100521655C * 2006-12-22 2009-07-29 清华大学 Dynamic physical-queue sharing device based on flow queues
US7802057B2 (en) 2007-12-27 2010-09-21 Intel Corporation Priority aware selective cache allocation
US8250306B2 (en) * 2008-04-24 2012-08-21 International Business Machines Corporation Method for improving frequency-based caching algorithms by maintaining a stable history of evicted items


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104598478A (en) * 2013-11-01 2015-05-06 株式会社肯博思泰格 System and method for processing virtual interview by division content
CN106201918A * 2016-07-14 2016-12-07 合肥易立迅科技有限公司 Method and system for fast release of large data volumes and large-scale caches
CN106201918B * 2016-07-14 2019-02-12 合肥易立迅科技有限公司 Method and system for fast release of large data volumes and large-scale caches
CN110971962A (en) * 2019-11-30 2020-04-07 咪咕视讯科技有限公司 Slice caching method and device and storage medium

Also Published As

Publication number Publication date
WO2012030903A2 (en) 2012-03-08
CN104850510A (en) 2015-08-19
EP2612249A1 (en) 2013-07-10
CN103168293B (en) 2016-06-22
CN103154912B (en) 2015-12-16
US20120054447A1 (en) 2012-03-01
EP2612249B1 (en) 2017-10-11
WO2012030903A3 (en) 2012-07-12
US20120054445A1 (en) 2012-03-01
WO2012030900A1 (en) 2012-03-08
CN103154912A (en) 2013-06-12
CN104850510B (en) 2018-09-04
EP2746954B1 (en) 2018-08-08
EP2612250B1 (en) 2014-08-20
US8601217B2 (en) 2013-12-03
EP2746954A2 (en) 2014-06-25
EP2612250A2 (en) 2013-07-10
EP2746954A3 (en) 2014-10-15
US8601216B2 (en) 2013-12-03

Similar Documents

Publication Publication Date Title
CN103154912B Method and system for inserting cache blocks
US6807607B1 (en) Cache memory management system and method
Li et al. C-miner: Mining block correlations in storage systems.
US8402223B2 (en) Cache eviction using memory entry value
US8112586B1 (en) Predicting and optimizing I/O performance characteristics in a multi-level caching system
US10838870B2 (en) Aggregated write and caching operations based on predicted patterns of data transfer operations
US9158707B2 (en) Statistical cache promotion
US8533398B2 (en) Combination based LRU caching
CN110362776A Browser front-end data storage method, device, equipment, and readable storage medium
Ebrahimi et al. Rc-rnn: Reconfigurable cache architecture for storage systems using recurrent neural networks
Gawanmeh et al. Enhanced Not Recently Used Algorithm for Cache Memory Systems in Mobile Computing
Zhang et al. Efficient flash-aware page-mapping cache management for on-board remote sensing image processing
Zivkov et al. Disk caching in large database and timeshared systems
KR100236983B1 Method for buffer replacement based on a forecasting methodology using neural networks
Foong et al. Web caching: Locality of references revisited
Fan et al. An improved method of cache prefetching for small files in Ceph system
Li et al. Algorithm-Switching-Based Last-Level Cache Structure with Hybrid Main Memory Architecture
Burleson Creating a Self-Tuning Oracle Database: Automating Oracle9i Dynamic Sga Performance
Garcia-Molina et al. Data management with massive memory: A summary
US20200034305A1 (en) Cascading pre-filter to improve caching efficiency
Franaszek et al. Victim management in a cache hierarchy
Bae et al. Clustering and Non-clustering Effects in Flash Memory Databases
CN116755858A (en) Kafka data management method, device, computer equipment and storage medium
Upadhyaya Management of Large Scale Data
Narayana et al. Analysis of Computer Architecture & Emergence of Cache Memory Miss Penalty

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant