CN106469020A - Cache element and control method and its application system - Google Patents

Cache element and control method and its application system

Info

Publication number
CN106469020A
CN106469020A (application number CN201510511518.6A)
Authority
CN
China
Prior art keywords
cache
data
block
order
rank
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510511518.6A
Other languages
Chinese (zh)
Other versions
CN106469020B (en)
Inventor
林业峻
李祥邦
王成渊
杨佳玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Macronix International Co Ltd
Original Assignee
Macronix International Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Macronix International Co Ltd filed Critical Macronix International Co Ltd
Priority to CN201510511518.6A priority Critical patent/CN106469020B/en
Publication of CN106469020A publication Critical patent/CN106469020A/en
Application granted granted Critical
Publication of CN106469020B publication Critical patent/CN106469020B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a buffer cache device from which at least one application can obtain at least one piece of data. The cache device includes a first-level cache, a second-level cache, and a controller. The first-level cache receives and stores the data. The second-level cache has a memory cell structure different from that of the first-level cache. The controller writes the data stored in the first-level cache into the second-level cache.

Description

Cache element and control method and its application system
Technical field
Embodiments of the invention relate to a buffer cache device, a control method therefor, and a system applying the same, and in particular to a hybrid cache device with multi-level caching, a control method therefor, and a system applying the same.
Background technology
Caching is a technique in which data needed by applications is read from the bulk/main memory and temporarily copied into a rapidly-accessible storage medium located closer to the processing unit (PU) than the main memory. The processor can then read the data quickly from the cache instead of reading it from the main memory again, accelerating read and write operations and reducing the system's response and execution time.
Known caches usually use dynamic random access memory (DRAM) as the storage medium. However, DRAM is a volatile memory: stored data may vanish because of power loss or an unexpected system failure (a sudden system crash). To keep data stable, the data stored in the cache is generally written synchronously into the main memory, but this measure lowers the read and write efficiency of the processor.
To alleviate this problem, non-volatile memory is now used for caching. Phase change memory (PCM) has higher operating speed and endurance than flash memory and is one of the most promising non-volatile memories. However, the lifetime of PCM is shorter than that of DRAM, and, limited by its write-power requirements, it can write only a limited amount of data in parallel at a time, for example 32 bytes. This easily causes write latency, so PCM is not well suited to serve as a cache on its own.
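The write-latency issue above can be illustrated with a short, hypothetical sketch (the patent contains no code; the 32-byte limit is the example value from the text, and the function name is assumed): a large write must be split into many serialized device writes when only 32 bytes can be written in parallel.

```python
import math

PCM_MAX_PARALLEL_WRITE = 32  # bytes writable in parallel (example value from the text)

def pcm_write_ops(total_bytes):
    """Number of serialized device writes needed for a payload of total_bytes."""
    return math.ceil(total_bytes / PCM_MAX_PARALLEL_WRITE)
```

For instance, a 512-byte block needs 16 serialized writes, whereas DRAM could absorb it in far fewer, wider operations.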
Therefore, a more advanced cache device, control method, and application system are needed to address the problems encountered in the prior art.
Content of the invention
One aspect of the present invention provides a cache device from which at least one application can obtain at least one piece of data. The cache device includes a first-level cache, a second-level cache, and a controller. The first-level cache receives and stores the data. The second-level cache has a memory cell structure different from that of the first-level cache. The controller writes the data stored in the first-level cache into the second-level cache.
Another aspect of the present invention provides a control method for a cache device, where the cache device includes a first-level cache and a second-level cache having a memory cell structure different from that of the first-level cache. The control method includes the following steps: first, a piece of data is obtained through a first application and temporarily stored in the first-level cache; the data is then written into the second-level cache.
Another aspect of the present invention provides an embedded system. The embedded system includes a main storage element, a cache device, and a controller. The cache device includes a first-level cache that receives and stores data from at least one application, and a second-level cache having a memory cell structure different from that of the first-level cache. The controller writes the data stored in the first-level cache into the second-level cache, and afterwards writes the data stored in the second-level cache into the main storage.
According to the above, embodiments of the invention provide a hybrid cache device composed of multiple levels of memory, and an embedded system applying such a cache device. The hybrid cache device includes at least a first-level cache and a second-level cache having a memory cell structure different from that of the first-level cache. At least one piece of data obtained by at least one application is first stored in the first-level cache, and then, in a hierarchical write-back manner, the data stored in the first-level cache is written into the second-level cache. This solves the data-instability problem of the prior art, which uses dynamic random access memory alone as the cache storage medium.
In certain embodiments, sub-dirty block management is further applied to address the write-latency problem of known phase change memory caches caused by their limited parallel write capacity. In addition, a Least-Recently-Activated (LRA) data replacement policy can be adopted to improve the operating efficiency of the embedded system.
Brief description
To make the above and other objects, features, and advantages of the present invention clearer, several preferred embodiments are described in detail below with reference to the accompanying drawings:
Fig. 1 is a block diagram of an embedded system according to an embodiment of the invention;
Fig. 1' is a block diagram of an embedded system according to another embodiment of the invention;
Fig. 2 is a block diagram of the caching operation flow of the embedded system according to an embodiment of the invention;
Fig. 3 is a schematic diagram of the victim-selection flow of the Least-Recently-Activated policy according to an embodiment of the invention;
Fig. 4 is a schematic flow diagram of the background flush operation according to an embodiment of the invention;
Fig. 5 is a histogram of input/output response times obtained by simulating caching operations of an Android smartphone with different applications under different buffer cache models, according to an embodiment of the invention; and
Fig. 6 is a histogram of application run times obtained by simulating caching operations of an Android smartphone with different applications under different buffer cache models, according to an embodiment of the invention.
【Symbol description】
100: embedded system  100': embedded system
101: main storage element  102: cache device
102a: first-level cache  102b: second-level cache
102c: control unit  103: controller
104, app1, app2, app3: applications
105: virtual file system
106: driver
107A, 107B, block 1, block 2: blocks
107A0, 107B0: block dirty bits
107A1–16, 107B1–16: sub-block dirty bits
1A–16A, 1B–16B: sub-blocks
201: The I/O request is written into the second-level cache.
202: A data replacement policy selects a dirty block already written into the second-level cache, and the selected dirty block is written into the main storage element.
203: According to a flush command issued by the controller, the dirty blocks written into the second-level cache are written into the main storage element.
401: Monitor the number n of sub-dirty blocks stored in the second-level cache, the cache hit rate of the first-level cache, and the idle time of the data stored in the second-level cache.
402: When any one of the number of sub-dirty blocks, the cache hit rate, and the idle time exceeds its preset threshold, perform the background flush operation.
403: Upon receiving a demand request, stop the background flush immediately, complete the request first, and then resume monitoring.
501, 502, 503, 504, 505: normalized input/output response times
601, 602, 603, 604, 605: normalized application run times
I/O: input/output request  n: number of sub-dirty blocks
α: cache hit rate  t: idle time
Sn, Sα, St: preset thresholds  App ID: application identifier
Specific embodiment
The present invention provides a hybrid cache device, an embedded system applying such a cache device, and a control method, which can improve the data-instability and write-latency problems caused in the prior art by using dynamic random access memory or phase change memory alone as the cache storage medium. Hybrid cache devices composed of multiple levels of memory with at least two different memory cell structures, together with embedded systems applying such cache devices and the corresponding control methods, are described in detail below as preferred embodiments with reference to the accompanying drawings.
It must be noted, however, that these specific implementations and methods do not limit the present invention. The invention may also be practiced with other features, elements, methods, and parameters. The preferred embodiments are presented only to illustrate the technical features of the invention, not to limit the scope of its claims. Those of ordinary skill in the art will be able to make equivalent modifications and variations based on the following description without departing from the scope of the invention. Identical elements in different embodiments and drawings are denoted by the same reference numerals.
Referring to Fig. 1, Fig. 1 is a block diagram of an embedded system 100 according to an embodiment of the invention. The embedded system 100 includes a main storage element 101, a cache device 102, and a controller 103. In some embodiments of the invention, the main storage element 101 may be, but is not limited to, a flash memory. In other embodiments of the invention, the main storage element 101 may be a disk, an embedded Multi-Media Card (eMMC), a Solid State Disk (SSD), or another possible storage medium.
The cache device 102 includes a first-level cache 102a and a second-level cache 102b, where the second-level cache 102b has a memory cell structure different from that of the first-level cache 102a. In some embodiments of the invention, the first-level cache 102a may be a dynamic random access memory and the second-level cache 102b may be a phase change memory, but they are not limited thereto. For example, in other embodiments of the invention, the first-level cache 102a may be a phase change memory and the second-level cache 102b may be a dynamic random access memory.
In other words, it suffices that the memory cell structures of the first-level cache 102a and the second-level cache 102b differ. In certain embodiments, the first-level cache 102a and the second-level cache 102b may each be selected from Spin Transfer Torque Random Access Memory (STT-RAM), Magnetoresistive Random Access Memory (MRAM), Resistive Random Access Memory (ReRAM), or other possible storage media.
The controller 103 can obtain at least one piece of data from at least one application 104 provided in user space, via the virtual file system (VFS) 105, for example an input/output (I/O) request of an application 104, and store the I/O request in the first-level cache 102a. Moreover, the controller provides a hierarchical write-back method: the data stored in the first-level cache 102a is written into the second-level cache 102b; then, through the driver 106, the data stored in the second-level cache 102b is written into the main storage element 101.
In some embodiments of the invention, the controller 103 may be the processor in the host operating system (host machine) of the embedded system 100 (as depicted in Fig. 1). In some other embodiments of the invention, however, the controller 103 may also be a control unit 102c built into the cache device 102. Referring to Fig. 1', Fig. 1' is a block diagram of an embedded system 100' according to another embodiment of the invention. In this embodiment, the caching of I/O requests is controlled directly by the cache device 102, rather than by the controller 103 located in the host operating system of the embedded system 100'.
Referring to Fig. 2, Fig. 2 is a block diagram of the caching operation flow of the embedded system 100 according to an embodiment of the invention. In a preferred embodiment of the invention, the caching operation of the embedded system 100 follows the hierarchical write-back procedure provided by the controller 103, with the following steps: (1) dirty I/O requests are written from the first-level cache 102a into the second-level cache 102b (as depicted by arrow 201); (2) dirty I/O requests are written from the second-level cache 102b into the main storage element 101 (as depicted by arrow 202); and (3) a background flush writes the remaining dirty I/O requests into the main storage element 101 (as depicted by arrow 203 of Fig. 2).
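The three-step hierarchical write-back above can be sketched as a minimal data-flow model. This is an illustrative sketch only: the class and method names are assumed, and real hardware would operate on blocks rather than a Python dict.

```python
class HierarchicalCache:
    """Toy model of the L1 -> L2 -> main-storage write-back hierarchy."""

    def __init__(self):
        self.l1 = {}    # first-level cache (e.g. DRAM): addr -> data
        self.l2 = {}    # second-level cache (e.g. PCM): addr -> data
        self.main = {}  # main storage (e.g. flash): addr -> data

    def write(self, addr, data):
        # An I/O request first lands in the first-level cache.
        self.l1[addr] = data

    def writeback_l1_to_l2(self, addr):
        # Step (1), arrow 201: dirty data moves from L1 down to L2,
        # and the L1 slot is evicted.
        self.l2[addr] = self.l1.pop(addr)

    def writeback_l2_to_main(self, addr):
        # Steps (2)/(3), arrows 202/203: dirty data in L2 is written
        # back to main storage, on eviction or by a background flush.
        self.main[addr] = self.l2.pop(addr)
```

A request thus migrates one level at a time; at no point is it lost from the hierarchy, which is the stability property the patent aims at.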
In some embodiments of the invention, before the hierarchical write-back procedure is carried out, sub-dirty block management is applied to the data (for example, the I/O requests) stored in the first-level cache 102a and the second-level cache 102b. Sub-dirty block management comprises the following steps: first, the storage regions in the first-level cache 102a and the second-level cache 102b are divided into multiple sub-blocks, each sub-block holding part of the data stored in those caches. Then, whether the part of the data stored in each sub-block is dirty is identified and marked.
For example, take the first-level cache 102a, which has blocks 107A and 107B. Each block (such as block 107A or 107B) can be divided into 16 sub-blocks 1A–16A and 1B–16B. The granularity of each sub-block 1A–16A and 1B–16B is substantially equal to the maximum amount of data that can be written in parallel into the second-level cache 102b. In this embodiment, the size of each sub-block 1A–16A and 1B–16B is substantially equal to 32 bytes, i.e., the amount of data that can be written in parallel into the phase change memory, and each block 107A and 107B is 512 bytes.
In addition, each block 107A (or 107B) of the first-level cache 102a also includes a block dirty bit 107A0 (or 107B0), multiple sub-block dirty bits 107A1–16 (or 107B1–16), and an application identifier (App ID) identifying the I/O request stored in block 107A (or 107B). Each sub-block dirty bit 107A1–16 (or 107B1–16) corresponds to one sub-block 1A–16A (or 1B–16B) and marks whether the part of the I/O request stored in that sub-block is dirty; a sub-block storing a dirty part of an I/O request is marked as a sub-dirty block. The block dirty bit 107A0 or 107B0, in turn, marks whether its corresponding block 107A or 107B contains any sub-dirty block; a block containing a sub-dirty block is marked as a dirty block.
For example, in this embodiment, the sub-block dirty bits 107A1–16 and 107B1–16 each consist of 16 bits, corresponding respectively to sub-blocks 1A–16A and 1B–16B. Sub-block 3B, which stores a dirty part of an I/O request, is marked as a sub-dirty block by sub-block dirty bit 107B3 (indicated by the hatching on sub-block 3B). The block dirty bit 107A0 marks block 107A, which contains no sub-dirty block, as clean (denoted C); the block dirty bit 107B0 marks block 107B, which contains sub-dirty block 3B, as dirty (denoted D).
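The per-block metadata just described can be modeled in a few lines. This is a hedged sketch: the class and field names are illustrative, and the 16-sub-block layout (512-byte block / 32-byte sub-blocks) is the example configuration from the text.

```python
class CacheBlock:
    """Toy model of one cache block's metadata: App ID, 16 sub-block
    dirty bits, and a derived block-level dirty bit."""

    SUB_BLOCKS = 16  # 512-byte block divided into 32-byte sub-blocks

    def __init__(self, app_id=None):
        self.app_id = app_id                       # owning application (App ID)
        self.sub_dirty = [False] * self.SUB_BLOCKS # sub-block dirty bits

    def mark_sub_dirty(self, i):
        # Called when a write touches sub-block i.
        self.sub_dirty[i] = True

    @property
    def dirty(self):
        # The block dirty bit is set iff any sub-block is dirty
        # (block 107B is "D" because sub-block 3B is dirty).
        return any(self.sub_dirty)
```

Marking sub-block 3 dirty, as in the 3B example above, flips the block from clean (C) to dirty (D) while leaving the other 15 sub-blocks clean.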
Next, the dirty I/O request is written from the first-level cache 102a into the second-level cache 102b (as depicted by arrow 201). Since, of the I/O request stored in the first-level cache 102a, only the part stored in sub-dirty block 3B is dirty, only that part needs to be written into the second-level cache 102b. In this way, the I/O request originally stored in the volatile cache (the dynamic random access memory) is transferred into the non-volatile cache (the phase change memory).
Moreover, the size of sub-dirty block 3B is substantially equal to the maximum amount of data that can be written in parallel into the second-level cache 102b (the phase change memory). Writing only the dirty part of the I/O request stored in dirty block 107B into the second-level cache 102b therefore does not cause a write-latency problem. The goal of keeping cached data stable is thus achieved without affecting the response and run time of the cache device 102.
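The saving from flushing only dirty sub-blocks is easy to quantify in a sketch (function name assumed; 32 bytes per sub-block is the example value from the text): the bytes actually transferred to L2 scale with the number of dirty sub-blocks, not with the block size.

```python
SUB_BLOCK_BYTES = 32  # matches the PCM's maximum parallel write size

def bytes_to_write(sub_dirty_bits):
    """Bytes actually transferred to the second-level cache for one block,
    given its list of sub-block dirty bits."""
    return sum(sub_dirty_bits) * SUB_BLOCK_BYTES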
When multiple dirty blocks exist in the first-level cache 102a, different data replacement policies may be used according to the different needs of the embedded system 100, for example the Least-Recently-Activated policy, the CLOCK policy, the First-Come First-Served (FCFS) policy, or the Least-Recently-Used (LRU) policy, to decide the order in which dirty blocks such as 107B are written into the second-level cache 102b. In some embodiments of the invention, after dirty block 107B is written into the second-level cache 102b, the dirty block in the first-level cache 102a is further evicted, so that the I/O requests of other applications can be stored in that block.
In this embodiment, the Least-Recently-Activated policy is used to decide the order in which dirty blocks are written into the second-level cache 102b. The Least-Recently-Activated policy selects the dirty I/O request whose application was least recently set as the foreground program, writes it into the second-level cache 102b first, and evicts the dirty block storing that I/O request from the first-level cache 102a. Here, the foreground program refers to the program currently shown on the display of the device applying the embedded system 100, such as a smartphone.
For example, referring to Fig. 3, Fig. 3 is a schematic diagram of the victim-selection flow of the Least-Recently-Activated policy according to an embodiment of the invention. For brevity, assume that in this embodiment the first-level cache 102a of the embedded system 100 has only two blocks, block 1 and block 2, which store I/O requests coming from three applications app1, app2, and app3 (shown with different shadings). Each time one of the three applications is set as the foreground program, the controller 103 orders the blocks storing these applications' data according to the sequence in which they were activated. First in the sequence is the block storing the Most-Recently-Activated (MRA) application, and last is the block storing the Least-Recently-Activated (LRA) application, which is written into the second-level cache first and evicted from the first-level cache 102a (block 1 in this embodiment).
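The LRA victim selection above can be expressed as a small sketch. All names here are illustrative, not from the patent: blocks are mapped to their owning App ID, and the victim is the block whose application was foregrounded least recently.

```python
def choose_victim(blocks, activation_order):
    """Pick the LRA victim block.

    blocks: dict mapping block id -> owning application id.
    activation_order: application ids, most recently foregrounded first.
    Returns the block whose owner was activated least recently.
    """
    rank = {app: i for i, app in enumerate(activation_order)}
    # The largest rank value corresponds to the least recently
    # activated application; its block is evicted first.
    return max(blocks, key=lambda b: rank[blocks[b]])
```

In the Fig. 3 scenario, if app2 was foregrounded after app1, block 1 (owned by app1) is selected, matching the text.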
Referring again to Fig. 2, the caching operation of the embedded system 100 also writes the dirty data stored in the blocks of the second-level cache 102b (for example, the dirty parts of stored I/O requests) into the main storage element 101, and evicts the blocks of the second-level cache 102b storing that dirty data. In some embodiments of the invention, dirty I/O requests stored in the second-level cache 102b are written into the main storage element 101 in one of two ways. One is to use the aforementioned data replacement policies, for example the Least-Recently-Activated policy, the CLOCK policy, the First-Come First-Served policy, or the Least-Recently-Used policy, to select a dirty block 107B in the second-level cache 102b, write it into the main storage element 101, and evict the selected dirty block 107B (as depicted by arrow 202). The other is a background flush: according to a flush command issued by the controller 103, all dirty blocks 107B in the second-level cache 102b are written into the main storage element 101, after which all dirty blocks 107B in the second-level cache 102b are evicted (as depicted by arrow 203). The write and eviction operations carried out with the data replacement policies are as disclosed above and are not repeated here.
Referring to Fig. 4, Fig. 4 is a schematic flow diagram of the background flush operation according to an embodiment of the invention. During caching operations, the controller 103 monitors the number n of sub-dirty blocks (for example sub-dirty block 3B) stored in the second-level cache 102b, the cache hit rate α of the first-level cache 102a, and the idle time t of the second-level cache (as depicted in step 401). When any one of n, α, and t exceeds its preset threshold (n > Sn, α > Sα, or t > St), the controller 103 performs the background flush: all dirty blocks 107B in the second-level cache 102b are written into the main storage element 101, after which all dirty blocks 107B in the second-level cache 102b are evicted (as depicted in step 402).
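The trigger condition of steps 401–402 is a simple disjunction and can be sketched as a predicate. The threshold values below are placeholders (the patent does not specify Sn, Sα, or St); only the shape of the condition, n > Sn or α > Sα or t > St, comes from the text.

```python
def should_background_flush(n, hit_rate, idle_time,
                            s_n=64, s_alpha=0.9, s_t=5.0):
    """Return True when any monitored quantity exceeds its threshold:
    n        -- number of sub-dirty blocks in the second-level cache
    hit_rate -- cache hit rate (alpha) of the first-level cache
    idle_time-- idle time (t) of the second-level cache
    Thresholds s_n, s_alpha, s_t are assumed tunables (Sn, S-alpha, St)."""
    return n > s_n or hit_rate > s_alpha or idle_time > s_t
```

Any one exceeded threshold suffices, which matches the "thrin" (any-of-three) condition in the text; a demand request would preempt the flush as described in step 403.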
This is because, when the number n of sub-dirty blocks stored in the second-level cache 102b, the cache hit rate α of the first-level cache 102a, or the idle time t of the second-level cache 102b exceeds its preset threshold, the second-level cache 102b is in a relatively idle state, and the data stored in it is rarely being accessed by applications. Using this gap to write the rarely-accessed data into the main storage element 101 frees storage space in the second-level cache 102b without adding to the workload of the cache device 102.
It should also be noted that if, during a background flush, the controller 103 receives a demand request to access the data stored in the second-level cache 102b, the controller 103 stops the background flush procedure immediately, completes the demand request first, and then resumes monitoring the number n of sub-dirty blocks stored in the second-level cache 102b, the cache hit rate α of the first-level cache 102a, and the idle time t of the data stored in blocks 107A and 107B of the second-level cache 102b (as depicted in step 403).
Afterwards, the efficiency of the hybrid cache device 102 provided by the embodiment of the invention was compared with known cache storage elements by simulation. In one embodiment of the invention, a known Android smartphone was used as the comparison platform, and the simulation method includes the following steps. First, access traces of the Android smartphone before caching are collected, including the process ID, inode number, read/write/fsync/flush operations, I/O address, data size, timestamp, and so on. These access traces are then fed into a trace-driven buffer cache simulator, which simulates different cache devices paired with different buffer cache models to obtain access traces of the simulated caching operations. The simulated access traces are in turn fed into the Android smartphone as I/O workloads, to compare the efficiency of the Android smartphone when performing caching operations with different buffer cache models for different applications.
As depicted in Figs. 5 and 6, Fig. 5 is a histogram of the normalized input/output response times obtained from the simulation results according to an embodiment of the invention, for an Android smartphone performing caching-operation simulations with different applications under different buffer cache models. Fig. 5 comprises five groups of bar subsets, representing the simulation results obtained by the Android smartphone with the applications Browser, Facebook, Gmail, and Flipboard under the different buffer cache models, and their average (Average). Each group comprises five bars, 501, 502, 503, 504, and 505, representing respectively: a buffer cache model using dynamic random access memory alone as the cache storage medium (denoted DRAM); a buffer cache model using phase change memory alone as the cache storage medium (denoted PCM); a buffer cache model using the hybrid cache device 102 provided by this embodiment as the cache storage medium (denoted Hybrid); a buffer cache model using the hybrid cache device 102 as the cache together with sub-dirty block management (denoted Hybrid+Sub); and a buffer cache model using the hybrid cache device 102 as the cache together with sub-dirty block management and the background flush operation (denoted Hybrid+Sub+BG), each giving the normalized input/output response time obtained after the caching-operation simulation.
In this embodiment, the simulation results are normalized to the input/output response time obtained by simulating the buffer cache model using dynamic random access memory alone (DRAM). According to the simulation results depicted in Fig. 5, compared with the DRAM-only buffer cache model, the buffer cache model using the hybrid cache device 102 alone (Hybrid) reduces the average normalized input/output response time by 7%; the hybrid cache device 102 with sub-dirty block management (Hybrid+Sub) reduces it by 13%; and the hybrid cache device 102 with sub-dirty block management and the background flush operation (Hybrid+Sub+BG) reduces it by 20%. This shows that using the hybrid cache device 102 provided by this embodiment as the cache storage medium greatly reduces the input/output response time of the Android smartphone during caching operations.
Fig. 6 is a histogram of the application run times obtained by an Android smartphone performing caching-operation simulations with different applications under different buffer cache models, according to an embodiment of the invention. Fig. 6 comprises five groups of bars, representing the simulation results obtained by the Android smartphone with the applications Browser, Facebook, Gmail, and Flipboard under the different buffer cache models, and their average (Average). Each group comprises five bars, 601, 602, 603, 604, and 605, representing respectively the DRAM-only model (DRAM), the PCM-only model (PCM), the hybrid cache device 102 alone (Hybrid), the hybrid cache device 102 with sub-dirty block management (Hybrid+Sub), and the hybrid cache device 102 with sub-dirty block management and the background flush operation (Hybrid+Sub+BG), each giving the normalized application run time obtained after the caching-operation simulation.
In the present embodiment, the simulation results are normalized to the application run time obtained by simulating the DRAM-only buffer cache model (DRAM). From the results depicted in Fig. 6 it can be found that, compared with the buffer cache model using dynamic random access memory alone as the cache storage medium (DRAM), the model using the hybrid cache element 102 as the cache in combination with dirty sub-block write management and background flush operations (Hybrid+Sub+BG) reduces the average normalized run time by 12.5%. Compared with the buffer cache model using phase change memory alone as the cache storage medium (PCM), the model using the hybrid cache element 102 as the cache in combination with dirty sub-block write management (Hybrid+Sub) reduces the average normalized run time by 12.3%. This shows that using the hybrid cache element 102 provided by this embodiment as the cache storage medium greatly reduces the application run time of an Android smartphone.
In view of the above, embodiments of the invention provide a hybrid cache element constituting a multi-level memory cache, and an embedded system using such a cache element. The hybrid cache element includes at least a first-level cache and a second-level cache having a memory cell structure different from that of the first-level cache. At least one piece of data obtained by at least one application is first stored in the first-level cache, and is then written from the first-level cache into the second-level cache by hierarchical write-back. This solves the data instability problem of the prior art, in which dynamic random access memory alone is used as the cache storage medium.
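For illustration only (not part of the claimed subject matter), the hierarchical write-back described above can be sketched in Python; the class and method names are assumptions invented for this example:

```python
class TwoLevelBufferCache:
    """Minimal sketch of a hybrid buffer cache: data obtained by an
    application is first stored in the first-level cache and later
    written back, hierarchically, into the second-level cache."""

    def __init__(self):
        self.l1 = {}  # first-level cache, e.g. DRAM: fast but volatile
        self.l2 = {}  # second-level cache, e.g. PCM: different cell structure

    def store(self, key, data):
        # Data from the application always lands in the first level first.
        self.l1[key] = data

    def write_back(self):
        # Hierarchical write-back: copy first-level contents into the
        # second level, where the data is retained non-volatilely.
        self.l2.update(self.l1)


cache = TwoLevelBufferCache()
cache.store("page0", b"hello")
cache.write_back()  # "page0" now also resides in the second level
```

The sketch only captures the data flow; replacement and flush policies are treated separately below.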
In some preferred embodiments, a dynamic random access memory and a phase change memory may be used as the first-level cache and the second-level cache, respectively. Before the hierarchical write-back, dirty sub-block write management is first applied to the first-level cache, and during the hierarchical write-back a background flush is applied to the second-level cache. This solves the write latency problem of the prior art, in which phase change memory used alone as the cache storage medium suffers from insufficient parallel write bandwidth. In addition, a least-recently-activated data replacement policy may be adopted to improve the operating efficiency of the embedded system.
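As an illustrative sketch (the sub-block count, keys and function names are assumptions, not taken from the patent), the preferred write path can be shown end to end: writes mark dirty sub-blocks in DRAM, write-back copies only the dirty sub-blocks into PCM, and a background flush moves them to main memory:

```python
SUB_BLOCKS_PER_BLOCK = 4  # assumed value for illustration


class Block:
    def __init__(self):
        self.sub = [None] * SUB_BLOCKS_PER_BLOCK        # sub-block contents
        self.sub_dirty = [False] * SUB_BLOCKS_PER_BLOCK  # sub-dirty bits

    @property
    def dirty(self):
        # Block-level dirty bit: set if any sub-block is dirty.
        return any(self.sub_dirty)


dram = {}         # first-level cache: block_id -> Block
pcm = {}          # second-level cache: (block_id, sub_idx) -> data
main_memory = {}  # main memory element


def write(block_id, sub_idx, data):
    # Dirty sub-block write management: record which sub-block changed.
    blk = dram.setdefault(block_id, Block())
    blk.sub[sub_idx] = data
    blk.sub_dirty[sub_idx] = True


def write_back(block_id):
    # Only dirty sub-blocks are copied into PCM, limiting the amount of
    # data the write-bandwidth-limited PCM must absorb.
    blk = dram[block_id]
    for i, is_dirty in enumerate(blk.sub_dirty):
        if is_dirty:
            pcm[(block_id, i)] = blk.sub[i]


def background_flush():
    # Background flush: dirty sub-blocks in PCM are written to main memory.
    main_memory.update(pcm)
    pcm.clear()


write(0, 1, b"x")  # application writes one part of block 0 into DRAM
write_back(0)      # only the dirty sub-block (0, 1) reaches PCM
```

A clean sub-block never travels down the hierarchy, which is the point of the sub-dirty bookkeeping.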
Although the invention has been disclosed above by way of preferred embodiments, these are not intended to limit the invention. Any person having ordinary skill in the art may make slight changes and refinements without departing from the spirit and scope of the invention; the protection scope of the invention is therefore defined by the appended claims.

Claims (20)

1. A cache element (buffer cache device), through which an application (application) can obtain a first data, the cache element comprising:
a first-level cache, for receiving and storing the first data;
a second-level cache, having a memory cell structure different from that of the first-level cache; and
a controller, for writing the first data stored in the first-level cache into the second-level cache.
2. The cache element according to claim 1, wherein the first-level cache is a dynamic random access memory (Dynamic Random Access Memory, DRAM), and the second-level cache is a phase change memory (Phase Change Memory, PCM).
3. The cache element according to claim 1, wherein the first-level cache comprises a plurality of blocks (blocks), each of the blocks comprising:
a plurality of sub-blocks (sub-blocks), each for storing a portion of the first data;
a plurality of sub-dirty bits (sub-dirty bits), each corresponding to one of the sub-blocks, for indicating whether the corresponding sub-block stores at least one dirty (dirty) portion of the first data, a sub-block holding such a dirty portion being marked as a dirty sub-block; and
a dirty bit (dirty bit), for indicating whether the block contains a dirty sub-block.
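The block layout of claim 3 can be sketched as a small data structure (for illustration only; the field and method names, and the sub-block count of 8, are assumptions):

```python
from dataclasses import dataclass, field


@dataclass
class CacheBlock:
    """One first-level cache block: sub-blocks, one sub-dirty bit per
    sub-block, and a block-level dirty bit summarizing them."""
    sub_blocks: list = field(default_factory=lambda: [None] * 8)
    sub_dirty_bits: list = field(default_factory=lambda: [False] * 8)
    dirty_bit: bool = False

    def store_part(self, idx, part):
        self.sub_blocks[idx] = part
        self.sub_dirty_bits[idx] = True  # mark this sub-block dirty
        self.dirty_bit = True            # block now holds a dirty sub-block

    def dirty_sub_blocks(self):
        # Indices of sub-blocks that must be written down-level.
        return [i for i, d in enumerate(self.sub_dirty_bits) if d]


blk = CacheBlock()
blk.store_part(2, b"partial update")  # a partial write dirties one sub-block
```

The block-level dirty bit lets the controller skip entirely clean blocks without scanning their sub-dirty bits.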
4. The cache element according to claim 3, wherein each of the sub-blocks has a size (granularity) equal to a maximum amount of data that can be written into the second-level cache in parallel.
5. The cache element according to claim 3, wherein the controller can monitor a quantity of dirty sub-blocks stored in the second-level cache, a cache hit rate (hit rate) of the first-level cache, or an idle time (idle time) of the data stored in the second-level cache; and when any one of the quantity, the cache hit rate and the idle time exceeds a preset threshold, all the dirty sub-blocks stored in the second-level cache are written into a main memory element.
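The claim-5 trigger can be sketched as follows (illustrative only; the threshold values and function names are invented for the example, not taken from the patent):

```python
# Any one monitored metric exceeding its preset threshold triggers a
# flush of all dirty sub-blocks from the second-level cache to main memory.
THRESHOLDS = {"dirty_count": 64, "hit_rate": 0.9, "idle_time": 100.0}


def should_flush(dirty_count, hit_rate, idle_time):
    return (dirty_count > THRESHOLDS["dirty_count"]
            or hit_rate > THRESHOLDS["hit_rate"]
            or idle_time > THRESHOLDS["idle_time"])


def maybe_flush(pcm_dirty, main_memory, hit_rate, idle_time):
    """Write every dirty sub-block out to main memory if any metric
    exceeds its threshold; return whether a flush happened."""
    if should_flush(len(pcm_dirty), hit_rate, idle_time):
        main_memory.update(pcm_dirty)
        pcm_dirty.clear()
        return True
    return False
```

Combining three independent triggers lets the flush run when the cache is either full of dirty data or simply idle.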
6. The cache element according to claim 1, wherein the first-level cache can further receive and store a second data, and the controller adopts one of a clock (CLOCK) policy, a least-recently-used (Least-Recently-Used, LRU) policy, a first-come-first-served (First-Come, First-Served, FCFS) policy and a least-recently-activated (Least-Recently-Activated, LRA) policy to select the first data or the second data stored in the first-level cache to be written into the second-level cache, and then evicts (evict) the selected first data or second data, so as to allow a third data to be stored in the first-level cache.
7. The cache element according to claim 6, wherein the least-recently-activated (Least-Recently-Activated, LRA) policy selects the first data or the second data least recently accessed by a foreground application (foreground application).
8. The cache element according to claim 6, wherein the controller can adopt one of the clock policy, the least-recently-used policy, the first-come-first-served policy and the least-recently-activated policy to select the first data or the second data stored in the second-level cache to be written into a main memory element, and then evicts the selected first data or second data from the second-level cache.
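The pluggable eviction of claims 6 and 8 can be sketched as victim-selection functions handed to a common evict routine (illustrative only; FCFS and LRU are shown, and CLOCK or LRA would slot in the same way; all names are assumptions):

```python
def fcfs_victim(entries):
    # First-come, first-served: evict the oldest insertion
    # (Python dicts preserve insertion order).
    return next(iter(entries))


def lru_victim(entries):
    # Least-recently-used: evict the entry with the oldest access time.
    return min(entries, key=lambda k: entries[k]["last_access"])


def evict_one(level, lower_level, policy):
    """Select a victim with the given policy, write it one level down,
    then evict it from the current level."""
    victim = policy(level)
    lower_level[victim] = level.pop(victim)["data"]
    return victim


l1 = {"a": {"data": 1, "last_access": 5},
      "b": {"data": 2, "last_access": 3}}
l2 = {}
evict_one(l1, l2, lru_victim)  # "b" has the older access time
```

Keeping the policy as a parameter mirrors the claim's "one of" wording: the controller can swap policies without changing the write-back machinery.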
9. A control method of a cache element, wherein the cache element comprises a first-level cache and a second-level cache, the second-level cache having a memory cell structure different from that of the first-level cache, the control method comprising:
obtaining a first data through a first application and storing it in the first-level cache; and
writing the first data stored in the first-level cache into the second-level cache.
10. The control method of the cache element according to claim 9, wherein the first-level cache is a dynamic random access memory, and the second-level cache is a phase change memory.
11. The control method of the cache element according to claim 9, further comprising:
partitioning the first-level cache into a plurality of blocks, such that each of the blocks comprises:
a plurality of sub-blocks, each for storing a portion of the first data;
a plurality of sub-dirty bits, each corresponding to one of the sub-blocks, for indicating whether the corresponding sub-block stores at least one dirty portion of the first data, a sub-block holding such a dirty portion being marked as a dirty sub-block; and
a dirty bit, for indicating whether the block contains a dirty sub-block.
12. The control method of the cache element according to claim 11, wherein the step of writing the first data into the second-level cache comprises copying the dirty sub-block and storing it in the second-level cache.
13. The control method of the cache element according to claim 11, wherein each of the blocks has a size equal to a maximum amount of data that can be written into the second-level cache in parallel.
14. The control method of the cache element according to claim 11, further comprising:
monitoring a quantity of dirty sub-blocks stored in the second-level cache, a cache hit rate of the first-level cache, and an idle time of the first data stored in the second-level cache;
when any one of the quantity, the cache hit rate and the idle time exceeds a preset threshold, performing a background flush (background flush) operation to write all the dirty sub-blocks stored in the second-level cache into a main memory element; and
evicting, from the second-level cache, the block in which the dirty sub-blocks reside.
15. The control method of the cache element according to claim 14, further comprising:
upon receiving a demand request (demand request), suspending the background flush operation;
completing the demand request; and
resuming the monitoring of the quantity, the cache hit rate and the idle time.
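The interruptible background flush of claim 15 can be sketched as a loop that yields to demand requests between sub-blocks (illustrative only; the queue, callback and return convention are assumptions):

```python
from collections import deque


def background_flush(pcm_dirty, main_memory, demand_requests, serve):
    """Flush dirty sub-blocks one at a time; if a demand request arrives,
    suspend the flush, serve the request first, and report the flush
    as unfinished so it can be resumed later."""
    for key in list(pcm_dirty):
        if demand_requests:                   # a demand request arrived
            serve(demand_requests.popleft())  # serve it before flushing more
            return False                      # flush suspended
        main_memory[key] = pcm_dirty.pop(key)
    return True                               # all dirty sub-blocks written
```

Checking the request queue at sub-block granularity bounds how long a demand request can be delayed by the flush.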
16. The control method of the cache element according to claim 9, further comprising:
obtaining a second data through a second application and storing it in the first-level cache;
adopting one of a clock policy, a least-recently-used policy, a first-come-first-served policy and a least-recently-activated policy to select the first data or the second data stored in the first-level cache to be written into the second-level cache;
evicting the selected first data or second data from the first-level cache; and
obtaining a third data through a third application and storing it in the first-level cache.
17. The control method of the cache element according to claim 16, wherein the least-recently-activated policy selects the first data or the second data least recently accessed by a foreground application.
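The least-recently-activated selection of claims 7 and 17 can be sketched as follows (illustrative only; the logical-clock counter and function names are implementation assumptions): only foreground accesses advance a datum's activation time, and the victim is the datum a foreground application touched least recently.

```python
import itertools

_clock = itertools.count()  # logical time: one tick per access
activation = {}             # key -> time of last *foreground* activation


def access(key, foreground):
    t = next(_clock)
    if foreground:
        # Background accesses are deliberately ignored, so data kept
        # warm only by background work still ages out.
        activation[key] = t


def lra_victim():
    # Least-recently-activated: oldest foreground activation loses.
    return min(activation, key=activation.get)


access("a", foreground=True)
access("b", foreground=True)
access("a", foreground=True)  # "b" is now the least recently activated
```

This is what distinguishes LRA from LRU: a background refresh of "b" would keep it resident under LRU but not under LRA.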
18. The control method of the cache element according to claim 16, further comprising:
adopting one of the clock policy, the least-recently-used policy, the first-come-first-served policy and the least-recently-activated policy to select the first data or the second data stored in the second-level cache to be written into a main memory element; and
evicting the selected first data or second data from the second-level cache.
19. An embedded system (embedded system), comprising:
a main memory element; and
a cache element, comprising:
a first-level cache, for receiving and storing at least one data obtained by at least one application;
a second-level cache, having a memory cell structure different from that of the first-level cache; and
a controller, for writing the data stored in the first-level cache into the second-level cache, and thereafter writing the data stored in the second-level cache into the main memory element.
20. The embedded system according to claim 19, wherein the controller is built into the cache element.
CN201510511518.6A 2015-08-19 2015-08-19 Cache element and control method and its application system Active CN106469020B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510511518.6A CN106469020B (en) 2015-08-19 2015-08-19 Cache element and control method and its application system

Publications (2)

Publication Number Publication Date
CN106469020A true CN106469020A (en) 2017-03-01
CN106469020B CN106469020B (en) 2019-08-09

Family

ID=58214916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510511518.6A Active CN106469020B (en) 2015-08-19 2015-08-19 Cache element and control method and its application system

Country Status (1)

Country Link
CN (1) CN106469020B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113342260A (en) * 2020-03-02 2021-09-03 慧荣科技股份有限公司 Server and control method applied to same

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040103251A1 (en) * 2002-11-26 2004-05-27 Mitchell Alsup Microprocessor including a first level cache and a second level cache having different cache line sizes
CN101707881A (en) * 2007-05-29 2010-05-12 先进微装置公司 Caching of microcode emulation memory
CN101989183A (en) * 2010-10-15 2011-03-23 浙江大学 Method for realizing energy-saving storing of hybrid main storage
US20120166715A1 (en) * 2009-08-11 2012-06-28 Texas Memory Systems, Inc. Secure Flash-based Memory System with Fast Wipe Feature
CN103207799A (en) * 2013-04-23 2013-07-17 中国科学院微电子研究所 Computer system shutdown method, computer system startup method, computer system shutdown device and computer system startup device
CN103593324A (en) * 2013-11-12 2014-02-19 上海新储集成电路有限公司 Quick-start and low-power-consumption computer system-on-chip with self-learning function
CN103907096A (en) * 2011-11-01 2014-07-02 国际商业机器公司 Promotion of partial data segments in flash cache
CN104346290A (en) * 2013-08-08 2015-02-11 三星电子株式会社 Storage device, computer system and methods of operating same

Also Published As

Publication number Publication date
CN106469020B (en) 2019-08-09

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant