CN106469020B - Cache device, control method, and application system thereof - Google Patents

Cache device, control method, and application system thereof

Info

Publication number
CN106469020B
CN106469020B (granted from application CN201510511518.6A / CN201510511518A)
Authority
CN
China
Prior art keywords
cache
data
sub
block
rank
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510511518.6A
Other languages
Chinese (zh)
Other versions
CN106469020A (en)
Inventor
林业峻
李祥邦
王成渊
杨佳玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Macronix International Co Ltd
Original Assignee
Macronix International Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Macronix International Co Ltd filed Critical Macronix International Co Ltd
Priority to CN201510511518.6A priority Critical patent/CN106469020B/en
Publication of CN106469020A publication Critical patent/CN106469020A/en
Application granted granted Critical
Publication of CN106469020B publication Critical patent/CN106469020B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a buffer cache device that can receive at least one piece of data from at least one application. The buffer cache device includes a first-level cache, a second-level cache, and a controller. The first-level cache receives and stores the data. The second-level cache has a memory cell structure different from that of the first-level cache. The controller writes the data stored in the first-level cache into the second-level cache.

Description

Cache device, control method, and application system thereof
Technical field
Embodiments of the present invention relate to a buffer cache device, a control method thereof, and an application system thereof, and in particular to a hybrid buffer cache device with multi-level caching, its control method, and its application system.
Background art
Caching technology temporarily copies data that applications read from bulk/main memory into a rapidly-accessible storage medium located closer to the processing unit (PU). The processor can then read data quickly from the cache instead of reading it again from main memory, accelerating read and write operations and reducing system response and execution time.
Known caches typically use dynamic random access memory (DRAM) as the storage medium. However, DRAM is a volatile memory: stored data can be lost when power is cut or when an unexpected system crash occurs. To keep data stable, the data stored in the cache is generally written synchronously into main memory, but this measure lowers the processor's read and write efficiency.
To mitigate this problem, non-volatile memory is now used as cache. Phase change memory (PCM), which offers higher operating speed and endurance than flash memory, is one of the most promising non-volatile memories. However, PCM has a shorter lifetime than DRAM and, constrained by write-power requirements, can write only a limited amount of data in parallel at one time, for example 32 bytes. This easily causes write latency, making PCM unsuitable as a cache on its own.
A more advanced cache, control method, and application system are therefore needed to overcome the problems encountered by the known art.
Summary of the invention
One aspect of the invention provides a buffer cache device that can receive at least one piece of data from at least one application. The buffer cache device includes a first-level cache, a second-level cache, and a controller. The first-level cache receives and stores the data. The second-level cache has a memory cell structure different from that of the first-level cache. The controller writes the data stored in the first-level cache into the second-level cache.
Another aspect of the invention provides a control method for a buffer cache device that includes a first-level cache and a second-level cache whose memory cell structure differs from that of the first-level cache. The control method includes the following steps: first, obtain a piece of data from a first application and store it temporarily in the first-level cache; then write the data into the second-level cache.
Another aspect of the invention provides an embedded system that includes a main memory device, a buffer cache device, and a controller. The buffer cache device includes a first-level cache that receives and stores data from at least one application, and a second-level cache with a memory cell structure different from that of the first-level cache. The controller writes the data stored in the first-level cache into the second-level cache, and then writes the data stored in the second-level cache into the main memory.
According to the above, embodiments of the present invention provide a hybrid buffer cache device composed of multi-level memory caches, and an embedded system applying such a device. The hybrid buffer cache device includes at least a first-level cache and a second-level cache with a memory cell structure different from that of the first-level cache. At least one piece of data obtained from at least one application is first stored in the first-level cache and then, in a hierarchical write-back manner, the data stored in the first-level cache is written into the second-level cache. This solves the data-instability problem of the known art, which uses DRAM alone as the cache storage medium.
In some embodiments, sub-dirty block management further solves the write-latency problem of known PCM caches caused by their limited parallel-write capacity. In addition, a Least-Recently-Activated (LRA) data replacement policy can be used to improve the operating efficiency of the embedded system.
Brief description of the drawings
To make the above embodiments and other objects, features, and advantages of the present invention more comprehensible, several preferred embodiments are described in detail below with reference to the accompanying drawings:
Fig. 1 is a block diagram of an embedded system according to an embodiment of the present invention;
Fig. 1' is a block diagram of an embedded system according to another embodiment of the present invention;
Fig. 2 is a block diagram of the caching operation flow of an embedded system according to an embodiment of the present invention;
Fig. 3 is a flow diagram of the selection decision of the Least-Recently-Activated policy according to an embodiment of the present invention;
Fig. 4 is a flow diagram of the background flush operation according to an embodiment of the present invention;
Fig. 5 is a histogram of normalized input/output response times obtained by simulating caching operations of an Android smartphone running different applications under different buffer cache models, according to an embodiment of the present invention; and
Fig. 6 is a histogram of normalized application execution times obtained by simulating caching operations of an Android smartphone running different applications under different buffer cache models, according to an embodiment of the present invention.
[symbol description]
100: embedded system 100': embedded system
101: main memory device 102: buffer cache device
102a: first-level cache 102b: second-level cache
102c: control unit 103: controller
104, app1, app2, app3: applications
105: virtual file system (VFS)
106: driver
107A, 107B, block 1, block 2: blocks
107A0, 107B0: block dirty bits
107A1~16, 107B1~16: sub-dirty bits
1A~16A, 1B~16B: sub-blocks
201: write the input/output request into the second-level cache.
202: use a data replacement policy to select a dirty block already written into the second-level cache, and write the selected dirty block into the main memory device.
203: write the dirty blocks in the second-level cache into the main memory device according to a flush command issued by the controller.
401: monitor the number n of sub-dirty blocks stored in the second-level cache, the cache hit rate of the first-level cache, and the idle time of the data stored in the second-level cache.
402: perform the background flush operation when any one of the number of sub-dirty blocks, the cache hit rate, and the idle time exceeds its preset threshold.
403: on receiving a demand request, stop the background flush operation immediately, complete the demand request first, and then resume monitoring.
501, 502, 503, 504, 505: normalized input/output response time
601, 602, 603, 604, 605: normalized application execution time
I/O: input/output request n: number of sub-dirty blocks
α: cache hit rate t: idle time
Sn, Sα, St: preset thresholds App ID: application identifier
Detailed description of the embodiments
The present invention provides a hybrid buffer cache device, and an embedded system and control method applying such a device, which can improve the data-instability and write-latency problems caused by known approaches that use DRAM alone or phase change memory alone as the cache storage medium. Several hybrid buffer cache devices composed of multi-level memories with at least two different memory cell structures, together with embedded systems and control methods applying such devices, are described in detail below as preferred embodiments with reference to the accompanying drawings.
It must be noted, however, that these specific embodiments and methods are not intended to limit the invention. The invention may still be implemented with other features, elements, methods, and parameters. The preferred embodiments are presented only to illustrate the technical features of the invention, not to limit its claims. Those with ordinary skill in the art will be able to make equivalent modifications and variations according to the following description without departing from the spirit of the invention. In the different embodiments and drawings, identical elements are denoted by identical reference numerals.
Please refer to Fig. 1, a block diagram of an embedded system 100 according to an embodiment of the present invention. The embedded system 100 includes a main memory device 101, a buffer cache device 102, and a controller 103. In some embodiments of the invention, the main memory device 101 may be a flash memory, but is not limited thereto. In other embodiments of the invention, the main memory device 101 may be a disk, an embedded Multi-Media Card (eMMC), a Solid State Disk (SSD), or another possible storage medium.
The buffer cache device 102 includes a first-level cache 102a and a second-level cache 102b, where the second-level cache 102b has a memory cell structure different from that of the first-level cache 102a. In some embodiments of the invention, the first-level cache 102a may be DRAM and the second-level cache 102b may be phase change memory, but they are not limited thereto. In other embodiments of the invention, the first-level cache 102a may be phase change memory and the second-level cache 102b may be DRAM.
In other words, as long as the memory cell structures of the first-level cache 102a and the second-level cache 102b differ, in some embodiments each may be selected from Spin Transfer Torque Random Access Memory (STT-RAM), Magnetoresistive Random Access Memory (MRAM), Resistive Random Access Memory (ReRAM), or other possible storage media.
The controller 103 can obtain at least one piece of data from at least one application 104 in user space through the Virtual File System (VFS) 105, such as an input/output request I/O of an application 104, and store this input/output request I/O in the first-level cache 102a. It also provides a hierarchical write-back method: the data stored in the first-level cache 102a is written into the second-level cache 102b; then, through the driver 106, the data stored in the second-level cache 102b is written into the main memory device 101.
In some embodiments of the invention, the controller 103 may be a processor located in the host machine of the embedded system 100 (as depicted in Fig. 1). In other embodiments of the invention, the controller 103 may also be a control unit 102c built into the buffer cache device 102. Please refer to Fig. 1', a block diagram of an embedded system 100' according to another embodiment of the present invention. In this embodiment, the caching operations for the input/output request I/O are controlled directly by the buffer cache device 102 rather than by a controller 103 located in the host system of the embedded system 100'.
Please refer to Fig. 2, a block diagram of the caching operation flow of the embedded system 100 according to an embodiment of the present invention. In a preferred embodiment of the invention, the caching operations of the embedded system 100 carry out the hierarchical write-back procedure provided by the controller 103 in the following steps: (1) write the dirty input/output request I/O from the first-level cache 102a into the second-level cache 102b (as depicted by arrow 201); (2) write the dirty input/output request I/O from the second-level cache 102b into the main memory device 101 (as depicted by arrow 202); and (3) perform a background flush that writes dirty input/output requests I/O into the main memory device 101 (as depicted by arrow 203 in Fig. 2).
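The three-step hierarchical write-back above can be sketched as follows. This is a minimal illustrative sketch only; the class and function names (`SimpleStore`, `hierarchical_write_back`) are assumptions, not part of the patent.

```python
# Minimal sketch of the hierarchical write-back flow described above.
# All names are illustrative assumptions; the patent specifies no implementation.

class SimpleStore:
    """A trivial address -> bytes store standing in for a cache level."""
    def __init__(self):
        self.data = {}

    def write(self, addr, value):
        self.data[addr] = value

    def read(self, addr):
        return self.data.get(addr)

def hierarchical_write_back(l1, l2, main_memory, addr):
    """Step (1): dirty data moves L1 -> L2 (arrow 201);
    steps (2)/(3): L2 -> main memory (arrows 202/203)."""
    value = l1.read(addr)
    if value is not None:
        l2.write(addr, value)                    # L1 -> L2
        main_memory.write(addr, l2.read(addr))   # L2 -> main memory
        del l1.data[addr]                        # evict from L1 after write-back

l1, l2, mem = SimpleStore(), SimpleStore(), SimpleStore()
l1.write(0x10, b"payload")        # an application's I/O request lands in L1
hierarchical_write_back(l1, l2, mem, 0x10)
print(mem.read(0x10))             # -> b'payload'
```

In the actual device the two stages run independently (the L2-to-main-memory stage is driven by a replacement policy or a background flush); they are chained here only to show the data path.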
In some embodiments of the invention, before the hierarchical write-back procedure is carried out, sub-dirty block management is further applied to the data (such as input/output requests I/O) stored in the first-level cache 102a and the second-level cache 102b. Sub-dirty block management includes the following steps: first, divide the memory regions in the first-level cache 102a and the second-level cache 102b into multiple sub-blocks, so that each sub-block holds a part of the data stored in the first-level cache 102a and the second-level cache 102b. Then, identify and mark whether the part of the data stored in each sub-block is dirty.
Taking the first-level cache 102a as an example, it has blocks 107A and 107B, and each block (such as block 107A or 107B) can be divided into 16 sub-blocks 1A~16A and 1B~16B. The size (granularity) of each sub-block 1A~16A and 1B~16B is substantially equal to the maximum amount of data that can be written in parallel into the second-level cache 102b. In this embodiment, the size of each sub-block 1A~16A and 1B~16B is substantially equal to 32 bytes, the amount of data that phase change memory can record in parallel, and each block 107A and 107B is 512 bytes.
In addition, each block 107A (or 107B) of the first-level cache 102a further includes a block dirty bit 107A0 (or 107B0), multiple sub-dirty bits 107A1~16 (or 107B1~16), and an application identifier App ID used to identify the input/output request I/O stored in block 107A (or 107B). Each sub-dirty bit 107A1~16 (or 107B1~16) corresponds to a sub-block 1A~16A (or 1B~16B) and indicates whether the portion of the input/output request I/O stored in that sub-block is dirty; a sub-block storing a dirty portion of the input/output request I/O is marked as a sub-dirty block. The block dirty bits 107A0 and 107B0 indicate whether their corresponding blocks 107A or 107B contain any sub-dirty block; a block containing a sub-dirty block is marked as a dirty block.
For example, in this embodiment, the sub-dirty bits 107A1~16 and 107B1~16 respectively correspond to sub-blocks 1A~16A and 1B~16B. Sub-dirty bit 107B3 marks sub-block 3B, which stores a dirty portion of the input/output request I/O, as a sub-dirty block (indicated by the hatching drawn on sub-block 3B). Block dirty bit 107A0 marks block 107A, which contains no sub-dirty block, as clean (indicated by C); block dirty bit 107B0 marks block 107B, which contains sub-dirty block 3B, as dirty (indicated by D).
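The sub-dirty bookkeeping above can be sketched as a 512-byte block split into 16 sub-blocks of 32 bytes, with one sub-dirty bit per sub-block and a derived block-level dirty bit. The names here (`CacheBlock`, `sub_dirty`) are illustrative assumptions.

```python
# Illustrative sketch of sub-dirty block bookkeeping: 16 sub-blocks of
# 32 bytes each (matching the assumed PCM parallel-write granularity).
# Names are assumptions; the patent specifies no implementation.

SUB_BLOCK_SIZE = 32   # bytes per sub-block
SUB_BLOCKS = 16       # 16 * 32 = 512-byte block

class CacheBlock:
    def __init__(self, app_id):
        self.app_id = app_id                        # App ID of the stored I/O
        self.data = bytearray(SUB_BLOCK_SIZE * SUB_BLOCKS)
        self.sub_dirty = [False] * SUB_BLOCKS       # sub-dirty bits 1..16

    @property
    def dirty(self):
        # Block dirty bit: set iff any sub-block is dirty.
        return any(self.sub_dirty)

    def write(self, offset, payload):
        """Store part of an I/O request and mark the touched sub-blocks dirty."""
        self.data[offset:offset + len(payload)] = payload
        first = offset // SUB_BLOCK_SIZE
        last = (offset + len(payload) - 1) // SUB_BLOCK_SIZE
        for i in range(first, last + 1):
            self.sub_dirty[i] = True

blk = CacheBlock(app_id="app1")
blk.write(64, b"x" * 40)   # spans sub-blocks 2 and 3 (0-indexed)
print(blk.dirty, [i for i, d in enumerate(blk.sub_dirty) if d])  # -> True [2, 3]
```

A clean block reports `dirty == False` with no per-block flag to maintain, mirroring the rule that the block dirty bit is set exactly when at least one sub-dirty bit is set.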
Then, the dirty input/output request I/O is written from the first-level cache 102a into the second-level cache 102b (as depicted by arrow 201). Since the only dirty portion of the input/output request I/O stored in the first-level cache 102a is the part stored in sub-dirty block 3B, only the portion of the input/output request I/O stored in sub-dirty block 3B needs to be written into the second-level cache 102b. In this way, the input/output request I/O originally stored in the volatile cache (DRAM) is transferred into the non-volatile cache (phase change memory).
Moreover, the size of sub-dirty block 3B is substantially equal to the maximum amount of data that can be written in parallel into the second-level cache 102b (phase change memory). Writing the dirty portion of the input/output request I/O in dirty block 107B into the second-level cache 102b therefore causes no write-latency problem. The buffer cache device 102 can thus achieve stable data caching without affecting response and execution time.
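Writing back only the dirty sub-blocks, one 32-byte parallel write each, can be sketched as follows. The names (`flush_dirty_sub_blocks`, `pcm_write`) and the callback interface are illustrative assumptions.

```python
# Sketch of writing only the dirty sub-blocks of a block into the second-level
# cache, 32 bytes at a time (the assumed PCM parallel-write limit).
# Names are illustrative assumptions.

SUB = 32  # bytes per sub-block / assumed PCM parallel-write granularity

def flush_dirty_sub_blocks(block_data, sub_dirty, pcm_write):
    """Issue one 32-byte parallel write per dirty sub-block only; skip clean ones."""
    writes = 0
    for i, is_dirty in enumerate(sub_dirty):
        if is_dirty:
            pcm_write(i * SUB, block_data[i * SUB:(i + 1) * SUB])
            sub_dirty[i] = False   # clear the sub-dirty bit after write-back
            writes += 1
    return writes

pcm = {}                      # offset -> 32-byte chunk, standing in for PCM
data = bytearray(512)
dirty = [False] * 16
dirty[3] = True               # only sub-block 3B holds a dirty portion
n = flush_dirty_sub_blocks(
    data, dirty, lambda off, chunk: pcm.__setitem__(off, bytes(chunk)))
print(n, sorted(pcm))         # -> 1 [96]  (one 32-byte write at offset 3*32)
```

Flushing a 512-byte block with a single dirty sub-block costs one parallel write instead of sixteen, which is why the scheme avoids the PCM write-latency problem described above.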
When there are multiple dirty blocks in the first-level cache 102a, different data replacement policies can be used according to the needs of the embedded system 100, such as the Least-Recently-Activated policy, the CLOCK policy, the First-Come First-Served (FCFS) policy, or the Least-Recently-Used (LRU) policy, to determine the order in which the dirty blocks are written into the second-level cache 102b. In some embodiments of the invention, after dirty block 107B is written into the second-level cache 102b, it is further evicted from the first-level cache 102a so that input/output requests I/O of other applications can be stored in that block.
In this embodiment, the Least-Recently-Activated policy is used to determine the order in which dirty blocks are written into the second-level cache 102b. The so-called Least-Recently-Activated policy selects the dirty input/output request I/O of the program least recently brought to the foreground, writes it into the second-level cache 102b first, and evicts the dirty block storing that dirty input/output request I/O from the first-level cache 102a. Here, a foreground program refers to a program currently shown on the display of a device using the embedded system 100, such as a smartphone.
For example, please refer to Fig. 3, a flow diagram of the selection decision of the Least-Recently-Activated policy according to an embodiment of the present invention. For brevity, assume that the first-level cache 102a of the embedded system 100 in this embodiment has only two blocks, block 1 and block 2, which store input/output requests I/O from three applications app1, app2, and app3 (drawn with different shadings). Each time one of the three applications app1, app2, and app3 is brought to the foreground, the controller 103 orders the blocks storing these applications according to the sequence in which they were accessed. First in the sequence is the block storing the Most-Recently-Activated (MRA) application; last is the block storing the Least-Recently-Activated (LRA) application, which will be written first into the second-level cache and evicted from the first-level cache 102a (block 1 in this embodiment).
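The LRA selection decision above can be sketched as follows: applications are ranked by the order of their most recent foreground activation, and the victim is the block whose owning application was activated longest ago. The class and method names are illustrative assumptions.

```python
# Sketch of the Least-Recently-Activated (LRA) replacement decision described
# above. Names are illustrative assumptions; the patent specifies no code.

from collections import OrderedDict

class LRAPolicy:
    def __init__(self):
        # app id -> activation order; insertion order tracks recency,
        # oldest entry first, newest (MRA) last.
        self.activation = OrderedDict()

    def activate(self, app_id):
        """Record that app_id was just brought to the foreground (becomes MRA)."""
        self.activation.pop(app_id, None)
        self.activation[app_id] = True

    def victim(self, blocks):
        """blocks: dict of block name -> owning app id. Returns the block whose
        owner was least recently activated (never-activated apps rank oldest)."""
        rank = {app: i for i, app in enumerate(self.activation)}
        return min(blocks, key=lambda b: rank.get(blocks[b], -1))

lra = LRAPolicy()
for app in ("app1", "app2", "app3"):   # app1 was activated longest ago
    lra.activate(app)
print(lra.victim({"block1": "app1", "block2": "app2"}))  # -> block1
```

Unlike LRU, which ranks blocks by data accesses, this ranks them by when their owning application last came to the foreground, matching the policy's definition in the text.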
Then, referring again to Fig. 2, the caching operations of the embedded system 100 further include writing the dirty data stored in blocks of the second-level cache 102b (such as the dirty portions of the stored input/output requests I/O) into the main memory device 101, and emptying the blocks of the second-level cache 102b storing that dirty data. In some embodiments of the invention, the dirty input/output requests I/O stored in the second-level cache 102b are written into the main memory device 101 in one of two ways. One is to use the aforementioned data replacement policies, such as the Least-Recently-Activated policy, the CLOCK policy, the First-Come First-Served policy, or the Least-Recently-Used policy, to select a dirty block 107B, write it into the main memory device 101, and empty the selected dirty block 107B (as depicted by arrow 202). The other is a background flush: according to a flush command issued by the controller 103, all dirty blocks 107B in the second-level cache 102b are written into the main memory device 101, and then all dirty blocks 107B in the second-level cache 102b are emptied (as depicted by arrow 203). Since the write and empty operations carried out by the data replacement policies have been disclosed above, they are not repeated here.
Please refer to Fig. 4, a flow diagram of the background flush operation according to an embodiment of the present invention. During caching operations, the controller 103 monitors the number n of sub-dirty blocks (such as sub-dirty block 3B) stored in the second-level cache 102b, the cache hit rate α of the first-level cache 102a, and the idle time t of the data stored in the second-level cache (as depicted in step 401). When any one of the number of sub-dirty blocks, the cache hit rate, and the idle time exceeds its preset threshold (n > Sn, α > Sα, or t > St), the controller 103 performs the background flush operation, writing all dirty blocks 107B in the second-level cache 102b into the main memory device 101 and then emptying all dirty blocks 107B in the second-level cache 102b (as depicted in step 402).
This is because when the number n of sub-dirty blocks stored in the second-level cache 102b, the cache hit rate α of the first-level cache 102a, or the idle time t of the second-level cache 102b exceeds its preset threshold, the second-level cache 102b is in a relatively idle state, and the data stored in the second-level cache 102b is seldom accessed by applications. Using this gap to write the seldom-accessed data into the main memory device 101 and free storage space in the second-level cache 102b does not add to the workload of the buffer cache device 102.
It is worth noting that while the background flush is in progress, the controller 103 may receive another demand request accessing the data stored in the second-level cache 102b. In that case, the controller 103 immediately stops the background flush procedure, completes the demand request first, and then resumes monitoring the number n of sub-dirty blocks stored in the second-level cache 102b, the cache hit rate α of the first-level cache 102a, and the idle time t of the data stored in blocks 107A and 107B of the second-level cache 102b (as depicted in step 403).
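The flush trigger and its preemption by demand requests can be sketched as follows. The threshold values and all names are illustrative assumptions; the patent only states that each of n, α, and t is compared against a preset threshold.

```python
# Sketch of the background-flush decision described above: flush when any one
# of n > Sn, alpha > S_alpha, or t > S_t holds, and defer to demand requests.
# Threshold values and names are illustrative assumptions.

def should_background_flush(n_sub_dirty, hit_rate, idle_time,
                            s_n=64, s_alpha=0.9, s_t=5.0):
    """True when any one of the three monitored values exceeds its threshold."""
    return n_sub_dirty > s_n or hit_rate > s_alpha or idle_time > s_t

def background_flush_step(state, demand_pending):
    # A pending demand request preempts the background flush (step 403).
    if demand_pending():
        return "deferred"
    if should_background_flush(*state):
        return "flushed"   # step 402: write back and empty all dirty blocks
    return "idle"          # step 401: keep monitoring

print(background_flush_step((100, 0.5, 1.0), lambda: False))  # n > Sn -> flushed
print(background_flush_step((100, 0.5, 1.0), lambda: True))   # demand -> deferred
print(background_flush_step((10, 0.5, 1.0), lambda: False))   # below  -> idle
```

The three conditions are OR-ed because the text states the flush runs when any one of the quantities exceeds its preset threshold, not all three at once.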
The efficiency of the hybrid buffer cache device 102 provided by the embodiment of the present invention was then compared with that of known cache storage devices by simulation. In one embodiment of the invention, a known Android smartphone was used as the platform for the comparison. The simulation method includes the following steps: first, collect access traces from the Android smartphone before any caching is performed, including process ID, inode number, read/write/fsync/flush operations, I/O address, data size, timestamp, and so on. These access traces are then fed into a trace-driven buffer cache simulator, which emulates different cache devices under different buffer cache models to obtain the access traces of the emulated caching operations. The traces generated by the simulation are then fed back into the Android smartphone as I/O workloads, to compare the efficiency of the Android smartphone when performing caching operations for different applications under different buffer cache models.
The simulation results are depicted in Fig. 5 and Fig. 6. Fig. 5 is a histogram of normalized input/output response times obtained by simulating caching operations of an Android smartphone running different applications under different buffer cache models, according to an embodiment of the present invention. Fig. 5 contains five groups of bars (subsets), representing the simulation results of the Android smartphone running the applications Browser, Facebook, Gmail, and Flipboard under different buffer cache models, and their average (Average). Each group contains five bars 501, 502, 503, 504, and 505, representing the normalized input/output response times obtained after simulating caching operations with, respectively: a buffer cache model using DRAM alone as the cache storage medium (labeled DRAM); a buffer cache model using phase change memory alone (labeled PCM); a buffer cache model using the hybrid buffer cache device 102 provided by this embodiment alone (labeled Hybrid); the hybrid buffer cache device 102 combined with sub-dirty block management (labeled Hybrid+Sub); and the hybrid buffer cache device 102 combined with sub-dirty block management and the background flush operation (labeled Hybrid+Sub+BG).
In this embodiment, the simulation results are normalized to the input/output response time obtained with the DRAM-only buffer cache model (DRAM). According to the simulation results depicted in Fig. 5, compared to the DRAM-only model, the hybrid model (Hybrid) reduces the average normalized input/output response time by 7%; the hybrid model with sub-dirty block management (Hybrid+Sub) reduces it by 13%; and the hybrid model with sub-dirty block management and the background flush operation (Hybrid+Sub+BG) reduces it by 20%. This shows that using the hybrid buffer cache device 102 provided by this embodiment as the cache storage medium can greatly reduce the input/output response time of the Android smartphone's caching operations.
Fig. 6 is a histogram of normalized application execution times obtained by simulating caching operations of an Android smartphone running different applications under different buffer cache models, according to an embodiment of the present invention. Fig. 6 contains five groups of bars, representing the simulation results of the Android smartphone running the applications Browser, Facebook, Gmail, and Flipboard under different buffer cache models, and their average (Average). Each group contains five bars 601, 602, 603, 604, and 605, representing the normalized application execution times obtained after simulating caching operations with, respectively: the DRAM-only model (DRAM); the PCM-only model (PCM); the hybrid model (Hybrid); the hybrid model with sub-dirty block management (Hybrid+Sub); and the hybrid model with sub-dirty block management and the background flush operation (Hybrid+Sub+BG).
In the present embodiment, the simulation results are normalized against the application run times obtained by emulating the buffer cache model (DRAM) that uses dynamic random access memory alone. From the results depicted in Fig. 6, it can be seen that, compared with the buffer cache model (DRAM) using dynamic random access memory alone as the storage medium of the cache, using the hybrid cache element 102 as the cache together with not-updated sub-block write management and background refresh operations (Hybrid+Sub+BG) reduces the average normalized run time by 12.5%. Compared with the buffer cache model (PCM) using phase change memory alone as the storage medium of the cache, using the hybrid cache element 102 as the cache together with not-updated sub-block write management (Hybrid+Sub) reduces the average normalized run time by 12.3%. This shows that using the hybrid cache element 102 provided by the embodiments of the present disclosure as the storage medium of the cache can greatly reduce the application run time of an Android smartphone.
In summary, embodiments of the present invention provide a hybrid cache element composed of a multi-level memory cache, and an embedded system applying such a cache element. The hybrid cache element at least includes a first-level cache and a second-level cache having a memory cell structure different from that of the first-level cache. At least one data obtained by at least one application program is first stored in the first-level cache, and then, by way of hierarchical write-back, the data stored in the first-level cache are written into the second-level cache. This solves the data-instability problem of the prior art, in which dynamic random access memory alone is used as the storage medium of the cache.
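The hierarchical write-back described above can be sketched in a few lines of Python. This is only an illustration of the two-level flow (application data lands in the first-level cache; a controller later moves it down to the second-level cache); the class, method names and eviction trigger are hypothetical, not taken from the patent.

```python
# Illustrative sketch of a two-level (hierarchical) write-back buffer cache:
# the first level (e.g. DRAM) receives all incoming data, and a controller
# step writes its contents back into the second level (e.g. PCM).
# All names here are hypothetical.

class TwoLevelBufferCache:
    def __init__(self, l1_capacity):
        self.l1 = {}                    # first-level cache: key -> data
        self.l2 = {}                    # second-level cache: key -> data
        self.l1_capacity = l1_capacity

    def store(self, key, data):
        """An application's data is always received by the first level."""
        if len(self.l1) >= self.l1_capacity and key not in self.l1:
            self.write_back()           # make room by writing back to level 2
        self.l1[key] = data

    def write_back(self):
        """Controller step: move first-level contents down to the second level."""
        for key, data in self.l1.items():
            self.l2[key] = data
        self.l1.clear()

    def read(self, key):
        # Look in the faster first level first, then fall back to level 2.
        if key in self.l1:
            return self.l1[key]
        return self.l2.get(key)

cache = TwoLevelBufferCache(l1_capacity=2)
cache.store("a", b"page-a")
cache.store("b", b"page-b")
cache.store("c", b"page-c")   # triggers write-back of "a" and "b" to level 2
assert cache.read("a") == b"page-a"
```

In a real device the second level would in turn be written back to a main memory element, as the claims below describe; that third level is omitted here for brevity.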
In some preferred embodiments, a dynamic random access memory and a phase change memory can be used as the first-level cache and the second-level cache, respectively. Before the hierarchical write-back is carried out, not-updated sub-block write management is first performed on the first-level cache, and during the hierarchical write-back a background refresh is performed on the second-level cache. This solves the write-delay problem of the prior art, in which phase change memory alone is used as the storage medium of the cache and write delays arise because the data volume written in parallel is insufficient. In addition, a least-recently-activated data replacement policy can be adopted to further improve the operating efficiency of the embedded system.
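The not-updated sub-block bookkeeping and the threshold-triggered background flush can be sketched as follows. This is a hedged illustration only: the bit layout, the single quantity-based trigger, and all names are assumptions, whereas the claims also allow the cache hit rate or an idle time to serve as the trigger.

```python
# Hypothetical sketch of "not-updated sub-block" write management: each block
# keeps one indicator bit per sub-block plus a block-level bit, and a
# background flush writes all flagged sub-blocks to a main memory element
# once the monitored quantity exceeds a preset standard (threshold).

SUB_BLOCKS_PER_BLOCK = 4   # per the claims, a sub-block's size would equal the
                           # maximum parallel write width of the second-level cache

class Block:
    def __init__(self):
        self.sub_blocks = [None] * SUB_BLOCKS_PER_BLOCK
        self.sub_bits = [0] * SUB_BLOCKS_PER_BLOCK  # 1 = not-updated sub-block
        self.block_bit = 0                          # 1 = block holds such a sub-block

    def write_sub_block(self, index, data):
        self.sub_blocks[index] = data
        self.sub_bits[index] = 1
        self.block_bit = 1

def background_flush(blocks, main_memory, threshold):
    """Flush every flagged sub-block to main memory when their total count
    exceeds the preset standard; clear the indicator bits afterwards."""
    pending = sum(sum(b.sub_bits) for b in blocks)
    if pending <= threshold:
        return False
    for b in blocks:
        if not b.block_bit:
            continue                     # block bit lets us skip clean blocks
        for i, bit in enumerate(b.sub_bits):
            if bit:
                main_memory.append(b.sub_blocks[i])
                b.sub_bits[i] = 0
        b.block_bit = 0
    return True

blocks = [Block(), Block()]
blocks[0].write_sub_block(0, "d0")
blocks[1].write_sub_block(2, "d1")
mm = []
assert background_flush(blocks, mm, threshold=1) is True
assert mm == ["d0", "d1"]
assert blocks[0].block_bit == 0
```

The block-level bit is the design point worth noting: it lets the flush skip whole blocks without scanning their per-sub-block bits.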
Although the present invention has been disclosed above by way of preferred embodiments, these embodiments are not intended to limit the invention. Any person having ordinary skill in the art may make various changes and modifications without departing from the spirit and scope of the present invention; the protection scope of the invention shall therefore be defined by the appended claims.

Claims (12)

1. A cache element, capable of obtaining a first data from an application program, wherein the cache element comprises:
a first-level cache, for receiving and storing the first data;
a second-level cache, having a memory cell structure different from that of the first-level cache; and
a controller, for writing the first data stored in the first-level cache into the second-level cache;
wherein the first-level cache comprises a plurality of blocks, and each of the blocks comprises:
a plurality of sub-blocks, each of the sub-blocks being used to store a portion of the first data;
a plurality of sub-block indicator bits, each corresponding to one of the sub-blocks, for indicating whether the corresponding sub-block stores at least one non-updated section of the first data, a sub-block holding the non-updated section being marked as a not-updated sub-block; and
a block indicator bit, for indicating whether the block contains the not-updated sub-block;
wherein the controller monitors a quantity of the not-updated sub-blocks stored in the second-level cache, a cache hit rate of the first-level cache, or an idle time of the second-level cache, and when one of the quantity, the cache hit rate and the idle time is higher than a preset standard, writes all the not-updated sub-blocks stored in the second-level cache into a main memory element;
wherein the first-level cache is a dynamic random access memory, and the second-level cache is a phase change memory; and each of the sub-blocks has a size equal to a maximum amount of data that can be written into the second-level cache in parallel.
2. The cache element according to claim 1, wherein the first-level cache is operable to receive and store a second data, and the controller uses one of a clock policy, a least-recently-used policy, a first-come-first-served policy and a least-recently-activated policy to select the first data or the second data stored in the first-level cache to be written into the second-level cache, and then clears the selected first data or second data so as to allow a third data to be stored in the first-level cache.
3. The cache element according to claim 2, wherein the least-recently-activated policy selects the first data or the second data that has least recently been accessed by a foreground application program.
4. The cache element according to claim 2, wherein the controller uses one of the clock policy, the least-recently-used policy, the first-come-first-served policy and the least-recently-activated policy to select the first data or the second data stored in the second-level cache to be written into a main memory element, and then clears the selected first data or second data in the second-level cache.
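The victim selection among the four replacement policies named in claims 2 to 4 can be illustrated as follows. This is a sketch under stated assumptions: the entry fields and tie-breaking are invented, and the clock policy is omitted for brevity; only the least-recently-activated behavior (evicting the entry least recently accessed by a foreground application program, per claim 3) is taken from the claims.

```python
# Hedged sketch of replacement-policy victim selection: the controller picks a
# victim from the cache using least-recently-used (LRU), first-come-first-served
# (FIFO), or least-recently-activated. "Least-recently-activated" evicts the
# entry least recently accessed by a *foreground* application. The Entry fields
# are illustrative assumptions; the clock policy is omitted for brevity.

from dataclasses import dataclass

@dataclass
class Entry:
    key: str
    last_access: int = 0             # timestamp of any access (LRU)
    arrival: int = 0                 # insertion order (FIFO)
    last_foreground_access: int = 0  # timestamp of foreground-app accesses only

def pick_victim(entries, policy):
    if policy == "lru":
        return min(entries, key=lambda e: e.last_access)
    if policy == "fifo":
        return min(entries, key=lambda e: e.arrival)
    if policy == "least_recently_activated":
        return min(entries, key=lambda e: e.last_foreground_access)
    raise ValueError(f"unknown policy: {policy}")

entries = [
    Entry("a", last_access=9, arrival=0, last_foreground_access=5),
    Entry("b", last_access=3, arrival=1, last_foreground_access=8),
    Entry("c", last_access=7, arrival=2, last_foreground_access=1),
]
assert pick_victim(entries, "lru").key == "b"
assert pick_victim(entries, "fifo").key == "a"
assert pick_victim(entries, "least_recently_activated").key == "c"
```

Note how "c" is the least-recently-activated victim even though "b" is the LRU victim: background accesses bump `last_access` but not `last_foreground_access`, which is the distinction claim 3 draws.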
5. A control method of a cache element, wherein the cache element comprises a first-level cache and a second-level cache, the second-level cache having a memory cell structure different from that of the first-level cache, the control method comprising:
obtaining a first data from a first application program and storing the first data in the first-level cache;
writing the first data stored in the first-level cache into the second-level cache;
partitioning the first-level cache into a plurality of blocks, such that each of the blocks comprises:
a plurality of sub-blocks, each of the sub-blocks being used to store a portion of the first data;
a plurality of sub-block indicator bits, each corresponding to one of the sub-blocks, for indicating whether the corresponding sub-block stores at least one non-updated section of the first data, a sub-block holding the non-updated section being marked as a not-updated sub-block; and
a block indicator bit, for indicating whether the block contains the not-updated sub-block;
monitoring a quantity of the not-updated sub-blocks stored in the second-level cache, a cache hit rate of the first-level cache, and an idle time of the first data stored in the second-level cache;
when one of the quantity, the cache hit rate and the idle time is higher than a preset standard, performing a background refresh operation to write all the not-updated sub-blocks stored in the second-level cache into a main memory element; and
clearing, in the second-level cache, the blocks containing the not-updated sub-blocks;
wherein the first-level cache is a dynamic random access memory, and the second-level cache is a phase change memory; and each of the sub-blocks has a size equal to a maximum amount of data that can be written into the second-level cache in parallel.
6. The control method of the cache element according to claim 5, wherein the step of writing the first data into the second-level cache comprises copying the not-updated sub-blocks and storing them in the second-level cache.
7. The control method of the cache element according to claim 5, further comprising:
upon receiving an instruction request, stopping the background refresh operation;
completing the instruction request; and
monitoring the quantity, the cache hit rate and the idle time.
8. The control method of the cache element according to claim 5, further comprising:
obtaining a second data from a second application program and storing the second data in the first-level cache;
using one of a clock policy, a least-recently-used policy, a first-come-first-served policy and a least-recently-activated policy to select the first data or the second data stored in the first-level cache, and writing the selected data into the second-level cache;
clearing the selected first data or second data stored in the first-level cache; and
obtaining a third data from a third application program and storing the third data in the first-level cache.
9. The control method of the cache element according to claim 8, wherein the least-recently-activated policy selects the first data or the second data that has least recently been accessed by a foreground application program.
10. The control method of the cache element according to claim 8, further comprising:
using one of the clock policy, the least-recently-used policy, the first-come-first-served policy and the least-recently-activated policy to select the first data or the second data stored in the second-level cache, and writing the selected data into a main memory element; and
clearing the selected first data or second data stored in the second-level cache.
11. An embedded system, comprising:
a main memory element; and
a cache element, comprising:
a first-level cache, for receiving and storing at least one first data from at least one application program;
a second-level cache, having a memory cell structure different from that of the first-level cache; and
a controller, for writing the first data stored in the first-level cache into the second-level cache, and then writing the first data stored in the second-level cache into the main memory element;
wherein the first-level cache comprises a plurality of blocks, and each of the blocks comprises:
a plurality of sub-blocks, each of the sub-blocks being used to store a portion of the first data;
a plurality of sub-block indicator bits, each corresponding to one of the sub-blocks, for indicating whether the corresponding sub-block stores at least one non-updated section of the first data, a sub-block holding the non-updated section being marked as a not-updated sub-block; and
a block indicator bit, for indicating whether the block contains the not-updated sub-block;
wherein the controller monitors a quantity of the not-updated sub-blocks stored in the second-level cache, a cache hit rate of the first-level cache, or an idle time of the second-level cache, and when one of the quantity, the cache hit rate and the idle time is higher than a preset standard, writes all the not-updated sub-blocks stored in the second-level cache into the main memory element;
wherein the first-level cache is a dynamic random access memory, and the second-level cache is a phase change memory; and each of the sub-blocks has a size equal to a maximum amount of data that can be written into the second-level cache in parallel.
12. The embedded system according to claim 11, wherein the controller is built into the cache element.
CN201510511518.6A 2015-08-19 2015-08-19 Cache element and control method and its application system Active CN106469020B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510511518.6A CN106469020B (en) 2015-08-19 2015-08-19 Cache element and control method and its application system

Publications (2)

Publication Number Publication Date
CN106469020A CN106469020A (en) 2017-03-01
CN106469020B true CN106469020B (en) 2019-08-09

Family

ID=58214916

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant