CN104899154A - Page management method based on embedded system mixed main memory - Google Patents

Page management method based on embedded system mixed main memory

Info

Publication number
CN104899154A
Authority
CN
China
Prior art date
Legal status
Granted
Application number
CN201510315621.3A
Other languages
Chinese (zh)
Other versions
CN104899154B (en
Inventor
蔡晓军
孙志文
贾智平
鞠雷
Current Assignee
Shandong University
Original Assignee
Shandong University
Priority date
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN201510315621.3A priority Critical patent/CN104899154B/en
Publication of CN104899154A publication Critical patent/CN104899154A/en
Application granted granted Critical
Publication of CN104899154B publication Critical patent/CN104899154B/en
Status: Active


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 — Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 — Addressing or allocation; Relocation
    • G06F 16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 — Details of database functions independent of the retrieved data types
    • G06F 16/95 — Retrieval from the web
    • G06F 16/958 — Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking


Abstract

The invention discloses a page management method for a hybrid main memory in an embedded system, the hybrid main memory being a PCM/DRAM hybrid main memory. When the CPU of the embedded system issues a page access request and the requested data or instruction is not in the cache, main memory is accessed and the page management method is executed. The method comprises: building a CLOCK list of the pages resident in the hybrid main memory, and an LRU list whose stored entries are the metadata of pages evicted from the CLOCK list; determining whether the requested page is stored in the hybrid main memory of the embedded system; if it is, accessing the CLOCK list and, according to the type of the page within it, either changing the page's flag bits or performing a page migration operation; if it is not, obtaining a free page as storage space for the accessed page, accessing the LRU list, and invoking the page insertion algorithm to insert the accessed page into the hybrid main memory.

Description

Page management method based on an embedded-system hybrid main memory
Technical field
The present invention relates to page management methods, and in particular to a page management method for a hybrid main memory in an embedded system.
Background technology
In the big-data era, with the emergence and spread of multi-core systems and new applications, the computer-system model has gradually evolved from compute-driven to data-driven. Large-capacity memory is therefore key to the performance of the whole computer system. However, the density of traditional DRAM-based memory is hard to scale: research indicates that the DRAM process cannot shrink below 22 nm. In addition, DRAM's working mechanism requires it to be refreshed at a fixed interval (2 ms). These extra refresh operations are independent of the storage of data, yet in most applications they account for more than 70% of total DRAM energy consumption. As a result, the energy consumed by a traditional DRAM main memory accounts for at least 40% of that of the whole computer system.
An embedded system is a computer system oriented to a specific application, with tailorable software and hardware. Owing to this intrinsic nature, it places strict requirements on memory area and energy consumption. The low density and high power consumption of traditional DRAM therefore severely limit the use of large-capacity DRAM memory in embedded systems. The emergence of phase change memory (PCM) provides a new opportunity for large-capacity main memory in embedded systems.
PCM is a non-volatile memory that uses a chalcogenide (e.g., Ge2Sb2Te5 or Ge4Sb1Te5) as its storage medium; after a system power failure, the data inside its storage cells are not lost. The chalcogenide is a phase-change material that assumes a crystalline or an amorphous state under different voltages: the crystalline state has a lower resistance and the amorphous state a higher one. This property can be used to store the binary 1 and 0 of a computer system. Furthermore, to increase PCM density and reduce chip area, industry exploits intermediate resistance states between the high- and low-resistance states of the phase-change material to store multiple binary digits in one memory cell. Such a PCM cell is called a multi-level cell (MLC); correspondingly, a single-level cell (SLC) stores only one bit. Because MLC behavior is complex and its write speed is much slower than SLC, current research focuses on SLC. Compared with traditional DRAM, PCM as main memory has the following advantages:
(1) High density: unlike traditional DRAM based on capacitor arrays, PCM uses a phase-change material as its storage medium, so the distance between its storage cells can be made very small. Research indicates that PCM density can break through 16 nm in the near future. As early as 2012, Samsung had developed an 8 GB phase change memory at the 20 nm node, and the cell density of MLC PCM can be even higher.
(2) Low energy consumption: since PCM stores data as resistance differences, it needs none of the refresh operations of traditional DRAM, which greatly reduces memory energy consumption. In addition, PCM contains no mechanical drive mechanism, further reducing energy use.
(3) Non-volatility: the contents of PCM storage cells remain unchanged after a system power failure. With PCM as main memory, the system can therefore resume its pre-power-down state and continue executing when power returns, effectively reducing system start-up time and ensuring the safety and consistency of data.
(4) Interference resistance: traditional DRAM uses capacitors as storage cells, and capacitors have a breakdown-voltage problem: if the voltage exceeds the rated value, the capacitor is damaged. No such problem exists in PCM.
Given the above advantages of PCM, it is a good candidate for a large-capacity embedded main memory. However, current PCM still has shortcomings, mainly in the following respects:
(1) Limited write endurance: after a certain amount of data has been written, a PCM storage cell becomes unstable, so that the data read out differ from the data written. Under normal circumstances the write-count limit of a PCM storage cell is approximately 10^7-10^8 writes.
(2) Long write latency: the state change of the crystalline material must be sustained for a relatively long time, so writing data to a PCM cell takes longer than with traditional DRAM.
(3) High dynamic write energy: PCM changes the state of the crystalline material by heavy-current heating followed by rapid quenching, so the power consumption of a write is much higher than that of a read. Writing data to a PCM cell consumes more energy than with traditional DRAM.
The following table compares various parameters of PCM, DRAM and NAND Flash:
To exploit the advantages of PCM and DRAM while effectively avoiding their respective shortcomings, academia has proposed the PCM/DRAM Hybrid Main Memory Architecture. In this architecture, PCM and DRAM are arranged in two side-by-side linear address spaces. The operating system can distinguish the two spaces and arrange for most write operations to complete in DRAM and for read operations to complete in PCM. The architecture thus combines DRAM's low write latency and dynamic power with PCM's high density and low static power, while avoiding the weaknesses of both.
In this architecture, to avoid excessive writes to PCM, a page migration strategy must be added on top of traditional operating-system page management: frequently written (write-hot) pages located in PCM are migrated to DRAM, and rarely written pages in DRAM are migrated into PCM so as to use the DRAM space more efficiently. However, predicting the future write behavior of a page is very difficult. Although academia has produced much research on page management for hybrid main memory in recent years, the following shortcomings are common:
(1) Frequently read pages are always placed and kept in PCM: in write-intensive applications this strategy is feasible, but in read-intensive applications the DRAM storage space is not used effectively, and PCM write counts increase greatly because of page misses.
(2) The user must fix many parameter values before the page-management policy runs, such as how many writes a PCM page must receive before being migrated to DRAM; for different applications these values are often hard to determine.
(3) Write-hot pages cannot be predicted effectively: after the policy detects that a PCM page is frequently written and migrates it to DRAM, the page may never be written again, making the migration a mistake; or, after the policy detects that a DRAM page is rarely written and migrates it to PCM, the page may afterwards be written often, again a mistaken migration.
Summary of the invention
To remedy the defects and deficiencies of the prior art, the present invention proposes a page management method based on an embedded-system hybrid main memory. Aimed at the PCM/DRAM hybrid main memory of an embedded system, the method can greatly extend the lifetime of the hybrid main memory, reduce application execution delay, and reduce the energy consumption of the whole memory system.
To achieve the above goal, the technical scheme of the present invention is as follows:
A page management method based on an embedded-system hybrid main memory, the hybrid main memory being a PCM/DRAM hybrid main memory. The CPU of the embedded system issues a page access request; if the requested data or instruction is not in the cache, main memory is accessed and page management is performed, comprising the following steps:
Step (1): build the CLOCK list of the pages resident in the hybrid main memory, and the LRU list whose stored entries are the metadata of pages evicted from the CLOCK list;
Step (2): determine whether the requested page is stored in the hybrid main memory of the embedded system. If it is, access the CLOCK list and, according to the type of the page within it, either change the page's flag bits or perform a page migration operation; if it is not, go to step (3);
Step (3): obtain a free page as storage space for the accessed page, access the LRU list, and invoke the page insertion algorithm to insert the accessed page into the hybrid main memory.
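The three steps above can be sketched as a top-level dispatch routine. This is a minimal illustration, not the patent's implementation: the names `resident`, `on_hit` and `insert_page` are placeholders for the CLOCK-list hit handling of step (2) and the LRU-consulting insertion of step (3) described later.

```python
def handle_request(pid, is_write, resident, on_hit, insert_page):
    """Dispatch one CPU page request against the hybrid main memory.

    `resident` maps page id -> page object for pages currently stored in
    the PCM/DRAM hybrid main memory; `on_hit` and `insert_page` stand in
    for the hit-handling and page-insertion algorithms."""
    if pid in resident:                 # step (2): page is in hybrid memory
        on_hit(resident[pid], is_write)
        return "hit"
    insert_page(pid, is_write)          # step (3): miss -> insert the page
    return "miss"
```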
The flag bits in step (2) comprise: a reference (access) bit, a dirty (write) bit, and a suggest bit.
The types of page hit in step (2) comprise recently accessed pages and frequently accessed pages, stored in CLOCK lists T1 and T2 respectively.
A frequently accessed page is determined as follows: if a page is accessed again after its reference bit becomes 1 but before it is cleared to 0, the page is a frequently accessed page.
A recently accessed page is determined as follows: if the reference bit of a hit page is 1, the page is a recently accessed page.
When the hit page is a recently accessed page located in CLOCK list T1, check whether its reference bit is 1. If it is, migrate the page to the tail of list T2; the page access ends.
If its reference bit is 0, check whether the request on the page is a write operation: if so, set both the dirty bit and the reference bit of the page to 1; otherwise set only the reference bit to 1. The page access then ends.
When the hit page is a frequently accessed page located in CLOCK list T2, check whether the request on the page is a write operation. If it is not, set only the reference bit to 1; the page access ends.
If the request is a write operation, check whether the page's dirty bit and reference bit are already both 1. If not, set both to 1 and the page access ends. If both are already 1, check whether the page is stored in PCM: if so, migrate the page into DRAM; otherwise set the page's suggest bit to 1. The page access then ends.
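The hit-path rules above can be written out directly. This is a sketch under stated assumptions: `Page`, `hit_t1`, `hit_t2` and the `migrate_to_dram` callback are illustrative names, and T1/T2 are modeled as plain Python lists rather than circular lists.

```python
class Page:
    """Page with the three flag bits of step (2); `medium` records whether
    the page currently lives in PCM or DRAM."""
    def __init__(self, pid, medium="PCM"):
        self.pid, self.medium = pid, medium
        self.ref = self.dirty = self.sug = 0   # reference / dirty / suggest

def hit_t1(page, is_write, t1, t2):
    """Hit on a recently accessed page in T1."""
    if page.ref == 1:                  # re-accessed before the bit decayed
        t1.remove(page)
        t2.append(page)                # promote to the tail of T2
        page.ref = page.dirty = 0      # bits are cleared on promotion
    elif is_write:
        page.ref = page.dirty = 1
    else:
        page.ref = 1

def hit_t2(page, is_write, migrate_to_dram):
    """Hit on a frequently accessed page in T2."""
    if not is_write:
        page.ref = 1
    elif not (page.ref == 1 and page.dirty == 1):
        page.ref = page.dirty = 1
    elif page.medium == "PCM":         # write-hot page stored in PCM
        migrate_to_dram(page)
    else:                              # already in DRAM: mark write-hot
        page.sug = 1
```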
The LRU list in step (3) comprises lists B1 and B2: B1 holds the pages evicted from list T1, and B2 holds the pages evicted from list T2.
The process of step (3) comprises:
Step (3.1): if the accessed page hits list B1, increase the target capacity assigned to list T1 by 1, then invoke the page insertion algorithm to insert the page into the hybrid main memory and link it into T1; the access ends.
Step (3.2): if the accessed page hits list B2, decrease the target capacity assigned to list T1 by 1, then invoke the page insertion algorithm to insert the page into the hybrid main memory and link it into T2; the access ends.
Step (3.3): if the accessed page hits neither list B1 nor list B2, the target capacity of list T1 is unchanged; invoke the page insertion algorithm to insert the page into the hybrid main memory and link it into T1; the access ends.
The sizes of lists T1, T2, B1 and B2 in step (3) satisfy the following conditions:
|T1| + |T2| ≤ S (1)
|T1| + |B1| ≤ S (2)
|T2| + |B2| ≤ 2S (3)
0 ≤ |T1| + |T2| + |B1| + |B2| ≤ 2S (4)
|T1| + |T2| < S ⇒ B1 = B2 = ∅ (5)
|T1| + |T2| + |B1| + |B2| ≥ S ⇒ |T1| + |T2| = S (6)
where S is the total size of the hybrid main memory, measured in pages, each page being 4 KB; |T1|, |T2|, |B1| and |B2| denote the capacities of lists T1, T2, B1 and B2 respectively.
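Conditions (1)-(6) can be checked mechanically. A minimal sketch, assuming the four lists are given as Python sequences; `check_invariants` is an illustrative name, not part of the patent:

```python
def check_invariants(t1, t2, b1, b2, S):
    """Return True when conditions (1)-(6) hold for the four lists, given
    the total hybrid main-memory size S in pages."""
    n1, n2, m1, m2 = len(t1), len(t2), len(b1), len(b2)
    total = n1 + n2 + m1 + m2
    return (n1 + n2 <= S                                   # (1)
            and n1 + m1 <= S                               # (2)
            and n2 + m2 <= 2 * S                           # (3)
            and 0 <= total <= 2 * S                        # (4)
            and (n1 + n2 >= S or (m1 == 0 and m2 == 0))    # (5)
            and (total < S or n1 + n2 == S))               # (6)
```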
The beneficial effects of the invention are:
1) the energy consumption of the whole main-memory system is reduced;
2) the service life of the PCM memory is extended;
3) the execution time of applications is reduced;
4) under PCM/DRAM hybrid storage, the method effectively reduces the number of writes to PCM; it efficiently predicts frequently written pages in PCM and migrates them; and, once memory is full, it effectively predicts the pages least likely to be used in future according to the application's memory-access pattern and evicts them;
5) at the same time, to guarantee the adaptivity of the policy, the user need not fix any application-dependent parameters before run time; the method is applied at the operating-system level.
Brief description of the drawings
Fig. 1 is the hybrid main-memory architecture adopted by the page management method of the present invention;
Fig. 2 is the data structure of the page management method proposed by the present invention;
Fig. 3 is the overall flow chart of the page-management policy;
Fig. 4 is the flow chart of the page insertion algorithm;
Fig. 5 is the flow chart of the page migration algorithm;
Fig. 6 is the flow chart of the page replacement algorithm;
Fig. 7 shows the average memory-access latency of the proposed page management method versus the CLOCK, LRU-WPAM and CLOCK-DWF methods;
Fig. 8 shows the average PCM write count of the proposed page management method versus the CLOCK, LRU-WPAM and CLOCK-DWF methods;
Fig. 9 shows the average page migration count of the proposed page management method versus the CLOCK, LRU-WPAM and CLOCK-DWF methods;
Fig. 10 shows the average memory energy consumption of the proposed page management method versus the CLOCK, LRU-WPAM and CLOCK-DWF methods.
Detailed description of the embodiments
The page management method based on an embedded-system hybrid main memory proposed by the present invention is described in further detail below with reference to the accompanying drawings.
Fig. 1 shows the hybrid main-memory architecture to which the proposed page management method applies. In this architecture, DRAM and PCM together form the main memory, but because their physical properties differ, each has its own memory controller (DRAM Controller and PCM Controller). From the viewpoint of the operating system above them, DRAM and PCM lie in the same address space and are managed by the operating system in a unified, page-based manner. The page-management policy of the present invention is designed at this operating-system level. When the CPU accesses memory, it first looks up the L1 cache; on an L1 miss it looks up the L2 cache, which here serves as the last-level cache (LLC). When the LLC also misses, a main-memory access occurs; likewise, a main-memory access occurs when a page is evicted from the LLC.
Fig. 2 shows the data structure used by the proposed page management method, which consists mainly of four linked lists. T1 holds the recently accessed pages in memory; T2 holds the frequently accessed pages in memory; B1 holds the pages recently evicted from T1; B2 holds the pages recently evicted from T2. Let the total size of the hybrid main memory be S, measured in pages. Then the adaptive page-management policy can manage 2S pages in total: T1 and T2 together manage the S pages actually in main memory, and B1 and B2 together manage S pages of access history. In addition, for the policy to work well, T1, T2, B1 and B2 satisfy the following inequalities:
|T1| + |T2| ≤ S (1)
|T1| + |B1| ≤ S (2)
|T2| + |B2| ≤ 2S (3)
0 ≤ |T1| + |T2| + |B1| + |B2| ≤ 2S (4)
|T1| + |T2| < S ⇒ B1 = B2 = ∅ (5)
|T1| + |T2| + |B1| + |B2| ≥ S ⇒ |T1| + |T2| = S (6)
Formula (1) states that the total length of lists T1 and T2 is no greater than the total size of the hybrid main memory (PCM+DRAM).
Formula (2) states that the total length of lists T1 and B1 is no greater than the total size of the hybrid main memory (PCM+DRAM). This guarantees that every page in T1 and B1 can be promoted into T2.
Formula (3) states that the total length of lists T2 and B2 is no greater than twice the total size of the hybrid main memory (PCM+DRAM). Together with the other constraints, this ensures that the proposed policy can manage twice the total size of the hybrid main memory.
Formula (4) states that the total length of lists T1, T2, B1 and B2 does not exceed twice the total size of the hybrid main memory (PCM+DRAM); that is, the proposed policy manages at most 2S pages in total.
Formula (5) states that if the total size of T1 and T2 has not reached its maximum, i.e. no page of the hybrid main memory has yet been replaced, then lists B1 and B2 are both empty.
Formula (6) states that if the total length of lists T1, T2, B1 and B2 is at least the total size S of the hybrid main memory, then the total size of T1 and T2 must equal S.
The lengths of lists T1, T2, B1 and B2 are measured in list elements; the total size S of the hybrid main memory is measured in pages.
All pages in the main memory managed by the operating system are divided between two circular linked lists, the CLOCK lists. The elements of one list are recently accessed pages, stored in list T1: pages accessed few times, but accessed within a short time before the current point. The elements of the other are frequently accessed pages, stored in list T2: pages that may not have been accessed by the CPU for some time, but were often accessed by the CPU before. To use history to predict the access pattern of future pages, two singly linked lists are also maintained, the LRU lists B1 and B2. Each element of B1 is the metadata of a page evicted from T1, and each element of B2 the metadata of a page evicted from T2. Here metadata means the data describing a page: its page identifier, page pointer, and associated flag bits; if the page is in memory, the page pointer points to its concrete address in memory.
Each page in T1 carries a reference bit and a dirty bit. When a new page enters memory, it is linked into T1 with both bits cleared. When a page in T1 is accessed, its reference bit is set to 1; when it is written, its dirty bit is set to 1. Thus the reference bit records recent access information and the dirty bit recent write information. Each page in T2 likewise carries a reference bit and a dirty bit with the same meanings as in T1; when a page moves from T1 to T2, both bits are cleared. Pages in T2 additionally carry a suggest bit. When a DRAM page in T2 has both its reference bit and its dirty bit set to 1, it may be a write-hot page; if this page is written again, its suggest bit is set to 1. A suggest bit of 1 indicates that the next time the page is loaded into memory, it should be placed in DRAM. When a page is replaced out of T2 into B2, its suggest bit is preserved. T1, T2, B1, B2 and their flag bits are depicted in Fig. 2.
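The per-page metadata and the eviction of metadata into a history list can be sketched as follows. This is illustrative only: `PageMeta` and `evict` are invented names, and clearing the reference and dirty bits on eviction is an assumption here (the text only specifies that the suggest bit is preserved).

```python
from dataclasses import dataclass

@dataclass
class PageMeta:
    """Per-page metadata: identifier, pointer into memory, and flag bits."""
    pid: int
    frame: object = None   # address of the page in memory; None once evicted
    ref: int = 0           # reference (access) bit
    dirty: int = 0         # dirty (write) bit
    sug: int = 0           # suggest bit (meaningful for T2/B2 pages)

def evict(meta, history):
    """Move a page's metadata from a resident list into its history list
    (T1 -> B1 or T2 -> B2): the frame is released and the access bits are
    cleared, but the suggest bit is kept, as required for B2."""
    meta.frame = None
    meta.ref = meta.dirty = 0
    history.append(meta)
```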
In the proposed method, the promotion of a page from T1 to T2 works as follows: if the reference bit of a page in T1 is 1 and another memory read or write to that page occurs before the bit is cleared to 0, the page is deemed frequently accessed and is moved to the tail of T2.
Fig. 3 shows the overall flow chart of the adaptive page-replacement policy, which starts with a memory-access request. If the requested page is in the hybrid main memory, the page hits. On a hit, if the page's reference bit is already 1, the page has recently been accessed often and is moved into T2. Otherwise its reference bit is 0: if the page request is a read, only the reference bit is set to 1; if it is a write, the reference bit and the dirty bit are both set to 1. This concludes a page access that hits T1.
On a hit in T2, if the access is a read, the page's reference bit is simply set to 1. If it is a write, the method checks whether the page's dirty bit and reference bit are currently both 1; if not, both are set to 1. If they are, the page is a write-hot page, and the method checks whether it is in DRAM: if so, its suggest bit is set to 1; if it is in PCM, the page migration process is invoked to move it into DRAM. This concludes a page access that hits T2.
When the page is not in memory, the method checks whether it hits a history list, i.e. B1 or B2. A hit in B1 means the page should not have been evicted from T1, i.e. the target size assigned to T1 is too small and should be increased by 1. Likewise, a hit in B2 means the page should not have been evicted from T2, i.e. the target size assigned to T2 is too small and should be increased by 1 (equivalently, the target size of T1 is decreased by 1). If the page hits neither list, the target sizes need no change. Finally the page insertion algorithm is invoked to insert the new page into the hybrid main memory, which concludes a page access that misses. Here a new page means a page the CPU is accessing whose data are not in the hybrid main memory; the data may reside in external storage or be produced directly by the CPU.
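The target-size adaptation on a miss can be sketched as below. The clamping of the target to [0, S] is an added safeguard not stated in the text, and `adapt_target` is an illustrative name:

```python
def adapt_target(pid, b1, b2, target_t1, S):
    """Adjust T1's target size when a missed page hits a history list:
    a hit in B1 grows T1's target by 1; a hit in B2 shrinks it by 1
    (i.e. grows T2's share); no change when both histories miss."""
    if pid in b1:
        return min(S, target_t1 + 1)
    if pid in b2:
        return max(0, target_t1 - 1)
    return target_t1
```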
Fig. 4 shows the flow chart of the adaptive page insertion process. When the page of a memory-access request is not in memory, the new page x must be inserted into memory. Three different cases of page insertion arise.
(1) In the first case, page x is in B1, which indicates that it is not a frequently accessed page. A free page is obtained in the hybrid main memory (if there is no free page, the page replacement algorithm is invoked to obtain one), this free page is used as the storage space of the new page x, and x is linked to the tail of list T1.
(2) In the second case, page x is in neither B1 nor B2. By the principle of locality, if the access to x is a write, the page is likely to be written often in future. A free DRAM page is therefore obtained from memory to store x (if there is no free DRAM page, the page replacement process is invoked to obtain one), used as the storage space of the new page x, and x is linked to the tail of list T1. Conversely, if the memory access to x is a read, a free DRAM or PCM page is obtained from the hybrid main memory (if it is full, the page replacement algorithm is invoked to obtain one) as the storage space of x, and x is linked to the tail of list T1.
(3) In the third case, page x is in B2. The suggest bit of x in B2 is checked. If it is 1, a free DRAM page must be obtained from the hybrid main memory (if there is no free DRAM page, the page replacement algorithm is invoked to obtain one), used as the storage space of the new page x, and x is linked to the tail of list T2. If the suggest bit is 0, a free DRAM or PCM page is obtained from the hybrid main memory (if it is full, the page replacement algorithm is invoked to obtain one) as the storage space of x, and x is linked to the tail of list T2.
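The three insertion cases can be sketched as a single decision routine. This is a sketch under stated assumptions: `b1`/`b2` are modeled as dicts from page id to history metadata, and `get_free(kind)` is an invented callback that returns a free frame of the requested kind, itself invoking the replacement algorithm when memory is full.

```python
def insert_page(pid, is_write, b1, b2, get_free):
    """Choose frame type and target list for a faulting page x (Fig. 4).

    Returns the obtained frame and the name of the list ("T1" or "T2")
    whose tail x is linked to."""
    if pid in b2:                              # case (3): formerly frequent
        kind = "dram" if b2[pid].sug == 1 else "any"
        return get_free(kind), "T2"
    if pid in b1:                              # case (1): not a frequent page
        return get_free("any"), "T1"
    # case (2): unseen page; a write hints at future writes -> prefer DRAM
    return get_free("dram" if is_write else "any"), "T1"
```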
For a page inserted into memory in any of the three cases above, all of its flag bits are set to 0.
Fig. 5 shows the flow chart of the page migration process. A series of experiments verified that pages with a high write frequency are more likely to be written again in the near future. Therefore, the page migration process of the proposed adaptive page management method takes place only within T2. This effectively raises the efficiency of page migration and reduces mistaken migrations. A mistaken migration means a page originally in PCM that is moved into DRAM after many write operations but is then rarely written again in DRAM; or, likewise, a page originally in DRAM that is rarely written, is moved into PCM, and then suffers repeated write operations in PCM. In theory, mistaken migrations cannot be avoided entirely. When a PCM page p in T2 needs to move into a DRAM page, the H_dram pointer is used to find a DRAM page q whose dirty bit is 0, which serves as p's partner in a mutual page migration. While the H_dram pointer is searching, any DRAM or PCM page encountered whose dirty bit is 1 has that bit cleared to 0. During the migration, the relative positions of the two pages p and q in list T2 remain unchanged; only their storage media are exchanged.
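The H_dram sweep described above can be sketched as follows. This is illustrative: `Pg`, `migrate` and the two-sweep bound are assumptions of the sketch (clearing dirty bits on the first pass guarantees the second pass finds a clean DRAM page when any DRAM page exists in T2).

```python
class Pg:
    """Minimal page for the sketch: storage medium plus a dirty bit."""
    def __init__(self, medium, dirty=0):
        self.medium, self.dirty = medium, dirty

def migrate(p, t2, hand):
    """Exchange the storage media of write-hot PCM page p and a clean DRAM
    page q found by the H_dram hand sweeping T2; list positions are
    untouched. Returns the new hand position so the caller can keep it."""
    n = len(t2)
    for _ in range(2 * n):
        q = t2[hand % n]
        hand += 1
        if q.medium == "DRAM" and q.dirty == 0:
            p.medium, q.medium = "DRAM", "PCM"   # swap media only
            return hand
        q.dirty = 0              # second chance: clear and keep sweeping
    return hand                  # no DRAM page in T2 at all
```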
Fig. 6 shows the flow chart of the page replacement process. The page replacement algorithm of the proposed page management method must, without requiring any user-defined parameters, flexibly select a victim page based on both recency (access time) and frequency (access frequency). Lists T1 and T2 hold the recently accessed and most frequently accessed pages respectively. When a page replacement is needed, the actual size of list T1 is compared with its assigned target size, TargetSize. If it exceeds TargetSize, a page must be replaced out of T1. If the replacement must yield a DRAM page, the H_dram pointer in T1 selects a cold page whose reference bit and dirty bit are both 0; while searching for such a page, any page whose reference bit and dirty bit are not both 0 has them cleared to 0, and the pointer advances to the next page. If a DRAM page is not specifically required, the H_replace pointer in T1 selects a page whose reference bit is 0; while searching, any page whose reference bit is not 0 has it cleared to 0, and the pointer advances to the next page.
When the actual size of the T1 linked list is not greater than its target size TargetSize, a page must instead be replaced out of T2 into external storage. This process is similar to replacing a page from T1 and is not repeated here.
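The victim-selection loop described above can be sketched as follows. This is a hypothetical, non-limiting Python model; the field names, the hand representation, and the two-sweep bound are assumptions for illustration.

```python
# Illustrative sketch of the CLOCK-style victim selection in T1.
# Field names and the hand representation are assumptions only.

class T1Page:
    def __init__(self, medium, ref=0, dirty=0):
        self.medium = medium  # "DRAM" or "PCM"
        self.ref = ref        # reference (access) bit
        self.dirty = dirty    # write (dirty) bit

def select_victim(t1, hand, want_dram):
    """Scan T1 for a replacement victim.

    want_dram=True : the H_dram hand seeks a DRAM page with both the
                     reference bit and the dirty bit equal to 0,
                     clearing both bits of every page it passes over.
    want_dram=False: the H_replace hand only requires reference == 0,
                     clearing the reference bit as it passes.
    Returns (victim, new_hand), or (None, hand) if none qualifies.
    """
    for _ in range(2 * len(t1)):  # first sweep may only clear bits
        page = t1[hand]
        hand = (hand + 1) % len(t1)
        if want_dram:
            if page.medium == "DRAM" and page.ref == 0 and page.dirty == 0:
                return page, hand
            page.ref = page.dirty = 0
        else:
            if page.ref == 0:
                return page, hand
            page.ref = 0
    return None, hand
```

Replacing from T2 would follow the same scan, as the description notes, so a single helper parameterized by the list suffices in this sketch.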
What the present invention solves is the operating-system-level page management problem under a hybrid main memory. The proposed page management method based on an embedded-system hybrid main memory effectively reduces the number of writes to PCM and the number of page migrations between PCM and DRAM, and improves the page hit rate for a fixed memory size.
By modifying the Linux kernel, the page management policy proposed by the present invention was implemented and verified in a GEM5+NVMain+FlashSim combined simulator. GEM5 can run a Linux operating system, NVMain can accurately emulate the physical characteristics of DRAM and PCM, and FlashSim can emulate state-of-the-art Flash memory technology.
Figs. 7-10 show the experimental results of the page management method AIMR of the present invention, based on an embedded-system hybrid main memory, compared with other page management methods: CLOCK, LRU-WPAM and CLOCK-DWF. The results show that, compared with existing hybrid-main-memory page management policies, the adaptive page management policy proposed by the present invention can greatly extend the service life of the hybrid main memory, reduce application execution delay, and reduce the energy consumption of the whole memory system.
Although the specific embodiments of the present invention have been described above with reference to the accompanying drawings, they do not limit the scope of the invention. Those skilled in the art should understand that various modifications or variations made on the basis of the technical solution of the present invention, without requiring creative work, still fall within the protection scope of the present invention.

Claims (10)

1. A page management method based on an embedded-system hybrid main memory, the embedded-system hybrid main memory being an embedded-system PCM/DRAM hybrid main memory, wherein the CPU of the embedded system issues a page access request; if the data or instruction requested by the CPU is not in the cache, a main memory access operation is performed and the page management method starts to execute, characterized by comprising the following steps:
Step (1): constructing a CLOCK linked list of the pages present in the hybrid main memory, and an LRU linked list whose stored data are the metadata of pages moved out of memory from the CLOCK linked list;
Step (2): judging whether the requested page is stored in the hybrid main memory of the embedded system; if it is stored in the hybrid main memory of the embedded system, accessing the CLOCK linked list and, according to the type of the page in the CLOCK linked list, performing a page flag-bit change operation or a page migration operation; if it is not stored in the main memory of the embedded system, proceeding to step (3);
Step (3): obtaining the storage space of a free page for the accessed page, accessing the LRU linked list, and then inserting the accessed page into the hybrid main memory by invoking the page insertion algorithm.
2. The page management method based on an embedded-system hybrid main memory as claimed in claim 1, characterized in that the types of hit pages in step (2) comprise recently accessed pages and frequently accessed pages, which are stored in CLOCK linked list T1 and CLOCK linked list T2, respectively.
3. The page management method based on an embedded-system hybrid main memory as claimed in claim 2, characterized in that the flag bits in step (2) comprise: an access bit, a write bit and a suggestion bit.
4. The page management method based on an embedded-system hybrid main memory as claimed in claim 3, characterized in that the judging process for the frequently accessed page is:
if a page is accessed again after its access bit becomes 1 and before it is cleared to 0, the page is a frequently accessed page.
5. The page management method based on an embedded-system hybrid main memory as claimed in claim 3, characterized in that the judging process for the recently accessed page is:
if the access bit of a page is 1, the page is a recently accessed page.
6. The page management method based on an embedded-system hybrid main memory as claimed in claim 3, characterized in that, when the hit page is a recently accessed page located in CLOCK linked list T1, it is judged whether the access bit of the accessed page is 1; if the access bit of the accessed page is 1, the page is migrated to the tail of linked list T2 and the page access ends;
if the access bit of the accessed page is 0, it is judged whether the request for the accessed page is a write operation; if so, both the write bit and the access bit of the accessed page are set to 1 and the page access ends; otherwise, only the access bit of the accessed page is set to 1 and the page access ends.
7. The page management method based on an embedded-system hybrid main memory as claimed in claim 3, characterized in that, when the hit page is a frequently accessed page located in CLOCK linked list T2, it is judged whether the request for the accessed page is a write operation; if the request for the accessed page is not a write operation, only the access bit of the accessed page is set to 1 and the page access ends;
if the request for the accessed page is a write operation, it is judged whether the write bit and the access bit of the accessed page are both already set to 1; if not, both the write bit and the access bit of the accessed page are set to 1 and the page access ends; if the write bit and the access bit of the accessed page are both already 1, it is judged whether the accessed page is stored in PCM; if the accessed page is stored in PCM, the accessed page is migrated into DRAM; otherwise, the suggestion bit of the accessed page is set to 1 and the page access ends.
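As a purely illustrative, non-limiting sketch, the T2 hit handling of claims 6 and 7 can be modeled as follows. The field names (ref, write, suggest, medium) and the migrate() callback are hypothetical, introduced only to make the branching concrete.

```python
# Hypothetical sketch of the T2 hit handling of claims 6-7. Field names
# and the migrate() callback are illustrative assumptions only.

class T2Page:
    def __init__(self, medium):
        self.medium = medium  # "PCM" or "DRAM"
        self.ref = 0          # access bit
        self.write = 0        # write bit
        self.suggest = 0      # suggestion bit

def on_t2_hit(page, is_write, migrate):
    """Handle an access that hits a frequently accessed page in T2."""
    if not is_write:
        page.ref = 1                  # read hit: set only the access bit
    elif not (page.write and page.ref):
        page.write = page.ref = 1     # first write since bits were cleared
    elif page.medium == "PCM":
        migrate(page)                 # repeatedly written PCM page: move to DRAM
    else:
        page.suggest = 1              # already in DRAM: record the hint only
```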
8. The page management method based on an embedded-system hybrid main memory as claimed in claim 3, characterized in that the LRU linked list in step (3) comprises linked list B1 and linked list B2; linked list B1 is used for storing pages moved out of memory from linked list T1; linked list B2 is used for storing pages moved out of memory from linked list T2.
9. The page management method based on an embedded-system hybrid main memory as claimed in claim 8, characterized in that the process of step (3) comprises:
Step (3.1): when the accessed page hits linked list B1, increasing the target capacity allocated to linked list T1 by 1, then inserting the accessed page into the hybrid main memory by invoking the page insertion algorithm and linking it into T1; the access ends;
Step (3.2): when the accessed page hits linked list B2, decreasing the target capacity allocated to linked list T1 by 1, then inserting the accessed page into the hybrid main memory by invoking the page insertion algorithm and linking it into T2; the access ends;
Step (3.3): when the accessed page hits neither linked list B1 nor linked list B2, keeping the target capacity of linked list T1 unchanged, then inserting the accessed page into the hybrid main memory by invoking the page insertion algorithm and linking it into T1; the access ends.
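A minimal, non-limiting sketch of the target-size adaptation in steps (3.1)-(3.3); the function name and the clamping bounds are assumptions, since the claim itself only states the +1 / -1 / unchanged adjustment.

```python
# Hypothetical sketch of the T1 target-capacity adaptation of claim 9.
# The clamping to [0, total_size] is an assumption for illustration.

def adapt_t1_target(hit_list, target_size, total_size):
    """Adjust the T1 target capacity after a ghost-list (B1/B2) lookup."""
    if hit_list == "B1":                        # step (3.1): grow T1's share
        return min(target_size + 1, total_size)
    if hit_list == "B2":                        # step (3.2): shrink T1's share
        return max(target_size - 1, 0)
    return target_size                          # step (3.3): miss in both lists
```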
10. The page management method based on an embedded-system hybrid main memory as claimed in claim 9, characterized in that the sizes of linked list T1, linked list T2, linked list B1 and linked list B2 in step (3) satisfy the following conditions:
|T1| + |T2| ≤ S    (1)
|T1| + |B1| ≤ S    (2)
|T2| + |B2| ≤ 2S    (3)
0 ≤ |T1| + |T2| + |B1| + |B2| ≤ 2S    (4)
|T1| + |T2| + |B1| + |B2| ≥ S ⇔ |T1| + |T2| = S    (6)
wherein S is the total size of the hybrid main memory in units of pages, each page being 4KB in size; |T1|, |T2|, |B1| and |B2| represent the capacity sizes of linked list T1, linked list T2, linked list B1 and linked list B2, respectively.
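The capacity conditions of claim 10 can be checked directly; the following is a hypothetical helper, with S and the list sizes counted in 4KB pages.

```python
# Hypothetical checker for the capacity invariants of claim 10.
# Sizes are counted in 4KB pages; s is the total hybrid-memory size.

def sizes_valid(t1, t2, b1, b2, s):
    """Return True iff (|T1|,|T2|,|B1|,|B2|) satisfy conditions (1)-(4), (6)."""
    total = t1 + t2 + b1 + b2
    return (t1 + t2 <= s                          # (1)
            and t1 + b1 <= s                      # (2)
            and t2 + b2 <= 2 * s                  # (3)
            and 0 <= total <= 2 * s               # (4)
            and (total >= s) == (t1 + t2 == s))   # (6)
```

Condition (6) expresses that the resident lists T1 and T2 exactly fill the memory whenever the four lists together track at least S pages.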
CN201510315621.3A 2015-06-10 2015-06-10 The page management method hosted is mixed based on embedded system Active CN104899154B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510315621.3A CN104899154B (en) 2015-06-10 2015-06-10 The page management method hosted is mixed based on embedded system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510315621.3A CN104899154B (en) 2015-06-10 2015-06-10 The page management method hosted is mixed based on embedded system

Publications (2)

Publication Number Publication Date
CN104899154A true CN104899154A (en) 2015-09-09
CN104899154B CN104899154B (en) 2017-08-29

Family

ID=54031829

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510315621.3A Active CN104899154B (en) 2015-06-10 2015-06-10 The page management method hosted is mixed based on embedded system

Country Status (1)

Country Link
CN (1) CN104899154B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102495806A (en) * 2011-11-25 2012-06-13 清华大学 Periodic wear balancing method and memory management method of phase change memory
US20130042059A1 (en) * 2011-08-09 2013-02-14 Samsung Electronics Co., Ltd. Page merging for buffer efficiency in hybrid memory systems
CN103019955A (en) * 2011-09-28 2013-04-03 中国科学院上海微系统与信息技术研究所 Memory management method based on application of PCRAM (phase change random access memory) main memory
CN103049397A (en) * 2012-12-20 2013-04-17 中国科学院上海微系统与信息技术研究所 Method and system for internal cache management of solid state disk based on novel memory
CN103914403A (en) * 2014-04-28 2014-07-09 中国科学院微电子研究所 Method and system for recording access situation of hybrid memory
CN104317739A (en) * 2014-10-28 2015-01-28 清华大学 Hybrid memory paging method and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FAN Yulei, MENG Xiaofeng: "A Database Transaction Recovery Model Based on Phase-Change Memory and Flash Memory", Chinese Journal of Computers *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105677756A (en) * 2015-12-28 2016-06-15 曙光信息产业股份有限公司 Method and apparatus for effectively using cache in file system
CN106168928B (en) * 2016-07-06 2020-01-07 上海新储集成电路有限公司 Method for solving uncertainty of read delay of hybrid memory
CN106168928A (en) * 2016-07-06 2016-11-30 上海新储集成电路有限公司 A kind of solution mixes the probabilistic method of internal memory read latency
CN106909323A (en) * 2017-03-02 2017-06-30 山东大学 The caching of page method of framework is hosted suitable for DRAM/PRAM mixing and mixing hosts architecture system
CN106909323B (en) * 2017-03-02 2020-03-10 山东大学 Page caching method suitable for DRAM/PRAM mixed main memory architecture and mixed main memory architecture system
CN107193646A (en) * 2017-05-24 2017-09-22 中国人民解放军理工大学 A kind of high-efficiency dynamic paging method that framework is hosted based on mixing
CN107193646B (en) * 2017-05-24 2020-10-09 中国人民解放军理工大学 High-efficiency dynamic page scheduling method based on mixed main memory architecture
CN107329807B (en) * 2017-06-29 2020-06-30 北京京东尚科信息技术有限公司 Data delay processing method and device, and computer readable storage medium
CN107329807A (en) * 2017-06-29 2017-11-07 北京京东尚科信息技术有限公司 Data delay treating method and apparatus, computer-readable recording medium
CN109558093A (en) * 2018-12-19 2019-04-02 哈尔滨工业大学 A kind of mixing memory pages moving method for image processing type load
CN109558093B (en) * 2018-12-19 2022-04-15 哈尔滨工业大学 Hybrid memory page migration method for image processing type load
CN110532200A (en) * 2019-08-26 2019-12-03 北京大学深圳研究生院 A kind of memory system based on mixing memory architecture
CN110532200B (en) * 2019-08-26 2023-08-01 北京大学深圳研究生院 Memory system based on hybrid memory architecture
US11762578B2 (en) * 2020-09-29 2023-09-19 International Business Machines Corporation Buffer pool contention optimization

Also Published As

Publication number Publication date
CN104899154B (en) 2017-08-29

Similar Documents

Publication Publication Date Title
CN104899154A (en) Page management method based on embedded system mixed main memory
CN103019958B (en) Usage data attribute manages the method for the data in solid-state memory
CN103425600B (en) Address mapping method in a kind of solid-state disk flash translation layer (FTL)
CN102880556B (en) Wear leveling method and system of Nand Flash
US8195971B2 (en) Solid state disk and method of managing power supply thereof and terminal including the same
CN105808156A (en) Method for writing data into solid state drive and solid state drive
CN107391398B (en) Management method and system for flash memory cache region
CN104268095A (en) Memory and data reading/ writing operation method based on memory
CN105242871A (en) Data writing method and apparatus
CN104461397A (en) Solid-state drive and read-write method thereof
CN103092534A (en) Scheduling method and device for internal memory structure
CN104360825B (en) One kind mixing memory system and its management method
CN103164343B (en) Based on the paging of phase transition storage, ECC verification and multidigit forecasting method and structure thereof
CN109164975A (en) A kind of method and solid state hard disk writing data into solid state hard disk
CN105103235A (en) Non-volatile multi-level-cell memory with decoupled bits for higher performance and energy efficiency
CN104699424A (en) Page hot degree based heterogeneous memory management method
CN105389135A (en) Solid-state disk internal cache management method
CN104331252A (en) Isomeric NAND solid state disk structure and data reading management method of isomeric NAND solid state disk structure
CN105607862A (en) Solid state disk capable of combining DRAM (Dynamic Random Access Memory) with MRAM (Magnetic Random Access Memory) and being provided with backup power
CN102520885B (en) Data management system for hybrid hard disk
CN102999441A (en) Fine granularity memory access method
CN106909323B (en) Page caching method suitable for DRAM/PRAM mixed main memory architecture and mixed main memory architecture system
CN104298615B (en) Method for equalizing swap partition loss of memory
CN103885724B (en) Memory system architecture based on phase transition storage and wear-leveling algorithm thereof
CN102981972A (en) Wear-leveling method for phase change memory

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant