CN110795363A - Hot page prediction method and page scheduling method for storage medium - Google Patents


Publication number
CN110795363A
Authority
CN
China
Prior art keywords: page, accessed, weight value, characteristic, storage medium
Prior art date
Legal status: Granted
Application number
CN201910791493.8A
Other languages: Chinese (zh)
Other versions: CN110795363B (en)
Inventor
郭怡欣
黄兴
汪小林
罗英伟
Current Assignee
Peking University Shenzhen Graduate School
Peng Cheng Laboratory
Original Assignee
Peking University Shenzhen Graduate School
Peng Cheng Laboratory
Application filed by Peking University Shenzhen Graduate School and Peng Cheng Laboratory
Priority to CN201910791493.8A
Publication of CN110795363A
Application granted
Publication of CN110795363B
Legal status: Active


Classifications

    • G06F 12/0223 — User address space allocation, e.g. contiguous or non-contiguous base addressing
    • G06F 12/023 — Free address space management
    • G06F 12/123 — Replacement control using replacement algorithms with age lists, e.g. queue, most recently used [MRU] list or least recently used [LRU] list
    • G06F 2212/1016 — Performance improvement
    • G06F 2212/1028 — Power efficiency
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

A hot page prediction method and a page scheduling method for a storage medium are provided. Access information of the N most recently accessed pages in the storage medium is obtained; when a page is accessed, a weight value for the currently accessed page is obtained from its own access information together with the access information of the N most recently accessed pages; and hot page prediction is performed on the accessed page according to its weight value, where the access information includes an access count and a physical address. Because hot page prediction is based on both the access count and the physical address in the access information, the prediction is both efficient and accurate.

Description

Hot page prediction method and page scheduling method for storage medium
Technical Field
The invention relates to the technical field of memory access, in particular to a hot page prediction method and a page scheduling method of a storage medium.
Background
Random Access Memory (RAM) is internal memory that exchanges data directly with the CPU and is also called main memory. It can be read and written at any time and is fast, so it is typically used as temporary data storage for the operating system and running programs. By the operating principle of its memory cells, RAM is further classified into Static RAM (SRAM) and Dynamic RAM (DRAM). As data sets grow and memory-intensive applications multiply, DRAM is becoming unable to meet application demands. Emerging memory devices offer higher storage density, lower price, and lower energy consumption, but suffer from higher latency and limited endurance, and therefore cannot directly replace DRAM. For example, non-volatile memory (NVM) retains data after power-off and features byte-addressable access, high storage density, low energy consumption, and read performance close to that of DRAM, but its read and write speeds are asymmetric and its write endurance is limited. A hybrid memory composed of such an emerging device and DRAM can exploit the advantages of both. One approach combines DRAM and NVM into a unified address space, with each datum placed either in DRAM or in NVM. Since DRAM and NVM latencies differ, keeping more hot data in DRAM can greatly reduce the average access latency of the memory. As a program runs, its hot data set is not fixed: the hotness of data may change at any time, so data must be migrated as hotness changes to ensure that the hottest data always resides in DRAM.
Accurately identifying and migrating hot data is therefore a key problem for a parallel hybrid memory architecture.
Existing hybrid-memory page scheduling techniques mainly improve on the traditional LRU and CLOCK algorithms, exploiting the locality principle and read/write requests, together with the storage characteristics of each medium, to migrate pages at different stages through different data structures. These techniques can be roughly divided into passive migration, active migration, and combined active-passive migration. A passive migration strategy writes requested data directly into DRAM on a main-memory miss and triggers migration only when DRAM is full, moving infrequently accessed cold pages, or pages with no clear read/write tendency, to NVM. Passive migration fully exploits DRAM's high read/write bandwidth and concentrates writes on DRAM as much as possible to extend NVM lifetime. However, it lacks timely migration of pages from NVM to DRAM, and because its read/write prediction mechanism is weak, the reduction in NVM writes is limited. An active migration strategy defines page hotness through access frequency and access intervals, chooses suitable data structures to exploit temporal and spatial locality, and judges each page's read/write tendency to perform the corresponding migration, so that DRAM holds write-leaning pages and NVM holds read-leaning pages. Such methods can effectively predict the read/write hotness of pages and migrate a page as soon as it shows a clear tendency, but they require substantial space to record read/write frequencies and local access hotness, and the predictions of different algorithms vary widely.
A combined active-passive page scheduling algorithm has been proposed based on CLOCK: pages in DRAM that are to be migrated to NVM are managed passively, while frequently written pages in NVM are identified actively and migrated to DRAM. This combined approach exploits the strengths of both DRAM and NVM at relatively low implementation cost, but its split management leads to inconsistent read/write hotness judgments and imbalanced migration proportions. These algorithms can effectively exploit DRAM write performance and limit the number of NVM writes, but they share the following problems. First, they cannot effectively avoid frequent, useless page migrations between the two media, which causes unnecessary system overhead. Second, they predict data read/write tendencies poorly for applications with weak locality and easily trigger inaccurate migrations. Third, they cannot migrate read-heavy data to NVM, and so cannot further exploit NVM's low read power and low static power. Here, weak temporal locality means that accesses are dispersed in the time dimension, with no period in which accesses concentrate on one block of data; weak spatial locality means that accesses do not concentrate on any particular address range.
Therefore, whichever migration mode is adopted, hot pages must be predicted effectively. Existing schemes lack an adaptive scheduling strategy for different application scenarios; in particular, they cannot accurately predict page read/write hotness under weak locality, which easily causes frequent page migration and prevents the hybrid memory system from fully realizing its I/O performance and energy savings. Designing a sound and effective hot page prediction method is therefore essential for a hybrid-memory page scheduling strategy based on dynamic page ranking.
Disclosure of Invention
The invention addresses the shortcomings of prior-art hot page prediction for hybrid memory architectures by providing a hot page prediction method for a storage medium and a page scheduling method based on a hybrid memory architecture.
According to a first aspect, an embodiment provides a method for hot page prediction of a storage medium, comprising:
acquiring access information of recent N accessed pages in the storage medium, wherein the access information comprises access times and physical addresses;
when a page of the storage medium is accessed, acquiring a weight value of the currently accessed page according to the access information of the currently accessed page and the access information of the recently N accessed pages;
and performing hot page prediction on the accessed page according to the weight value of the accessed page.
Further, when the currently accessed page is not in the recent N accessed pages, adding the currently accessed page into the recent N accessed pages, and setting the access times of the currently accessed page to be 1 so as to update the recent N accessed pages; acquiring a weight value of a currently accessed page according to access information of the currently accessed page;
when the current accessed page is in the recent N accessed pages, adding 1 to the access times of the current accessed page; acquiring a physical address of a currently accessed page; and updating the weight value of the currently accessed page according to the physical address and the access times of the currently accessed page.
When the currently accessed page is not in the recent N accessed pages and the currently accessed page is added into the recent N accessed pages, applying a least recently used algorithm to remove one page from the recent N accessed pages and adding the currently accessed page into the recent N accessed pages;
and recording the access times of the removed page as the historical access times when the page enters the recent N accessed pages next time.
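The bookkeeping described above (insert a new page with access count 1, increment the count of an already-tracked page, and record a quantized count as history on eviction) can be sketched as follows. The function and variable names, and the use of an ordered dictionary for LRU ordering, are illustrative, not from the patent:

```python
from collections import OrderedDict

def quantize(n):
    """2-bit quantization of an access count, per the rule given in the text."""
    return 0 if n == 0 else 1 if n <= 8 else 2 if n <= 31 else 3

def on_access(recent, history, page, capacity):
    """Per-access update of the N most recently accessed pages.

    `recent` is an OrderedDict mapping page -> access count, ordered from
    least to most recently used; `history` stores quantized counts of
    evicted pages for their next entry into the tracked set.
    """
    if page in recent:
        recent[page] += 1              # already tracked: add 1 to its count
        recent.move_to_end(page)       # refresh its LRU position
    else:
        if len(recent) >= capacity:
            victim, count = recent.popitem(last=False)  # evict the LRU page
            history[victim] = quantize(count)           # keep quantized history
        recent[page] = 1               # new page enters with access count 1
```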
Further, carrying out quantization processing on the access times of the removed page;
recording the result after quantization processing as the historical access times when the removed page enters the recent N accessed pages next time;
and carrying out quantization processing on the access times according to the following rules:
when the number of accesses is 0, the result of quantization is 0;
when the number of accesses is 1 to 8, the result of quantization is 1;
when the number of accesses is 9 to 31, the result of quantization is 2;
when the number of accesses is greater than 31, the result of quantization is 3.
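As a minimal sketch, the quantization rule above maps a raw access count to one of four values (the function name is illustrative):

```python
def quantize_access_count(n: int) -> int:
    """Quantize a raw access count per the four-bucket rule in the text."""
    if n == 0:
        return 0
    if 1 <= n <= 8:
        return 1
    if 9 <= n <= 31:
        return 2
    return 3  # greater than 31
```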
According to a second aspect, an embodiment provides a paging method based on a hybrid memory architecture, where the hybrid memory includes a first storage medium and a second storage medium, and an access latency of the first storage medium is smaller than an access latency of the second storage medium; the method comprises the following steps:
performing hot page prediction on an accessed page in the first storage medium and the second storage medium according to the hot page prediction method of the first aspect;
and placing the accessed page with the high weight value in the recent N accessed pages in a first storage medium.
Further, establishing a corresponding relation between each page of the first storage medium and a plurality of pages of the second storage medium to form a page exchange group;
and storing the page with the highest weight value in each page exchange group in a first storage medium.
According to the hot page prediction method and the page scheduling method of the storage medium in the above embodiments, the access information of the recent N accessed pages in the storage medium is obtained, the weight value of the currently accessed page is obtained according to the access information of the currently accessed page, and hot page prediction is performed on the accessed page according to the weight value of the accessed page, where the access information includes access times and a physical address. Because the hot page prediction is carried out on the accessed page according to the access times and the physical address in the access information, the efficiency and the accuracy of the hot page prediction are higher.
Drawings
FIG. 1 is a block diagram of a hybrid memory architecture based memory system according to an embodiment;
FIG. 2 is a block diagram of a hot page predictor in one embodiment;
FIG. 3 is a block diagram of a hot page prediction unit in one embodiment;
FIG. 4 is a schematic diagram of a first storage medium and a second storage medium grouped together in one embodiment;
FIG. 5 is a flowchart illustrating a hot page prediction method according to one embodiment;
FIG. 6 is a flowchart illustrating a method for obtaining a first feature weight value according to an embodiment;
FIG. 7 is a flowchart illustrating a method for obtaining a second feature weight value according to an embodiment;
FIG. 8 is a flowchart illustrating a method for obtaining a third feature weight value according to an embodiment;
FIG. 9 is a comparison of DRAM access rates against the MDM algorithm;
FIG. 10 is a comparison of NVM write counts against the MDM algorithm;
FIG. 11 is a comparison of DRAM access rates under different weighting tables;
FIG. 12 is a flowchart illustrating a method for paging based on a hybrid memory architecture according to another embodiment.
Detailed Description
The present invention is described in further detail below with reference to the detailed description and accompanying drawings, in which like elements in different embodiments share related reference numbers. In the following description, numerous details are set forth to provide a better understanding of the present application. However, those skilled in the art will readily recognize that some of these features may be omitted, or replaced with other elements, materials, or methods, in particular instances. In some instances, certain operations related to the present application are not shown or described in detail, to avoid obscuring the core of the application with excessive description; a detailed account of these operations is unnecessary, as those skilled in the art can fully understand them from the specification and general knowledge in the art.
Furthermore, the features, operations, or characteristics described in the specification may be combined in any suitable manner to form various embodiments. Likewise, the steps or actions in the method descriptions may be reordered in ways apparent to one of ordinary skill in the art. Thus, the sequences in the specification and drawings describe particular embodiments only and do not imply a required order unless such an order is explicitly stated.
Ordinal labels such as "first" and "second" are used herein only to distinguish the objects described and carry no sequential or technical meaning. Unless otherwise indicated, the terms "connected" and "coupled" as used in this application include both direct and indirect connections (couplings).
Several migration schemes have been proposed in the prior art, but they consider only a single feature in hot page prediction, identifying hot pages solely by a page's historical access frequency. A single frequency feature, however, sometimes fails to reflect the future hotness of data; combining several features identifies hot pages more accurately. The features that best reflect a page's hotness may differ between program phases, and at any given moment the telling feature may differ between pages, so using a single feature can yield wrong results. For example, consider two memory access sequences:
1)ACCCD…
2)BCC…CCE…
If hotness is judged only by access count (a page accessed n times is treated as hot and migrated), then with n = 3 page C is identified as hot in both cases and migrated to DRAM after its third access. In the first case, however, C is never accessed again, so migrating it incurs unnecessary overhead. If the page accessed just before C is also considered, the two cases can be distinguished and the hot page predicted more accurately: in the example above, if the preceding page was A, page C will not be hot; if it was B, page C will be hot. Accordingly, the features selected are the page's access count, its historical access count, the address of the preceding accessed page, and the address of the page currently being accessed. For each feature one could record the probability that a page is hot at each feature value; for example, for the access-count feature, record the probabilities p1, p2, ..., pn that a page accessed 1, 2, ..., n times is hot. Because different pages have different hot probabilities under the same feature value, this would have to be recorded per page and per feature, and that overhead would be unacceptably large. Instead, borrowing from techniques used in cache reuse prediction and branch prediction, the probability that different pages are hot under different feature values is predicted compactly.
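The frequency-only rule from this example can be sketched as follows; it marks page C as hot in both sequences, which is exactly the ambiguity the preceding-page feature resolves (the sequence strings and names are illustrative, with the "CC...CC" run shortened):

```python
from collections import Counter

def hot_by_count(sequence, page, n=3):
    """Frequency-only rule from the text: a page accessed at least n
    times is treated as hot."""
    return Counter(sequence)[page] >= n

seq1 = list("ACCCD")     # case 1: C reaches n accesses but is never accessed again
seq2 = list("BCCCCCE")   # case 2: C reaches n accesses and stays hot

# Frequency alone cannot distinguish the two cases; the address of the
# page accessed before C (A vs. B) is the extra feature that can.
```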
In the embodiment of the application, the access information of the recent N accessed pages in the storage medium is obtained, the weight value of the currently accessed page is obtained according to the access information of the currently accessed page, and hot page prediction is performed on the accessed page according to the weight value of the accessed page, wherein the access information comprises access times and physical addresses. Because the hot page prediction is carried out on the accessed pages according to the access times and the physical addresses in the access information of the recent N accessed pages, the efficiency and the accuracy of the hot page prediction are higher.
The first embodiment is as follows:
referring to fig. 1, a schematic structural diagram of a memory system based on a hybrid memory architecture according to an embodiment includes a first storage medium 30, a second storage medium 20, and a memory controller 10. Wherein the access latency of the first storage medium 30 is smaller than the access latency of the second storage medium 20. The memory controller 10 includes a hot page predictor 11, configured to obtain access information of recently N accessed pages in the first storage medium and the second storage medium, and further configured to, when a page is accessed in the first storage medium or the second storage medium, obtain a weight value of a currently accessed page according to the access information of the currently accessed page and the access information of the recently N accessed pages, and perform hot page prediction on the accessed page according to the weight value of the accessed page. The access information comprises access times and physical addresses, and N is a natural number. The memory controller 10 is configured to place the page with the higher weight value in the first storage medium 30. The memory controller 10 further includes a plurality of feature weight tables 12 for recording weight values corresponding to feature values of a feature of any accessed page. Each feature weight table includes a plurality of feature index values and feature weight values corresponding to the index values.
Referring to fig. 2, which is a block diagram illustrating a structure of a hot page predictor in an embodiment, the hot page predictor 11 includes N hot page prediction units, and each hot page prediction unit is configured to record the number of accesses to a recently accessed page and to record a weight value of the recently accessed page. In one embodiment, the hot page prediction unit indexes the recorded page with a physical page number.
Referring to fig. 3, which is a block diagram of the hot page prediction units in an embodiment, and taking the first hot page prediction unit 111 as an example, each hot page prediction unit records the page's access count 1111, feature index value 1112, historical access count 1113, and weight value 1114. When the physical address of the currently accessed page is not recorded in the hot page predictor 11, the hot page predictor 11 records that physical address in one of its hot page prediction units, sets the unit's access count to 1, computes a weight value for the page from its access count and physical address, and stores the weight value in the unit's weight value field. When the physical address of the currently accessed page is already recorded, the hot page predictor 11 increments the access count of the corresponding unit by 1 and updates the unit's weight value according to the page's access count and physical address. When the physical address of the currently accessed page is not recorded and all hot page prediction units are occupied, the hot page predictor 11 applies a least recently used algorithm to evict one page and then records the currently accessed page. The Least Recently Used (LRU) algorithm serves virtual paged storage management and decides evictions according to usage after paging.
Since the future use of each page cannot be predicted, the recent past serves as an approximation of the near future, so the LRU algorithm evicts the least recently used page. In one embodiment, the hot page predictor 11 uses a dedicated stack to record the page information of the N most recently accessed pages: when the currently accessed page is to be recorded, it is pushed onto the top of the stack and the other pages move toward the bottom; if all hot page prediction units in the hot page predictor 11 are occupied, the page at the bottom of the stack is removed. The top of the stack is thus always the most recently accessed page, and the bottom the least recently accessed.
In an embodiment, the first storage medium is configured to record the number of accesses of the removed page as a historical number of accesses when the removed page next enters the hot page predictor 11, or perform quantization processing on the number of accesses of the removed page first, and use a result after the quantization processing as the historical number of accesses when the removed page next enters the hot page predictor 11.
In one embodiment, the access times are quantized according to the following rules:
when the number of accesses is 0, the result of quantization is 0;
when the number of accesses is 1 to 8, the result of quantization is 1;
when the number of accesses is 9 to 31, the result of quantization is 2;
when the number of accesses is greater than 31, the result of quantization is 3.
When the physical address of the currently accessed page is not in the hot page predictor 11, the hot page predictor 11 is further configured to read the historical access count of the currently accessed page from the first storage medium and record it in the page's hot page prediction unit, i.e., in the unit's historical-access-count field.
In one embodiment, the memory controller 10 further includes a plurality of feature weight tables, i.e., a first feature weight table, a second feature weight table, and/or a third feature weight table. The first feature weight table includes a plurality of first feature index values and first feature weight values corresponding to the index values, and is used for recording a corresponding relationship between the number of times of access to any one of the accessed pages in the first storage medium and the second storage medium and the first feature weight values. The second feature weight table includes a plurality of second feature index values and second feature weight values corresponding to the index values, and is used for recording a correspondence between the historical access times of any accessed page in the first storage medium and the second feature weight values. The third feature weight table includes a plurality of third feature index values and third feature weight values corresponding to the index values, and is used for recording a correspondence between a physical address of a previously accessed page and the third feature weight values. In an embodiment, the physical address of the page previously accessed includes the physical address of the page previously accessed or the physical address of the page previously accessed twice, that is, the physical address of the page currently accessed is used as the third feature of the page next accessed, and/or the physical address of the page previously accessed relative to the page currently accessed is used as the third feature of the page next accessed. 
Each hot page prediction unit also records a first characteristic index value, a second characteristic index value and/or a third characteristic index value of the accessed page, so as to obtain a weight value of the accessed page according to the first characteristic index value, the second characteristic index value and/or the third characteristic index value of the accessed page. The hot page prediction unit is further configured to record a sum of the first characteristic weight value, the second characteristic weight value and/or the third characteristic weight value as a weight value of the accessed page, and record the sum at a weight value position of the hot page prediction unit. The first feature weight value, the second feature weight value and/or the third feature weight value are obtained according to the first feature index value, the second feature index value and/or the third feature index value of the accessed page, namely, the weight value corresponding to the index value is obtained in the feature weight table through the index value.
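A minimal sketch of the weight computation just described, where each feature index value selects a weight from its table and the page's weight is the sum of the selected weights. The table contents and names below are illustrative assumptions, not values from the patent:

```python
# Hypothetical feature weight tables: feature index value -> feature weight.
first_table = {0: 0, 1: 2, 2: 5, 3: 9}    # indexed by the page's access count
second_table = {0: 0, 1: 1, 2: 3, 3: 6}   # indexed by the historical access count
third_table = {0xA0: 4, 0xB0: 0}          # indexed by the preceding page's address

def page_weight(first_idx, second_idx, third_idx):
    """Weight of a page = sum of the per-feature weights looked up by its
    first, second, and third feature index values."""
    return (first_table.get(first_idx, 0)
            + second_table.get(second_idx, 0)
            + third_table.get(third_idx, 0))
```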
In an embodiment, when the physical address of the currently accessed page is recorded in the hot page predictor 11, and the weight value of the currently accessed page is greater than a preset value, the hot page predictor 11 does not update the weight value of the currently accessed page. That is, when the weight value recorded in a hot page prediction unit in the hot page predictor 11 is greater than a preset value, the weight value recorded in the prediction unit is not increased.
In one embodiment, the first storage medium includes an address remapping table for translating the physical addresses of accessed pages into real addresses; the memory controller accesses pages in the first or second storage medium according to the real addresses, and further includes a cache of the first storage medium's address remapping table.
Referring to fig. 4, which is a schematic diagram illustrating grouping of a first storage medium and a second storage medium in an embodiment, a memory controller is further configured to establish a corresponding relationship between each page of the first storage medium and a plurality of pages of the second storage medium to form a page swap group, and store a highest-heat page in each page swap group in the first storage medium. For example, page 1 of the first storage medium and pages 1 to m of the second storage medium form a first page swap group, and the memory controller stores a page with the largest weight value in the first page swap group in page 1 of the first storage medium.
In one embodiment, the first storage medium is DRAM and the second storage medium is NVM. Hybrid memory architectures come in two forms, a cache architecture and a parallel architecture. In a parallel architecture, DRAM and NVM form a unified address space, which makes better use of the capacity of both; the parallel architecture is therefore adopted here, with data placed either in DRAM or in NVM. When the memory controller receives a physical address, it translates the physical address into a real address through an address remapping table RT (remap table), and then fetches the data from DRAM or NVM according to the real address. To speed up address translation, a cache of the RT, the address remapping table cache RTC (remap table cache), is maintained in the memory controller. Each access updates the feature values needed for the migration decision and the feature weight tables used to decide whether to migrate, and each NVM access decides, from the weight table information, whether to migrate. If a page must be migrated into DRAM, that page and the DRAM page being swapped out are read into the memory controller's swap buffer and written back to DRAM and NVM, respectively.
Address remapping can be organized in several ways: fully associative, set associative, and direct mapped. Fully associative mapping is the most flexible, since any NVM page can be swapped to any position in the DRAM, but its remapping overhead is the largest. Direct mapping organizes one DRAM page and several NVM pages into a swap group; page migration can only take place within a group, and any NVM page can only be swapped with the single DRAM page of its group, which is less flexible but keeps the mapping overhead small. To reduce metadata overhead, this embodiment adopts direct mapping. The DRAM and the NVM form a hybrid memory in a ratio of 1:m; each DRAM page and m NVM pages form a swap group, page migration is performed only within a swap group, and only one page of each group can reside in the DRAM. The whole memory space is divided into n swap groups, numbered 1 to n; the swap group number of a page is obtained by taking its physical address modulo n, and once the swap group is determined, the RT of that group is queried to obtain the page's position within the group. When m is 32, 6 bits are needed to identify the position of each page, so a swap group requires 25 B to store its remapping information. The remapping table is placed in the DRAM; if every memory access first accessed the DRAM to obtain the page position and then read the data, a large number of extra accesses would result and performance would degrade sharply. Therefore, a cache of the remapping table, the RTC, is maintained in the memory controller, and the DRAM is accessed to fetch a remapping entry only on an RTC miss, which greatly reduces the extra accesses.
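The direct-mapped organization above can be sketched as follows; a minimal illustration (function and constant names are assumptions, not from the application) of locating a page's swap group by a modulo on its page number:

```python
def swap_group_of(page_number: int, n_groups: int) -> int:
    # As described above: the swap-group number of a page is obtained
    # by taking its physical page number modulo the number of groups n.
    return page_number % n_groups

# With m = 32 NVM pages per group, a group holds 33 pages in total
# (one DRAM page plus 32 NVM pages), so a 6-bit field suffices to
# encode a page's position within its group.
POSITION_BITS = 6
assert (1 << POSITION_BITS) >= 33
```

After the group is known, the group's RT entry is consulted for the page's in-group position, as the flow below describes.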
The whole memory access flow is as follows: after a physical address reaches the memory controller, it is translated into a real address according to the corresponding entry in the RTC, and the real address is used to access the DRAM or the NVM. If the RTC misses, the DRAM must be accessed to obtain the corresponding real address, and the fetched entry is then inserted into the RTC.
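A minimal sketch of this translation path, assuming a dictionary-backed RTC (the class and field names are illustrative, not from the application):

```python
class RemapCache:
    """Toy model of the RTC in front of the in-DRAM remapping table."""

    def __init__(self, remap_table):
        self.rtc = {}            # cached entries: swap group -> position info
        self.rt = remap_table    # full remapping table, kept in DRAM
        self.misses = 0          # each miss costs one extra DRAM access

    def translate(self, group: int):
        # On an RTC miss, fetch the entry from the DRAM-resident RT
        # and install it in the RTC before returning it.
        if group not in self.rtc:
            self.misses += 1
            self.rtc[group] = self.rt[group]
        return self.rtc[group]
```

Repeated accesses to the same swap group then hit in the RTC and avoid the extra DRAM access.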
In the embodiment of the application, hot pages are predicted with a multi-feature migration algorithm: the weight of each page being a hot page under different feature values is recorded, and the judgment is made with the sum of those weights. The embodiment maintains one weight table per feature. Four features are used: the access count, the historical access count, and the physical addresses of the two preceding accesses (that is, the physical address of the currently accessed page serves as a feature of the next accessed page, and/or the physical address of the page accessed before the currently accessed page serves as a feature of the next accessed page), so four weight tables are maintained. The XOR of a feature value with the physical address of the currently accessed page, taken modulo the size of the weight table, is used as the index into the weight table, and the position the index points to in the weight table holds the weight of the page under that feature value. If a page is evicted from the training set, the weights of the features of its last access are reduced. To be able to modify the weights of the last-accessed features, the indices of the last-accessed feature values in the weight tables must be recorded, so the information recorded in a hot page prediction unit comprises the access count (AC), the historical access count (QAC), the weight sum (yout) and four feature index values. The AC records the access count, and the QAC describes the page's AC the last time it was in the hot page predictor. When a page is evicted from the hot page predictor, its AC value is quantized, and the result becomes its new QAC value.
The four feature index values store the indices, in the four weight tables, of the feature values of the last access to the page. The yout field records the current weight sum.
To acquire and update the hot page prediction information quickly, the hot page predictor is placed in the memory controller. The space occupied by a hot page prediction unit is as follows: the AC field is a 6-bit saturating counter that records the access count; the QAC needs only 2 bits to record the quantized access count; since the weight table of each feature holds 1024 entries, each of the four feature index values is described by 10 bits, and because this embodiment analyzes accesses with 4 features, this field takes 40 bits; since the absolute value of yout is capped at 256, 9 bits are required to store the yout value. If the hot page predictor is designed to hold 1024 entries, the total overhead is about 7 KB. The memory controller maintains one weight table per feature, and this embodiment employs four features, so four weight tables are maintained in the memory controller. Each weight table entry records a weight value and requires 8 bits; each weight table holds 1024 entries, so one table costs 1 KB and the four weight tables cost 4 KB in total. The specific access flow is as follows:
When an address reaches the memory controller, the real address is looked up in the RTC and the page is looked up in the hot page predictor at the same time; the access proceeds with the real address found through the RTC, and the page is added to the hot page predictor. For each access, the weight tables are updated through the indices recorded in the hot page predictor, and the feature values of the several features are refreshed; each new feature value is XORed with the page number and the result is taken modulo the weight table size to obtain the index of this access's feature value in the weight table; the new indices are recorded in the hot page predictor, and the new weight sum is computed and updated in the hot page predictor at the same time. If no corresponding entry for the page is found in the hot page predictor, the QAC value of the page is fetched from the DRAM, the QAC value and a new AC value are written into a newly created hot page prediction unit, and new indices and a new weight sum are computed from the current feature values and recorded. When a new entry must be inserted into a full hot page predictor, the entry of one page has to be evicted; the evicted page is considered to have cooled, so the weight values in the weight tables are reduced according to the index values recorded for the evicted page, the AC value of the evicted page is quantized into a new QAC value and written back to the DRAM, and the entry is then deleted. Further, if the accessed page is an NVM page, whether to migrate it must be determined: while computing the weight sum of the page, the current weight sum of the only DRAM page it can be swapped with in its swap group is computed synchronously, and the pages are swapped only if the weight sum of the NVM page is larger.
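The per-entry bookkeeping described above (AC, QAC, yout and the four feature index values) can be sketched as a small record; the field names and the saturating update are illustrative assumptions, sized to match the bit widths given earlier:

```python
from dataclasses import dataclass, field
from typing import List

AC_MAX = (1 << 6) - 1    # 6-bit saturating access counter

@dataclass
class HotPageEntry:
    ac: int = 0                       # access count this residency (6 bits)
    qac: int = 0                      # quantized historical count (2 bits)
    yout: int = 0                     # current weight sum (9 bits, |yout| <= 256)
    indices: List[int] = field(default_factory=lambda: [0, 0, 0, 0])
                                      # last-access index into each weight table

    def bump_ac(self):
        # Saturate at the 6-bit maximum instead of wrapping around.
        self.ac = min(self.ac + 1, AC_MAX)
```

On eviction, such an entry's `ac` would be quantized into a new `qac` and written back, as the flow above describes.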
Based on the above memory system, and referring to fig. 5, which is a schematic flow chart of a hot page prediction method in an embodiment, the present application further discloses a hot page prediction method, comprising:
Step one, acquiring access information of recently accessed pages.
Access information of recently accessed pages in the storage medium is acquired, the access information comprising access counts and physical addresses. In one embodiment, the access information of the 2 most recently accessed pages is acquired, namely the access information of the page accessed immediately before the currently accessed page and the access information of the page accessed before that. In one embodiment, access information of several recently accessed pages is acquired.
And step two, acquiring the access information of the currently accessed page.
When a page of the storage medium is accessed, the access information of the currently accessed page is acquired.
Step three, acquiring the weight value according to the access information.
The weight value of the currently accessed page is acquired according to the access information of the currently accessed page and the access information of the recently accessed pages. In one embodiment, the weight value of the currently accessed page is acquired according to the access information of the currently accessed page and the access information of the page accessed immediately before. In one embodiment, the weight value of the currently accessed page is acquired according to the access information of the currently accessed page and the access information of the two most recently accessed pages.
When the currently accessed page is not among the recent N accessed pages, the currently accessed page is added to the recent N accessed pages, its access count is set to 1 so as to update the recent N accessed pages, and the weight value of the currently accessed page is acquired according to its access information. In one embodiment, the weight value of an accessed page is the sum of the first characteristic weight value, the second characteristic weight value and/or the third characteristic weight value.
Referring to fig. 6, a flowchart illustrating a method for obtaining a first feature weight value according to an embodiment of the present invention is shown, where the method for obtaining a first feature weight value includes:
Step 11, acquiring the access information of the currently accessed page.
The physical address and the access count of the currently accessed page are acquired.
Step 12, XORing the access count with the physical address.
The access count of the currently accessed page is XORed with the physical address of the currently accessed page to obtain an XOR result.
Step 13, taking the XOR result modulo the first characteristic weight table.
The XOR result of step 12 is taken modulo the total size of the first characteristic weight table to obtain a first characteristic index value. The first characteristic weight table comprises a plurality of first characteristic index values and the first characteristic weight values corresponding to those index values, and records the correspondence between the access count of any accessed page in the storage medium and its first characteristic weight value.
Step 14, retrieving the weight value in the first characteristic weight table.
A first characteristic weight value corresponding to the first characteristic index value is acquired from the first characteristic weight table. The first characteristic weight value is the page's weight value for the access-count feature.
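Steps 12 to 14 reduce to an XOR-and-modulo lookup that the other characteristics reuse as well; a minimal sketch, with assumed names and the 1024-entry table size from the embodiment:

```python
WT_SIZE = 1024  # weight-table size used in the embodiment

def feature_index(feature_value: int, physical_address: int) -> int:
    # Steps 12-13: XOR the feature value (here, the access count) with
    # the page's physical address, then reduce modulo the table size.
    return (feature_value ^ physical_address) % WT_SIZE

def lookup_weight(weight_table, feature_value, physical_address):
    # Step 14: the indexed position holds the page's weight value
    # under this feature value.
    return weight_table[feature_index(feature_value, physical_address)]
```

The second and third characteristic weight values are obtained the same way, substituting the historical access count or a preceding physical address for the feature value.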
Referring to fig. 7, a flowchart illustrating a method for obtaining a second feature weight value according to an embodiment of the present invention is shown, where the method for obtaining a second feature weight value includes:
Step 21, acquiring the historical access count of the currently accessed page.
The historical access count of the currently accessed page is acquired. The historical access count is the access count from the last time the currently accessed page was among the recent N accessed pages, or a value obtained by quantizing that access count is taken as the historical access count of the page. The access count of any accessed page in the storage medium is stored in a preset location of the storage medium; if the currently accessed page enters the recent N accessed pages for the first time, its historical access count is 0.
Step 22, XORing the historical access count with the physical address.
The historical access count of the currently accessed page is XORed with the physical address of the currently accessed page to obtain an XOR result.
Step 23, taking the XOR result modulo the second characteristic weight table.
The XOR result of step 22 is taken modulo the total size of the second characteristic weight table to obtain a second characteristic index value. The second characteristic weight table comprises a plurality of second characteristic index values and the second characteristic weight values corresponding to those index values, and records the correspondence between the historical access count of any accessed page in the storage medium and its second characteristic weight value.
Step 24, retrieving the weight value in the second characteristic weight table.
A second characteristic weight value corresponding to the second characteristic index value is acquired from the second characteristic weight table. The second characteristic weight value is the page's weight value for the historical-access-count feature.
Referring to fig. 8, a flowchart illustrating a method for obtaining a third feature weight value according to an embodiment of the present invention is shown, where the method for obtaining the third feature weight value includes:
Step 31, acquiring the physical addresses of the currently and previously accessed pages.
The physical address of the currently accessed page is acquired, and the physical address of the previously accessed page is acquired.
Step 32, XORing the physical addresses of the currently and previously accessed pages.
The physical address of the previously accessed page is XORed with the physical address of the currently accessed page to obtain an XOR result. In one embodiment, the physical address of the previously accessed page is that of the page accessed immediately before. In one embodiment, it is that of the page accessed two accesses before.
Step 33, taking the XOR result modulo the third characteristic weight table.
The XOR result of step 32 is taken modulo the total size of the third characteristic weight table to obtain a third characteristic index value. The third characteristic weight table comprises a plurality of third characteristic index values and the third characteristic weight values corresponding to those index values, and records the correspondence between the physical address of a previously accessed page and the third characteristic weight values.
Step 34, retrieving the weight value in the third characteristic weight table.
A third characteristic weight value corresponding to the third characteristic index value is acquired from the third characteristic weight table. The third characteristic weight value is the page's weight value for the previously-accessed-address feature.
When the currently accessed page is among the recent N accessed pages, the access count of the currently accessed page is increased by 1, and the physical address of the currently accessed page is acquired, so as to update the weight value of the currently accessed page according to its physical address and access count. The update consists in updating the first characteristic weight value, the second characteristic weight value and/or the third characteristic weight value of the currently accessed page: the first characteristic weight value of the currently accessed page is increased by 1, the second characteristic weight value is increased by 1, and the third characteristic weight value is increased by 1. The updated weight value of the accessed page is the sum of the updated first, second and/or third characteristic weight values. Being accessed means that the features recorded at the last access correctly predicted that the page would be accessed again, so the weights of the last-accessed features must be increased. The positions of the weight values of the three features in the weight tables are recorded in the feature index fields of the hot page prediction unit; the weight value of each feature is found in its weight table according to the recorded index, and the weight is increased according to this rule, i.e. the probability that a page with this feature is accessed increases.
After the weight values corresponding to the features of the last access are updated, the index values of the weight values corresponding to the features of this access must be recorded, so that those weight values can be modified the next time the page is accessed or evicted. For each of the three features, the page number of this access is XORed with the feature value, the XOR result is taken modulo the weight table size, and the result is the position, in the weight table, of the weight value of this access's feature.
When the currently accessed page is not among the recent N accessed pages, one page is evicted from the recent N accessed pages by a least-recently-used algorithm and the currently accessed page is added; the access count of the evicted page is recorded as its historical access count for the next time it enters the recent N accessed pages. Alternatively, the access count of the evicted page is quantized, and the quantized result is recorded as the historical access count for the next time the evicted page enters the recent N accessed pages. In one embodiment, the access count of the evicted page is quantized according to the following rules:
when the number of accesses is 0, the result of quantization is 0;
when the number of accesses is 1 to 8, the result of quantization is 1;
when the number of accesses is 9 to 31, the result of quantization is 2;
when the number of accesses is greater than 31, the result of quantization is 3.
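The quantization rule above can be transcribed directly (the function name is an assumption); the four outcomes fit the 2-bit QAC field:

```python
def quantize_access_count(ac: int) -> int:
    # Quantization rule from the description:
    #   0 accesses  -> 0
    #   1-8         -> 1
    #   9-31        -> 2
    #   more than 31 -> 3
    if ac == 0:
        return 0
    if ac <= 8:
        return 1
    if ac <= 31:
        return 2
    return 3
```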
Eviction of a page indicates that its access heat is insufficient, and its weights must be reduced. As with an access, the weight values are found according to the weight table indices of the last-accessed features recorded in the hot page prediction unit, and the weights are reduced according to this rule, reflecting that the features of the page's last access may indicate cooling. After the values of the four weight tables are modified, the entry of the evicted page is removed from the hot page predictor, and the AC value of the current round is quantized and written back to the DRAM.
In an embodiment, when the weight value of the currently accessed page is greater than a preset value, the weight value of the currently accessed page is no longer increased. Once a weight value reaches this threshold it stops growing, so that changes in the program's access pattern are reflected more quickly. Consider the following case: without such a limit, when a page is very hot in a first phase, its weight value grows large over time; when the next phase arrives, the page suddenly cools and is rarely accessed again. It then takes a long time for the weight value to be driven back down, and because the weight value is too large during that time, the hot page predictor is likely to misjudge and migrate the cooled page. Therefore, a range must be set for the weight value, so that the predictor can converge quickly when the application's memory access phase changes markedly.
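A minimal sketch of such a bounded weight update, assuming the 256 upper limit on |yout| mentioned earlier (names are illustrative):

```python
YOUT_LIMIT = 256  # upper bound on |yout| from the embodiment

def update_yout(yout: int, delta: int) -> int:
    # Clamp the weight sum to [-YOUT_LIMIT, YOUT_LIMIT] so a formerly
    # hot page's weight cannot grow without bound and can be driven
    # back down quickly once the access phase changes.
    return max(-YOUT_LIMIT, min(YOUT_LIMIT, yout + delta))
```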
In the embodiment of the application, a multi-feature hybrid-storage data migration method is provided: four features are selected for hot page prediction, and a perceptron learning method is applied to fuse multiple features that may reflect data heat for parallel prediction. The perceptron is a binary linear classifier whose input is the feature vector of an instance and whose output is the class of the instance. To predict with several features, the features form a feature vector; the dot product yout of the feature vector and the corresponding weights is the prediction result: if the dot product exceeds a threshold, the prediction is true, otherwise false. Training is the process of updating the weights: if the outcome is true and the weight value has not reached its upper limit, the weight value is increased; if the outcome is false, the weight value is reduced. After training, the weight values reflect the probability that the prediction is true under each feature. In hot page prediction, if a page is accessed, the prediction that it is a hot page is true and the weight values are increased; if the page is evicted, the prediction is false and the weight values corresponding to its features are reduced. The higher a weight value, the more likely a page with that feature is a hot page. To reduce overhead, the prediction result is not computed with a dot product; instead, the weight tables are indexed by the features, and the sum of the indexed weights replaces the dot product as the prediction result. For each NVM access, it is determined whether the page needs to be migrated.
Since the features of each access indicate the situation of the next access, whether an NVM page will be a hot page in the future is judged only from the features of the current access: the weight values corresponding to the features of this access are added to obtain the weight sum, and if the weight sum exceeds a threshold, the page is judged to be a hot page and is migrated.
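Under these rules, the migration decision for an NVM access can be sketched as follows; this combines the threshold test above with the in-group comparison described earlier (an NVM page only displaces the single DRAM page of its swap group), and the names are assumptions:

```python
def should_migrate(nvm_weight_sum: int, dram_weight_sum: int,
                   threshold: int) -> bool:
    # Migrate only if the NVM page's weight sum exceeds the hot-page
    # threshold and also exceeds the weight sum of the DRAM page it
    # would swap with inside the same swap group.
    return nvm_weight_sum > threshold and nvm_weight_sum > dram_weight_sum
```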
Compared with the prior-art MDM hot page prediction method, the method disclosed in the application considers more characteristics reflecting page heat and can predict hot pages more accurately.
Please refer to fig. 9 and fig. 10, which compare the DRAM access rate and the number of NVM writes of the method disclosed in the embodiment of the application against the MDM algorithm; the ordinate of fig. 9 is the DRAM access rate, and the ordinate of fig. 10 is the number of NVM writes. The algorithms were implemented on gem5 and NVMain and tested with the SPEC2006 benchmarks, comparing the method against the best-performing MDM migration algorithm in terms of DRAM access ratio and NVM write count. The results show that mfHMM migrates more hot pages into the DRAM, increasing the fraction of DRAM accesses while reducing the number of NVM accesses. Referring to fig. 11, which compares DRAM access rates under different weight table sizes (1024, 2048 and 4096; the ordinate is the DRAM access rate), increasing the size of the weight tables improves prediction accuracy. At an added cost of less than 12 KB, the DRAM access rate improves by 7.07% on average, and NVM writes are reduced by 20.57% on average.
Example two:
referring to fig. 12, a flow chart of another embodiment of a page scheduling method based on a hybrid memory architecture is shown, where the hybrid memory includes a first storage medium and a second storage medium, an access latency of the first storage medium is smaller than an access latency of the second storage medium, and the method includes:
step one, performing hot page prediction on the accessed page.
According to one embodiment, hot page prediction is performed, by the hot page prediction method described above, on the pages to be accessed in the first storage medium and the second storage medium.
Step two, scheduling pages.
The accessed pages with high weight values among the recent N accessed pages are placed in the first storage medium. In one embodiment, the first storage medium is DRAM and the second storage medium is NVM.
In one embodiment, when placing accessed pages with high weight values among the recent N accessed pages in the first storage medium, a correspondence is established between each page of the first storage medium and a plurality of pages of the second storage medium to form page swap groups, and the page with the highest weight value in each page swap group is then stored in the first storage medium.
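The scheduling step can be sketched as selecting, within each swap group, the page with the largest weight value for the first storage medium (function name assumed):

```python
def pick_dram_resident(weights_in_group):
    # Within one swap group, the page with the largest weight value is
    # the one that occupies the group's single slot in the first
    # (fast) storage medium; return its in-group position.
    return max(range(len(weights_in_group)),
               key=lambda i: weights_in_group[i])
```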
Those skilled in the art will appreciate that all or part of the functions of the various methods in the above embodiments may be implemented by hardware, or may be implemented by computer programs. When all or part of the functions of the above embodiments are implemented by a computer program, the program may be stored in a computer-readable storage medium, and the storage medium may include: a read only memory, a random access memory, a magnetic disk, an optical disk, a hard disk, etc., and the program is executed by a computer to realize the above functions. For example, the program may be stored in a memory of the device, and when the program in the memory is executed by the processor, all or part of the functions described above may be implemented. In addition, when all or part of the functions in the above embodiments are implemented by a computer program, the program may be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disk, a flash disk, or a removable hard disk, and may be downloaded or copied to a memory of a local device, or may be version-updated in a system of the local device, and when the program in the memory is executed by a processor, all or part of the functions in the above embodiments may be implemented.
The present invention has been described in terms of specific examples, which are provided to aid understanding of the invention and are not intended to be limiting. For a person skilled in the art to which the invention pertains, several simple deductions, modifications or substitutions may be made according to the idea of the invention.

Claims (10)

1. A method for hot page prediction for a storage medium, comprising:
acquiring access information of recent N accessed pages in the storage medium, wherein the access information comprises access times and physical addresses;
when a page of the storage medium is accessed, acquiring a weight value of the currently accessed page according to the access information of the currently accessed page and the access information of the recently N accessed pages;
and performing hot page prediction on the accessed page according to the weight value of the accessed page.
2. The method for hot page prediction according to claim 1, wherein when a page is accessed from a storage medium, obtaining a weight value of the currently accessed page according to the access information of the currently accessed page and the access information of the recently N accessed pages comprises:
when the currently accessed page is not in the recent N accessed pages, adding the currently accessed page into the recent N accessed pages, and setting the access times of the currently accessed page to be 1 so as to update the recent N accessed pages; acquiring a weight value of a currently accessed page according to access information of the currently accessed page;
when the current accessed page is in the recent N accessed pages, adding 1 to the access times of the current accessed page; acquiring a physical address of a currently accessed page; and updating the weight value of the currently accessed page according to the physical address and the access times of the currently accessed page.
3. A hot page prediction method as recited in claim 2, wherein said updating the recent N accessed pages further comprises:
when the currently accessed page is not in the recent N accessed pages, one page is removed from the recent N accessed pages by applying a least recently used algorithm, and the currently accessed page is added into the recent N accessed pages;
and recording the access times of the removed page as the historical access times when the page enters the recent N accessed pages next time.
4. A hot page prediction method as claimed in claim 3 wherein said recording the number of accesses of the culled page as the historical number of accesses of the page next time it entered the recent N accessed pages comprises:
carrying out quantization processing on the access times of the removed page;
recording the result after quantization processing as the historical access times when the removed page enters the recent N accessed pages next time;
and carrying out quantization processing on the access times according to the following rules:
when the number of accesses is 0, the result of quantization is 0;
when the number of accesses is 1 to 8, the result of quantization is 1;
when the number of accesses is 9 to 31, the result of quantization is 2;
when the number of accesses is greater than 31, the result of quantization is 3.
5. The hot page prediction method of claim 4, wherein the weight value of the accessed page is the sum of a first feature weight value, a second feature weight value and/or a third feature weight value;
the first feature weight value is acquired by:
acquiring the physical address of the currently accessed page;
performing XOR on the access count of the currently accessed page and the physical address of the currently accessed page to obtain an XOR result;
taking the XOR result modulo the total size of a first feature weight table to obtain a first feature index value;
wherein the first feature weight table comprises a plurality of first feature index values and the first feature weight values corresponding to those index values, and records the correspondence between the access count of any accessed page in the storage medium and its first feature weight value;
acquiring the first feature weight value corresponding to the first feature index value from the first feature weight table;
the second feature weight value is acquired by:
acquiring the historical access count of the currently accessed page;
performing XOR on the historical access count of the currently accessed page and the physical address of the currently accessed page to obtain an XOR result;
taking the XOR result modulo the total size of a second feature weight table to obtain a second feature index value;
wherein the second feature weight table comprises a plurality of second feature index values and the second feature weight values corresponding to those index values, and records the correspondence between the historical access count of any accessed page in the storage medium and its second feature weight value;
acquiring the second feature weight value corresponding to the second feature index value from the second feature weight table;
the third feature weight value is acquired by:
acquiring the physical address of the currently accessed page;
acquiring the physical address of a previously accessed page;
performing XOR on the physical address of the previously accessed page and the physical address of the currently accessed page to obtain an XOR result;
taking the XOR result modulo the total size of a third feature weight table to obtain a third feature index value;
wherein the third feature weight table comprises a plurality of third feature index values and the third feature weight values corresponding to those index values, and records the correspondence between the physical address of the previously accessed page and the third feature weight value;
and acquiring the third feature weight value corresponding to the third feature index value from the third feature weight table.
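The XOR-and-modulo indexing described in claim 5 can be sketched as follows. This is a minimal illustration only: the 256-entry table size and every function and variable name are assumptions not fixed by the claim.

```python
# Sketch of the feature indexing of claim 5: each feature is XORed with
# the page's physical address, the result is reduced modulo the weight
# table's total size, and the three looked-up weights are summed.
# Table size and all names are illustrative assumptions.

TABLE_SIZE = 256  # assumed size of each feature weight table

first_table = [0] * TABLE_SIZE   # indexed by access count   XOR physical address
second_table = [0] * TABLE_SIZE  # indexed by history count  XOR physical address
third_table = [0] * TABLE_SIZE   # indexed by previous addr  XOR physical address

def feature_index(feature, phys_addr, table_size=TABLE_SIZE):
    """XOR the feature with the physical address, then take the result modulo the table size."""
    return (feature ^ phys_addr) % table_size

def page_weight(phys_addr, access_count, history_count, prev_addr):
    """Weight value of the accessed page: sum of the three looked-up feature weights."""
    w1 = first_table[feature_index(access_count, phys_addr)]
    w2 = second_table[feature_index(history_count, phys_addr)]
    w3 = third_table[feature_index(prev_addr, phys_addr)]
    return w1 + w2 + w3
```

The modulo step means many (feature, address) pairs share one table entry, which keeps the tables small at the cost of occasional aliasing.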
6. The hot page prediction method of claim 5, wherein the physical address of the previously accessed page comprises the physical address of the immediately preceding accessed page, or the physical addresses of the two preceding accessed pages.
7. The hot page prediction method of claim 5, wherein updating the weight value of the currently accessed page according to the physical address and the access count of the currently accessed page comprises:
updating the first feature weight value, the second feature weight value and/or the third feature weight value of the currently accessed page;
the first feature weight value is updated by adding 1 to the first feature weight value of the currently accessed page;
the second feature weight value is updated by adding 1 to the second feature weight value of the currently accessed page;
the third feature weight value is updated by adding 1 to the third feature weight value of the currently accessed page;
and the updated weight value of the accessed page is the sum of the updated first feature weight value, the updated second feature weight value and/or the updated third feature weight value.
8. The hot page prediction method of claim 7, further comprising:
when the weight value of the currently accessed page is greater than a preset value, not updating the weight value of the currently accessed page.
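Claims 7 and 8 together describe a simple increment-with-cap training rule. A hedged sketch of that rule follows; the cap value, the table layout, and all names are assumptions, not taken from the claims.

```python
# Sketch of the update rule of claims 7-8: each feature weight of the
# currently accessed page is incremented by 1, unless the page's weight
# value already exceeds a preset value, in which case the update is
# skipped. MAX_WEIGHT and all names are illustrative assumptions.

MAX_WEIGHT = 64  # assumed preset cap on a page's weight value

def update_weights(tables, indices, max_weight=MAX_WEIGHT):
    """tables: one weight table per feature; indices: this page's index into each.

    Returns the page's weight value after the (possibly skipped) update."""
    total = sum(table[i] for table, i in zip(tables, indices))
    if total > max_weight:
        return total            # claim 8: weight above the preset value, skip the update
    for table, i in zip(tables, indices):
        table[i] += 1           # claim 7: add 1 to each feature weight
    return total + len(tables)
```

Capping the weight keeps one very hot page from saturating its (shared, aliased) table entries and inflating the predicted weight of unrelated pages that hash to the same slots.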
9. A page scheduling method based on a hybrid memory architecture, wherein the hybrid memory comprises a first storage medium and a second storage medium, and the access latency of the first storage medium is lower than that of the second storage medium; the method comprises:
performing hot page prediction on accessed pages in the first and second storage media according to the hot page prediction method of any one of claims 1 to 8;
and placing, among the N most recently accessed pages, the pages with high weight values in the first storage medium.
10. The page scheduling method of claim 9, wherein placing, among the N most recently accessed pages, the pages with high weight values in the first storage medium comprises:
establishing a correspondence between each page of the first storage medium and a plurality of pages of the second storage medium to form page exchange groups;
and storing the page with the highest weight value in each page exchange group in the first storage medium.
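One way to read claim 10 is as a per-group max selection: each fast-medium page frame forms a group with several slow-medium frames, and the highest-weight page of the group is kept in the fast medium. A minimal sketch under that reading, with all names hypothetical:

```python
# Sketch of the page-exchange-group scheduling of claim 10: within each
# group pairing one fast-medium frame with several slow-medium frames,
# the page with the highest weight value is placed in the fast medium.
# Function and parameter names are illustrative assumptions.

def schedule_group(fast_page, slow_pages, weight_of):
    """Return (page for the fast medium, pages left in the slow medium)."""
    group = [fast_page] + list(slow_pages)
    hottest = max(group, key=weight_of)  # highest weight value wins the fast slot
    rest = list(group)
    rest.remove(hottest)
    return hottest, rest
```

In a real scheduler the swap would only be carried out when `hottest` differs from the page already resident in the fast medium, so that an already-correct group costs no migration.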
CN201910791493.8A 2019-08-26 2019-08-26 Hot page prediction method and page scheduling method of storage medium Active CN110795363B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910791493.8A CN110795363B (en) 2019-08-26 2019-08-26 Hot page prediction method and page scheduling method of storage medium

Publications (2)

Publication Number Publication Date
CN110795363A true CN110795363A (en) 2020-02-14
CN110795363B CN110795363B (en) 2023-05-23

Family

ID=69427051

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910791493.8A Active CN110795363B (en) 2019-08-26 2019-08-26 Hot page prediction method and page scheduling method of storage medium

Country Status (1)

Country Link
CN (1) CN110795363B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140258663A1 (en) * 2013-03-05 2014-09-11 Qualcomm Incorporated Method and apparatus for preventing unauthorized access to contents of a register under certain conditions when performing a hardware table walk (hwtw)
CN105659212A * 2013-08-22 2016-06-08 GlobalFoundries Inc. Detection of hot pages for partition hibernation
CN106709068A * 2017-01-22 2017-05-24 Zhengzhou Yunhai Information Technology Co., Ltd. Hotspot data identification method and device
CN107193646A * 2017-05-24 2017-09-22 PLA University of Science and Technology Efficient dynamic page scheduling method based on a hybrid main memory architecture
US20170277640A1 (en) * 2016-03-22 2017-09-28 Huazhong University Of Science And Technology Dram/nvm hierarchical heterogeneous memory access method and system with software-hardware cooperative management

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN, Junxi; SHA, Xingmian; ZHUGE, Qingfeng; CHEN, Xianzhang: "Study on the Performance and Energy Consumption of Hybrid Memory Page Management Strategies" *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022057749A1 * 2020-09-21 2022-03-24 Huawei Technologies Co., Ltd. Method and apparatus for handling memory page fault exception, and device and storage medium
CN112214302A * 2020-10-30 2021-01-12 Institute of Computing Technology, Chinese Academy of Sciences Process scheduling method
CN112214302B * 2020-10-30 2023-07-21 Institute of Computing Technology, Chinese Academy of Sciences Process scheduling method
CN113282585A * 2021-05-28 2021-08-20 Shandong Inspur Genersoft Information Technology Co., Ltd. Report calculation method, device, equipment and medium
CN113282585B * 2021-05-28 2023-12-29 Inspur Genersoft Co., Ltd. Report calculation method, device, equipment and medium
CN114116528A * 2021-11-22 2022-03-01 Shenzhen University Memory access address prediction method and device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN110795363B (en) 2023-05-23

Similar Documents

Publication Publication Date Title
CN107193646B (en) High-efficiency dynamic page scheduling method based on mixed main memory architecture
CN110795363B (en) Hot page prediction method and page scheduling method of storage medium
CN110532200B (en) Memory system based on hybrid memory architecture
TWI684099B (en) Profiling cache replacement
CN103885728B (en) A kind of disk buffering system based on solid-state disk
CN105205014B A data storage method and device
US8595463B2 (en) Memory architecture with policy based data storage
US20180275899A1 (en) Hardware based map acceleration using forward and reverse cache tables
CN108804350A (en) A kind of memory pool access method and computer system
CN104503703B (en) The treating method and apparatus of caching
CN110888600B (en) Buffer area management method for NAND flash memory
US20120117297A1 (en) Storage tiering with minimal use of dram memory for header overhead
CN103942161B (en) Redundancy elimination system and method for read-only cache and redundancy elimination method for cache
CN100377117C (en) Method and device for converting virtual address, reading and writing high-speed buffer memory
CN103383666B (en) Improve method and system and the cache access method of cache prefetching data locality
CN110297787A (en) The method, device and equipment of I/O equipment access memory
CN103309815A (en) Method and system for increasing available capacity and service life of solid state disc
CN110347338B (en) Hybrid memory data exchange processing method, system and readable storage medium
Liu et al. FLAP: Flash-aware prefetching for improving SSD-based disk cache
CN112148639A (en) High-efficiency small-capacity cache memory replacement method and system
Deng et al. Herniated hash tables: Exploiting multi-level phase change memory for in-place data expansion
Pratibha et al. Efficient flash translation layer for flash memory
Woo et al. FMMU: a hardware-accelerated flash map management unit for scalable performance of flash-based SSDs
CN111796757A (en) Solid state disk cache region management method and device
TW202001791A (en) Image processing system and memory managing method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant