CN113010452B - Efficient virtual memory architecture supporting QoS - Google Patents


Info

Publication number
CN113010452B
CN113010452B (application CN202110285644.XA)
Authority
CN
China
Prior art keywords
virtual
memory
atm
qos
pvm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110285644.XA
Other languages
Chinese (zh)
Other versions
CN113010452A (en
Inventor
柴杰
康一
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN202110285644.XA priority Critical patent/CN113010452B/en
Publication of CN113010452A publication Critical patent/CN113010452A/en
Application granted granted Critical
Publication of CN113010452B publication Critical patent/CN113010452B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/0292User address space allocation, e.g. contiguous or non contiguous base addressing using tables or multilevel address translation means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0893Caches characterised by their organisation or structure
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/10Address translation
    • G06F12/1027Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • G06F3/064Management of blocks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0646Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F3/0647Migration mechanisms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0662Virtualisation aspects
    • G06F3/0665Virtualisation aspects at area level, e.g. provisioning of virtual or logical volumes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0662Virtualisation aspects
    • G06F3/0667Virtualisation aspects at data level, e.g. file, record or object virtualisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses an efficient virtual memory architecture supporting QoS, comprising: VB (Virtual Block), PVM (Process-VB Metadata), ATM (Address Translation Metadata) and a hardware microarchitecture. In the invention, all processes reside in a single virtual address space composed of mutually non-overlapping VBs, which avoids the homonym (same name, different data) and synonym (same data, different names) problems of VIVT caches (virtually indexed, virtually tagged caches); the PVM maintains the binding relation between a process and its VBs, eliminating the multiple address translations otherwise required in a virtualized environment; and, based on the ATM and with the aid of the real-time memory monitoring and data migration units, dynamic and efficient management of the hybrid physical memory is realized.

Description

Efficient virtual memory architecture supporting QoS
Technical Field
The present invention relates to the technical field of operating systems and chip microarchitecture, and in particular, to an efficient virtual memory architecture supporting QoS.
Background
With the emergence of new storage media and the increasing diversity of applications, existing virtual memory can no longer meet requirements. Its defects are mainly twofold:
1. Access latency is too high. In virtualized environments in particular, each access requires multiple address translations, which significantly increases access latency.
2. QoS is not supported. Because the QoS requirements of an application are invisible or unknown to hardware, the system's optimization space is limited, preventing improvements in performance and efficiency.
Therefore, there is a need for an efficient, QoS-supporting virtual memory that allows the system to adaptively and dynamically allocate memory resources so as to maximize system performance and efficiency.
Disclosure of Invention
The invention aims to provide an efficient virtual memory architecture supporting QoS that realizes dynamic and efficient management of physical memory, thereby effectively alleviating the above technical defects.
The aim of the invention is achieved by the following technical scheme:
an efficient, QoS-supporting virtual memory architecture, comprising: virtual blocks VB and a hardware microarchitecture, together with two matched metadata structures PVM and ATM, where PVM is the metadata structure of a process's virtual blocks and ATM is the metadata structure for address translation, wherein:
all processes are in the same virtual address space, which is composed of a plurality of mutually non-overlapping VBs, each VB being a contiguous segment of virtual address space used to store program code and data;
each process has one PVM, which maintains the process's ownership of its VBs;
each VB maintains one ATM, and QoS information of the corresponding VB is stored in the ATM;
the hardware micro-architecture is used for executing address translation and realizing management of the hybrid physical memory in combination with the ATM.
According to the technical scheme provided by the invention, all processes are in the same virtual address space, which is composed of mutually non-overlapping VBs (Virtual Blocks), so that the homonym and synonym problems of VIVT caches are avoided; the process's ownership of its VBs, maintained by the PVM, eliminates the multiple address translations otherwise needed in a virtualized environment; and, based on the ATM and with the aid of the real-time memory monitoring and data migration units, dynamic and efficient management of the physical memory is realized.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and other drawings may be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic diagram of a virtual memory architecture with high efficiency and QoS support according to an embodiment of the present invention;
FIG. 2 is a diagram of virtual address organization in RISC-V Sv48 mode according to an embodiment of the present invention;
FIG. 3 is a PVM and VIT Entry design drawing of an embodiment of the present invention;
FIG. 4 is a schematic diagram of Range translation of an embodiment of the invention;
FIG. 5 is a Range table design of an embodiment of the present invention;
FIG. 6 is a diagram illustrating a virtual memory hardware architecture according to an embodiment of the present invention;
FIG. 7 is a TLB access flow diagram of an embodiment of the present invention;
FIG. 8 is a Range TLB micro-architecture diagram of an embodiment of the present invention;
FIG. 9 is a QoS-capable memory controller micro-architecture diagram in accordance with an embodiment of the present invention;
FIG. 10 is a flow chart of dynamic migration of memory data according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to fall within the scope of the invention.
An embodiment of the present invention provides an efficient virtual memory architecture supporting QoS, as shown in fig. 1, which mainly includes: VB (Virtual Block) and a hardware microarchitecture, together with two matched metadata structures, PVM (Process-VB Metadata) and ATM (Address Translation Metadata), wherein:
all processes are in the same virtual address space, which is composed of a plurality of mutually non-overlapping VBs; each VB is a contiguous segment of virtual address space used to store program code and data. Each process has a PVM that maintains the binding relation between the process and its VBs and controls memory access rights. Each VB maintains an ATM that stores the QoS information of that VB. The hardware microarchitecture performs address translation and, in combination with the ATM, manages the hybrid physical memory.
When memory is accessed, the system first performs the PVM check, and then adaptively configures hardware resources according to the QoS information in the ATM and the current memory bandwidth utilization, so as to maximize performance and efficiency.
Further, each VB is globally unique, and VBs do not overlap one another.
Further, the PVM stores information about the VBs the process owns and the process's access rights to those VBs; the VB information includes the globally unique VB number and the VB capacity.
Further, the ATM includes: a VIT (VB Info Table) and a Page or Range Table; the QoS information of the corresponding VB is stored in the VIT.
Further, the hardware microarchitecture includes: a PVM Cache (a cache of the process virtual-block metadata) and a QoS-capable memory controller. The QoS-capable memory controller performs address translation and memory management and comprises: an ATM Cache (a cache of the address-translation metadata), a real-time memory monitoring unit and a data migration unit. When virtual memory is accessed, the QoS information of the VB is obtained through the ATM Cache, the access request is processed in combination with the data migration unit, and data is dynamically allocated and migrated according to information such as the memory bandwidth utilization obtained by the real-time monitoring unit.
In the scheme of the embodiment of the invention, all processes are in the same virtual address space, which is composed of mutually non-overlapping VBs, so that the homonym and synonym problems of VIVT (Virtually Indexed, Virtually Tagged) caches are avoided; the binding of process and VB maintained by the PVM eliminates the need for multiple address translations in a virtualized environment. In addition, access rights and address translation are separated: memory access rights are removed from the traditional page table entry and maintained in the PVM, while the Page or Range tables and the QoS information of each VB are maintained in the ATM, with address translation implemented in the memory controller. To improve the hit rate of the TLB (translation lookaside buffer), a Range table and a Range TLB are added alongside the Page table. Finally, to improve the timeliness of dynamic memory management, a PVM Cache and an ATM Cache are introduced into the hardware microarchitecture, where the ATM Cache comprises the VIT Cache and the TLB.
For ease of understanding, the following details are provided for each part of the virtual memory architecture and the working principle.
As shown in FIG. 1, mutually non-overlapping VBs (e.g., VB1 to VB4 in FIG. 1) form a single virtual address space (Single Virtual Address Space) in which all processes reside. Each process corresponds to one PVM (e.g., P1, P2, …, Pn in FIG. 1), and the PVM stores information about the VBs the process owns and the process's access rights to them. VB resources are managed by ownership (Ownership): a process holds ownership of its VBs, and ownership of a VB can be transferred or shared between processes. In addition, the operating system maintains an ATM for each VB; the ATM stores the QoS information of the VB (such as latency and bandwidth sensitivity), and dynamic, efficient management of the hybrid physical memory is realized according to the QoS information in the ATM with the aid of the real-time memory monitoring and data migration units. It should be noted that the number of VBs shown in FIG. 1 is merely an example, not a limitation; in practice the number of VBs can be set according to actual requirements, and likewise the number of PVMs varies with the actual situation.
As shown in FIG. 2, the single virtual address space is 64 bits wide: a single machine uses the RISC-V Sv48 page table mode, while a distributed system or cluster uses the RISC-V Sv64 page table mode. The capacity of a VB may be any one of eight sizes ranging from 4KB, 8KB, 16KB, … up to 128TB, according to the specific application scenario and requirements, and each VB has a globally unique number (VB Unique ID, VBUID for short). Because the VBUID is globally unique, homonyms in VIVT caches are impossible; because VBs do not overlap, synonyms are likewise impossible. Together, this design clears the barrier to the practical adoption of VIVT caches (virtual caches).
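The VBUID-plus-offset addressing described above can be sketched as a simple bit-packing scheme. This is an illustrative model only: the 16/48-bit field split below is an assumption for demonstration, not a width fixed by the patent.

```python
# Illustrative sketch: pack a globally unique VB number (VBUID) and an
# in-VB offset into one 64-bit virtual address. The 16/48 field split is
# an assumed example; the patent does not fix the bit widths.
VBUID_BITS = 16
OFFSET_BITS = 48

def make_va(vbuid: int, offset: int) -> int:
    """Pack (vbuid, offset) into a single virtual address."""
    assert 0 <= vbuid < (1 << VBUID_BITS)
    assert 0 <= offset < (1 << OFFSET_BITS)
    return (vbuid << OFFSET_BITS) | offset

def split_va(va: int) -> tuple:
    """Recover (vbuid, offset) from a packed virtual address."""
    return va >> OFFSET_BITS, va & ((1 << OFFSET_BITS) - 1)

va = make_va(vbuid=7, offset=0x1000)
assert split_va(va) == (7, 0x1000)
```

Because the VBUID field is globally unique across all processes, two different packed addresses can never name the same data, which is exactly the property that removes VIVT cache homonyms and synonyms.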
As shown in fig. 3, each process has a PVM storing the VBs the process owns and its access rights to them. The PVM entry design is shown in part (a) of FIG. 3. A process and a VB can be dynamically bound and unbound, and the binding/unbinding principle is the same for host processes and virtual machine processes. With this design, each access requires at most one address translation even in a virtualized environment, which solves the multiple-translation problem of traditional systems. In addition, the operating system maintains an ATM for each VB. The ATM includes two parts: the VIT and the Page or Range Table. The VIT entry design is shown in part (b) of FIG. 3; in a VIT entry, the Props (data properties) field stores the QoS information of the VB (e.g., latency and bandwidth sensitivity).
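The PVM check described above can be sketched as a small lookup-and-verify step performed before any virtual address is formed. The class and field names (`vbuid`, `size`, `perms`) are illustrative assumptions, not the patent's actual entry layout.

```python
# Hedged sketch of the PVM permission check: each process's PVM maps the
# VBs it owns to access rights, and an access must pass this check before
# a virtual address is generated. Names are illustrative, not normative.
class PVMEntry:
    def __init__(self, vbuid, size, perms):
        self.vbuid = vbuid    # globally unique VB number
        self.size = size      # VB capacity in bytes
        self.perms = perms    # e.g. {"r", "w", "x"}

class PVM:
    def __init__(self):
        self.entries = {}     # VB index within this process -> PVMEntry

    def bind(self, index, entry):     # dynamic binding of process and VB
        self.entries[index] = entry

    def unbind(self, index):          # dynamic unbinding
        self.entries.pop(index, None)

    def check(self, index, offset, op):
        """Return the VBUID if the access is legal, else raise."""
        e = self.entries.get(index)
        if e is None or op not in e.perms or offset >= e.size:
            raise PermissionError("PVM check failed")
        return e.vbuid

pvm = PVM()
pvm.bind(0, PVMEntry(vbuid=42, size=4096, perms={"r", "w"}))
assert pvm.check(0, 0x10, "r") == 42
```

Note how the check returns the VBUID: in the architecture this is the moment the (VBUID, offset) virtual address is composed, so rights are enforced once, up front, rather than on every page table entry.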
As shown in FIG. 4, the contiguous virtual address range [BASE, LIMIT] maps to the contiguous physical address range [BASE+OFFSET, LIMIT+OFFSET], with addresses aligned to the page size. Compared with the fine-grained Page Table, the Range Table covers a larger memory space and improves the TLB hit rate.
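Range translation therefore reduces to a bounds check plus one addition, which is what makes a single Range Table entry able to stand in for thousands of page table entries. A minimal sketch of this mapping:

```python
# Sketch of Range translation (Fig. 4): a contiguous virtual range
# [base, limit] maps to [base+offset, limit+offset], so translating an
# address is a bounds check followed by one addition.
def range_translate(va, base, limit, offset):
    """Translate va if it falls in [base, limit]; return None on a miss."""
    if base <= va <= limit:
        return va + offset
    return None

assert range_translate(0x2000, 0x1000, 0x3000, 0x10000) == 0x12000
assert range_translate(0x4000, 0x1000, 0x3000, 0x10000) is None
```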
As shown in fig. 5, a 4-level B-tree is used to implement the Range table; when the depth of the tree is 3, the upper limit on stored RTEs (Range Table Entries) is 124, which meets the needs of most applications. The query operation of this design has low (logarithmic) time complexity, and the good locality of B-tree data gives high cache access efficiency.
As shown in fig. 6, after a VB is successfully requested from the operating system, the PVM Cache is accessed with an address composed of an index and an offset, where index is the index of the VB in the owning process's PVM and offset is the instruction's offset within the VB. After the PVM check succeeds, a virtual address composed of the VBUID and the offset is generated, which is then used to access the L1 Cache (first-level cache), L2 Cache (second-level cache) and LLC (all VIVT caches). Because of the VIVT design, address translation is performed only on an LLC miss: the VIT in the ATM is accessed according to the virtual address, and then the Page or Range table is accessed via the Ptr2Page or Ptr2Range pointer in the VIT to translate the virtual address into a physical address. Finally, to reduce the access latency of the PVM and ATM, a PVM Cache and an ATM Cache are introduced in the hardware microarchitecture to cache the PVM and ATM respectively; the ATM Cache comprises the VIT Cache and the TLB.
As shown in fig. 7, the L1 TLB is accessed first. If the L1 TLB misses, the L2 TLB and the Range TLB are accessed in parallel.
As shown in FIG. 8, the Range TLB uses a 32-entry fully associative design and occupies little chip area. The most recently used entry is kept in the MRU (Most Recently Used) slot. On an MRU miss, the page number of the virtual address is compared against each [BASE, LIMIT] range; on a hit, the physical address is generated from the physical frame number PFN and the OFFSET.
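The MRU-first, fully associative lookup can be sketched as below. This is a behavioral model only; the real structure compares all 32 entries in parallel in hardware, and the entry format here (base, limit, offset) is a simplification of the PFN-based entry in Fig. 8.

```python
# Hedged sketch of the 32-entry fully associative Range TLB (Fig. 8):
# the MRU entry is probed first; on an MRU miss all entries are searched,
# and the hitting entry is promoted to the MRU position.
class RangeTLB:
    CAPACITY = 32

    def __init__(self):
        self.entries = []    # list of (base, limit, offset); index 0 = MRU

    def insert(self, base, limit, offset):
        self.entries.insert(0, (base, limit, offset))
        del self.entries[self.CAPACITY:]    # evict beyond 32 entries

    def lookup(self, va):
        """Return the physical address on a hit, else None."""
        for i, (base, limit, offset) in enumerate(self.entries):
            if base <= va <= limit:
                # promote the hitting entry to the MRU position
                self.entries.insert(0, self.entries.pop(i))
                return va + offset
        return None

tlb = RangeTLB()
tlb.insert(0x1000, 0x3000, 0x10000)
assert tlb.lookup(0x2000) == 0x12000
assert tlb.lookup(0x9000) is None
```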
As shown in fig. 9, the hybrid physical memory in the hardware microarchitecture is composed of MEM0 and MEM1: MEM0 is a low-latency, high-bandwidth DRAM (e.g., 3D-DRAM), and MEM1 is a low-cost, high-capacity DDR DRAM or persistent memory (Persistent Memory). The Props field in the VIT classifies data into three QoS classes, in priority order from high to low: latency sensitive (Latency sensitive), bandwidth sensitive (Bandwidth sensitive) and insensitive (Insensitive); data with no identified QoS requirement is treated as insensitive. Frequently accessed data is called hot data and infrequently accessed data cold data, with hot data taking priority over cold data. The overall priority is therefore: latency-sensitive hot data > latency-sensitive cold data > bandwidth-sensitive hot data > bandwidth-sensitive cold data > insensitive hot data > insensitive cold data. In the data migration unit (Migration controller), the Remap table records the mapping from old to new physical addresses for memory blocks under migration, the Migration buffer caches the page data under migration, and the Wait Queue holds read/write requests that target pages being migrated. On a memory access, the VIT Cache in the ATM Cache is first consulted to translate the virtual address into a physical address. If the physical address lies in a memory block under migration, the Remap table is queried; on a hit the Migration buffer is queried, and on a miss the read/write request is placed in the Wait Queue. If the physical address is not in a block under migration, the request is dispatched to the corresponding queue according to its Props (data properties), and the read/write request is finally handled by the MEM0 and MEM1 controllers.
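The six-level priority order above can be captured with a small ranking function. The numeric encoding below is an illustrative choice; the patent specifies only the ordering, not an encoding.

```python
# Sketch of the six-level priority order used by the migration logic:
# latency-sensitive > bandwidth-sensitive > insensitive, and within each
# QoS class hot data outranks cold data. Lower value = higher priority.
QOS_RANK = {"latency": 0, "bandwidth": 1, "insensitive": 2}

def priority(qos: str, hot: bool) -> int:
    """Map (QoS class, hotness) to a rank; hot beats cold within a class."""
    return QOS_RANK[qos] * 2 + (0 if hot else 1)

order = sorted(
    [("insensitive", False), ("latency", True), ("bandwidth", True),
     ("latency", False), ("insensitive", True), ("bandwidth", False)],
    key=lambda d: priority(*d),
)
assert order[0] == ("latency", True)        # latency-sensitive hot data first
assert order[-1] == ("insensitive", False)  # insensitive cold data last
```

The migration controller would use such a rank to pick which block to promote to MEM0 or demote to MEM1 when utilization states change.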
In addition, the Monitoring unit periodically monitors the memory bandwidth utilization, and the processor dynamically allocates and migrates data accordingly, so as to maximize performance and efficiency.
As shown in fig. 10, since memory bandwidth utilization and latency are positively correlated, latency can be gauged by using bandwidth utilization as the monitored parameter. With the medium bandwidth utilization K1 and the high bandwidth utilization K2 as boundaries (the specific values of K1 and K2 can be set according to actual conditions or experience), the physical memory is divided into three states: LMU (Low Memory Utilization), HMU (High Memory Utilization) and congested. The Monitoring unit monitors the bandwidth utilization of the physical memory in real time, and the system adaptively adjusts and dynamically migrates data according to it so as to maximize performance and efficiency. The main flow is as follows: the Monitoring unit monitors the bandwidth utilization of MEM0 and MEM1 in real time, and then executes the data allocation or data migration sub-flow as the situation requires. 1) On a request to allocate memory, judge whether MEM0 is in the LMU state and has free space; if so, allocate the space in MEM0, otherwise allocate it in MEM1. 2) Judge whether MEM0 is in the LMU state; if so, migrate the highest-priority memory block in MEM1 to MEM0; if not, exchange the lowest-priority block in MEM0 with the highest-priority block in MEM1, then judge whether MEM0 is in the HMU state and MEM1 in the LMU state, and if so migrate the lowest-priority block in MEM0 to MEM1.
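The allocation and migration decisions of this flow can be sketched as follows. The threshold values K1 and K2 are illustrative constants (the patent leaves them to be tuned), and the action strings merely label the branches of Fig. 10.

```python
# Hedged sketch of the Fig. 10 decision flow. K1/K2 are illustrative
# threshold values; MEM0 is the fast memory, MEM1 the large/cheap one.
K1, K2 = 0.5, 0.8   # medium / high bandwidth-utilization boundaries (assumed)

def state(util):
    """Classify one memory's bandwidth utilization into the three states."""
    if util < K1:
        return "LMU"          # low utilization
    if util < K2:
        return "HMU"          # high utilization
    return "congested"

def allocate(mem0_util, mem0_has_free):
    """New allocations go to MEM0 only when it is LMU with free space."""
    return "MEM0" if state(mem0_util) == "LMU" and mem0_has_free else "MEM1"

def migrate(mem0_util, mem1_util):
    """Return the list of migration actions for the current states."""
    actions = []
    if state(mem0_util) == "LMU":
        actions.append("promote: highest-priority MEM1 block -> MEM0")
    else:
        actions.append("swap: lowest-priority MEM0 <-> highest-priority MEM1")
        if state(mem0_util) == "HMU" and state(mem1_util) == "LMU":
            actions.append("demote: lowest-priority MEM0 block -> MEM1")
    return actions

assert allocate(0.3, True) == "MEM0"
assert allocate(0.9, True) == "MEM1"
assert migrate(0.3, 0.9) == ["promote: highest-priority MEM1 block -> MEM0"]
```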
The foregoing is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the scope of the present invention should be included in the scope of the present invention. Therefore, the protection scope of the present invention should be subject to the protection scope of the claims.

Claims (5)

1. An efficient QoS-enabled virtual memory architecture, comprising: virtual blocks VB and a hardware microarchitecture, together with two matched metadata structures PVM and ATM, wherein PVM is the metadata structure of a process's virtual blocks and ATM is the metadata structure for address translation, wherein:
all processes are in the same virtual address space, which is composed of a plurality of mutually non-overlapping VBs, each VB being a contiguous segment of virtual address space used to store program code and data;
each process has one of the PVMs for maintaining ownership of VB by the process;
each VB maintains one ATM, and QoS information of the corresponding VB is stored in the ATM;
the hardware micro-architecture is used for executing address translation and realizing management of the hybrid physical memory in combination with the ATM.
2. An efficient QoS-enabled virtual memory architecture according to claim 1, wherein each of said VBs has a globally unique number VBUID.
3. The virtual memory architecture of claim 1, wherein the PVM stores information about VB to which the process belongs and access rights of the process to the VB; wherein, the relevant information of VB includes: globally unique number of VB and capacity of VB.
4. An efficient QoS-enabled virtual memory architecture according to claim 1, wherein said ATM comprises: a virtual block information table VIT and a Page or Range Table; wherein the QoS information of the corresponding VB is stored in the VIT.
5. The efficient QoS-enabled virtual memory architecture of claim 1, wherein said hardware microarchitecture comprises: a PVM Cache and a QoS-capable memory controller; wherein the QoS-capable memory controller is configured to perform address translation and comprises: an ATM Cache, a real-time memory monitoring unit and a data migration unit; when the virtual memory is accessed, the QoS information of the VB is obtained through the ATM Cache, the access request is processed in combination with the data migration unit, and data is dynamically allocated and migrated according to the memory bandwidth utilization obtained by the real-time monitoring unit.
CN202110285644.XA 2021-03-17 2021-03-17 Efficient virtual memory architecture supporting QoS Active CN113010452B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110285644.XA CN113010452B (en) 2021-03-17 2021-03-17 Efficient virtual memory architecture supporting QoS


Publications (2)

Publication Number Publication Date
CN113010452A CN113010452A (en) 2021-06-22
CN113010452B true CN113010452B (en) 2023-11-28

Family

ID=76409169

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110285644.XA Active CN113010452B (en) 2021-03-17 2021-03-17 Efficient virtual memory architecture supporting QoS

Country Status (1)

Country Link
CN (1) CN113010452B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113703690B (en) * 2021-10-28 2022-02-22 北京微核芯科技有限公司 Processor unit, method for accessing memory, computer mainboard and computer system
CN117827417A (en) * 2022-09-28 2024-04-05 华为技术有限公司 Memory management method and related equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019237791A1 (en) * 2018-06-12 2019-12-19 华为技术有限公司 Virtualized cache implementation method and physical machine
CN110688330A (en) * 2019-09-23 2020-01-14 北京航空航天大学 Virtual memory address translation method based on memory mapping adjacency
CN110869913A (en) * 2017-07-14 2020-03-06 Arm有限公司 Memory system for data processing network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140108701A1 (en) * 2010-07-16 2014-04-17 Memory Technologies Llc Memory protection unit in a virtual processing environment


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
基于MIPS架构的内存虚拟化研究 (Research on memory virtualization based on the MIPS architecture); 蔡万伟, 台运方, 刘奇, 张戈; Journal of Computer Research and Development (计算机研究与发展), No. 10; full text *
采用分区域管理的软硬件协作高能效末级高速缓存设计 (Energy-efficient last-level cache design with software-hardware cooperation and region-based management); 黄涛, 王晶, 管雪涛, 钟祺, 王克义; Journal of Computer-Aided Design & Computer Graphics (计算机辅助设计与图形学学报), No. 11; full text *

Also Published As

Publication number Publication date
CN113010452A (en) 2021-06-22

Similar Documents

Publication Publication Date Title
EP2645259B1 (en) Method, device and system for caching data in multi-node system
US10552337B2 (en) Memory management and device
EP3414665B1 (en) Profiling cache replacement
KR101944876B1 (en) File access method and apparatus and storage device
US8095736B2 (en) Methods and systems for dynamic cache partitioning for distributed applications operating on multiprocessor architectures
US7818489B2 (en) Integrating data from symmetric and asymmetric memory
US7783859B2 (en) Processing system implementing variable page size memory organization
CN113010452B (en) Efficient virtual memory architecture supporting QoS
KR101587579B1 (en) Memory balancing method for virtual system
US20230418737A1 (en) System and method for multimodal computer address space provisioning
KR20210158431A (en) A memory management unit (MMU) for accessing the borrowed memory
WO2014051544A2 (en) Improved performance and energy efficiency while using large pages
WO2013023090A2 (en) Systems and methods for a file-level cache
CN110674051A (en) Data storage method and device
CN111183414A (en) Caching method and system based on service level agreement
US20160103766A1 (en) Lookup of a data structure containing a mapping between a virtual address space and a physical address space
US11687359B2 (en) Hybrid memory management apparatus and method for many-to-one virtualization environment
US20220171656A1 (en) Adjustable-precision multidimensional memory entropy sampling for optimizing memory resource allocation
CN101950274A (en) Data access device based on supervisor mode maintenance and problem mode share as well as method thereof
Wang et al. Superpage-Friendly Page Table Design for Hybrid Memory Systems
김현익 RapidSwap: An Efficient Hierarchical Far Memory
Bhattacharjee et al. Advanced VM Hardware-software Co-design
CN118210622A (en) Memory allocation method and computing device
KR20000014803A (en) Page directory sharing method of a main electronic computer

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant