CN113010452A - Efficient virtual memory architecture supporting QoS - Google Patents
Efficient virtual memory architecture supporting QoS
- Publication number
- CN113010452A (application CN202110285644.XA)
- Authority
- CN
- China
- Prior art keywords
- virtual
- memory
- qos
- atm
- pvm
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F12/0292—User address space allocation, e.g. contiguous or non contiguous base addressing, using tables or multilevel address translation means
- G06F12/0893—Caches characterised by their organisation or structure
- G06F12/1027—Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]
- G06F3/064—Management of blocks
- G06F3/0647—Migration mechanisms
- G06F3/0665—Virtualisation aspects at area level, e.g. provisioning of virtual or logical volumes
- G06F3/0667—Virtualisation aspects at data level, e.g. file, record or object virtualisation

(All classifications fall under G—Physics; G06F—Electric digital data processing.)
Abstract
The invention discloses an efficient virtual memory architecture supporting QoS, comprising: VB (Virtual Block), PVM (Process-VB Metadata), ATM (Address Translation Metadata), and a hardware microarchitecture. In the invention, all processes reside in the same virtual address space, which is composed of mutually non-overlapping VBs, thereby avoiding the homonym and synonym problems of VIVT (Virtually Indexed, Virtually Tagged) caches. The PVM maintains the binding relationship between a process and its VBs, eliminating the multiple address translations otherwise required in a virtualized environment. Based on the ATM, and with the help of a real-time memory monitoring unit and a data migration unit, dynamic and efficient management of hybrid physical memory is achieved.
Description
Technical Field
The invention relates to the technical field of operating systems and chip microarchitecture, and in particular to an efficient virtual memory architecture supporting QoS.
Background
With the emergence of new storage media and the growing diversity of applications, existing virtual memory increasingly fails to meet requirements. Its defects are twofold:
1. Access latency is too high. Especially in a virtualized environment, each memory access may require multiple address translations, which significantly increases access latency.
2. QoS is not supported. Because the QoS requirements of an application are invisible to the hardware, the optimization space of the system is limited, preventing improvements in performance and efficiency.
Therefore, there is a need for an efficient, QoS-aware virtual memory that lets the system adaptively and dynamically configure memory resources to maximize system performance and efficiency.
Disclosure of Invention
The invention aims to provide an efficient virtual memory architecture supporting QoS (Quality of Service) that achieves dynamic and efficient management of physical memory and effectively mitigates the above defects.
The purpose of the invention is realized by the following technical scheme:
an efficient, QoS-enabled virtual memory architecture, comprising: virtual blocks (VB) and a hardware microarchitecture, together with two matching metadata structures, PVM and ATM, wherein the PVM is the metadata structure of a process's virtual blocks and the ATM is the metadata structure for address translation, and wherein:
all processes reside in the same virtual address space, which is composed of a plurality of non-overlapping VBs; each VB is a contiguous segment of virtual address space used to store program code and data;
each process has one said PVM, which maintains the process's ownership of its VBs;
each said VB maintains one said ATM, which stores the QoS information of the corresponding VB;
the hardware microarchitecture performs address translation and, in combination with the ATM, manages the hybrid physical memory.
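To make the relationships among these structures concrete, here is a minimal Python sketch; all class and field names (vbuid, owned, qos, and so on) are illustrative assumptions, not identifiers from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class VB:
    """Virtual Block: a contiguous, globally unique slice of the
    single virtual address space."""
    vbuid: int   # globally unique VB number
    base: int    # start virtual address
    size: int    # capacity in bytes

@dataclass
class ATM:
    """Address Translation Metadata, one per VB."""
    qos: str                                      # "latency" / "bandwidth" / "insensitive"
    page_or_range_table: dict = field(default_factory=dict)

@dataclass
class PVM:
    """Process-VB Metadata: which VBs a process owns, with what rights."""
    owned: dict = field(default_factory=dict)     # vbuid -> permission string, e.g. "rw"

# One ATM per VB, one PVM per process:
vb = VB(vbuid=1, base=0x1000_0000, size=64 * 1024)
atm_for_vb = {vb.vbuid: ATM(qos="latency")}
pvm_of_process = PVM(owned={vb.vbuid: "rw"})
```

This is only a data-shape sketch; the patent's real structures are hardware tables, not Python objects.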
According to the technical scheme provided by the invention, all processes reside in the same virtual address space, composed of mutually non-overlapping VBs, so the homonym and synonym problems of VIVT caches are avoided; the PVM maintains each process's ownership of its VBs, eliminating the multiple address translations needed in a virtualized environment; and, based on the ATM and with the help of the real-time memory monitoring and data migration units, dynamic and efficient management of the physical memory is achieved.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an efficient QoS-supporting virtual memory architecture according to an embodiment of the present invention;
FIG. 2 is a diagram of the virtual address layout in RISC-V Sv48 mode according to an embodiment of the present invention;
FIG. 3 shows the PVM and VIT entry layouts of an embodiment of the present invention;
FIG. 4 is a schematic diagram of Range translation according to an embodiment of the present invention;
FIG. 5 is a Range table layout diagram of an embodiment of the present invention;
FIG. 6 is a general diagram of a virtual memory hardware architecture according to an embodiment of the present invention;
FIG. 7 is a TLB access flow diagram of an embodiment of the present invention;
FIG. 8 is a Range TLB micro-architectural diagram of an embodiment of the present invention;
FIG. 9 is a diagram of a memory controller microarchitecture supporting QoS in accordance with an embodiment of the present invention;
FIG. 10 is a flowchart illustrating dynamic memory data migration according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
An embodiment of the present invention provides an efficient virtual memory architecture supporting QoS, as shown in FIG. 1, which mainly includes: VB (Virtual Block) and a hardware microarchitecture, together with two matching metadata structures, PVM (Process-VB Metadata) and ATM (Address Translation Metadata), wherein:
all processes reside in the same virtual address space, composed of a plurality of non-overlapping VBs; each VB is a contiguous segment of virtual address space storing program code and data. Each process has one PVM, which maintains the binding between the process and its VBs and controls memory access permissions. Each VB maintains one ATM, which stores the VB's QoS information. The hardware microarchitecture performs address translation and, combined with the ATM, manages the hybrid physical memory.
When accessing memory, the system first passes the PVM permission check, and then adaptively configures hardware resources according to the QoS information in the ATM and the current memory bandwidth utilization, so as to maximize performance and efficiency.
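A hedged sketch of this check-then-configure order; the permission encoding and the queue names are our assumptions, not the patent's:

```python
def access(pvm_owned, atm_qos, vbuid, want_write):
    """First the PVM permission check, then pick a request queue
    from the VB's QoS class (illustrative policy)."""
    perm = pvm_owned.get(vbuid)
    if perm is None or (want_write and "w" not in perm):
        raise PermissionError("PVM check failed for VB %d" % vbuid)
    qos = atm_qos.get(vbuid, "insensitive")
    # latency-sensitive requests go to the high-priority queue
    return "high" if qos == "latency" else "low"

assert access({7: "rw"}, {7: "latency"}, 7, True) == "high"
```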
Further, each VB is globally unique, and VBs do not overlap one another.
Further, the PVM stores information about the VBs a process owns and the process's access rights to them; the VB information includes the globally unique VB number and the VB capacity.
Further, the ATM comprises a VIT (VB Info Table) and a Page or Range Table; the VIT stores the QoS information of the corresponding VB.
Further, the hardware microarchitecture includes a PVM Cache (a cache for process-VB metadata) and a QoS-enabled memory controller. The memory controller performs address translation and memory management, and includes an ATM Cache (a cache for address translation metadata), a real-time memory monitoring unit, and a data migration unit. On a virtual memory access, the VB's QoS information is obtained from the ATM Cache; data is then dynamically allocated and migrated based on the access requests handled by the data migration unit and information such as the memory bandwidth utilization collected by the real-time monitoring unit.
In the above solution, all processes reside in the same virtual address space, composed of mutually non-overlapping VBs, which avoids the homonym and synonym problems of VIVT (Virtually Indexed, Virtually Tagged) caches. The binding between a process and its VBs is maintained by the PVM, eliminating the multiple address translations required in a virtualized environment. In addition, access permissions are separated from address translation: memory access permissions are removed from the traditional page table entry and maintained in the PVM, while the Page or Range Table and the VB QoS information are maintained in the ATM, with address translation implemented in the memory controller. To improve the TLB (Translation Lookaside Buffer) hit rate, a Range Table and a Range TLB are added alongside the page table. Finally, to improve the timeliness of dynamic memory management, a PVM Cache and an ATM Cache are introduced into the hardware microarchitecture, where the ATM Cache comprises a VIT Cache and the TLB.
For ease of understanding, the following detailed description is directed to various portions of the virtual memory architecture and the principles of operation.
As shown in FIG. 1, mutually non-overlapping VBs (e.g., VB1 to VB4 in FIG. 1) form the Single Virtual Address Space in which all processes reside. Each process corresponds to one PVM (P1, P2, …, Pn in FIG. 1), which stores information about the VBs the process owns and its access rights to them. VB resources are managed by ownership: a process holds ownership of its VBs, and ownership of a VB can be transferred or shared between processes. In addition, the operating system maintains one ATM per VB, which stores the VB's QoS information (such as latency and bandwidth sensitivity); based on this QoS information, and with the help of the real-time memory monitoring and data migration units, dynamic and efficient management of the hybrid physical memory is achieved. Note that the number of VBs shown in FIG. 1 is only an example, not a limitation; in practice the number of VBs can be set as needed, and the number of PVMs likewise varies with the actual situation.
As shown in FIG. 2, the single virtual address space is 64 bits; the RISC-V Sv48 page table mode is used on a single machine, and the Sv64 mode in a distributed system or cluster. Depending on the application scenario and requirements, the capacity of a VB can be any of 4 KB, 8 KB, 16 KB, …, up to 128 TB, and each VB has a globally unique number (VB Unique ID, VBUID for short). Because the VBUID is globally unique, homonyms in VIVT caches are impossible; and because VBs do not overlap, synonyms are impossible as well. This design clears the barrier to the practical adoption of VIVT caches.
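If, for illustration, the upper bits of the virtual address carry the VBUID and the lower bits the offset within the VB (the exact bit split below is our assumption; the patent only fixes the Sv48/Sv64 modes), composing and splitting such an address looks like:

```python
OFFSET_BITS = 34  # assumed split: low 34 bits = offset within the VB

def make_va(vbuid, offset):
    """Form a virtual address from a globally unique VBUID and an
    intra-VB offset (bit split is illustrative)."""
    assert 0 <= offset < (1 << OFFSET_BITS)
    return (vbuid << OFFSET_BITS) | offset

def split_va(va):
    """Recover (VBUID, offset) from a virtual address."""
    return va >> OFFSET_BITS, va & ((1 << OFFSET_BITS) - 1)

va = make_va(5, 0x1234)
assert split_va(va) == (5, 0x1234)
```

Because the VBUID participates in the address, two processes can never alias the same virtual line differently, which is what makes the VIVT caches safe.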
As shown in FIG. 3, each process has a PVM storing the VBs it owns and its access rights; FIG. 3(a) shows the PVM entry layout. A process and a VB can be dynamically bound and unbound, and the binding/unbinding principle is the same whether the process runs on the host or in a virtual machine. With this design, each memory access requires at most one address translation even in a virtualized environment, solving the problem of multiple address translations in traditional systems. In addition, the operating system maintains one ATM per VB. The ATM consists of two parts: the VIT and the Page or Range Table. FIG. 3(b) shows the VIT entry layout; in a VIT entry, the Props field stores the VB's QoS information (e.g., latency and bandwidth sensitivity).
As shown in FIG. 4, a contiguous virtual address range [BASE, LIMIT] is mapped to the contiguous physical address range [BASE+OFFSET, LIMIT+OFFSET], with memory addresses aligned to the page size. Compared with a fine-grained page table, a Range Table entry covers a much larger memory space, improving the TLB hit rate.
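The range translation itself reduces to an interval check plus an offset add; a small sketch (the entry layout as a (base, limit, offset) tuple is our simplification):

```python
def range_translate(ranges, va):
    """Translate va using [BASE, LIMIT] -> +OFFSET range entries.
    Returns the physical address, or None if no range covers va."""
    for base, limit, offset in ranges:
        if base <= va <= limit:
            return va + offset
    return None

ranges = [(0x10000, 0x1FFFF, 0x7000_0000)]
assert range_translate(ranges, 0x12345) == 0x12345 + 0x7000_0000
assert range_translate(ranges, 0x30000) is None
```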
As shown in FIG. 5, a 4-level B-tree implements the Range Table; with a tree depth of 3, the table can store up to 124 RTEs (Range Table Entries), which satisfies most applications. The advantages of this design are the logarithmic time complexity of lookups and the good data locality of B-trees, which yields high cache access efficiency.
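As a stand-in for the 4-level B-tree (which we do not reproduce here), a sorted array searched by bisection gives the same logarithmic-time lookup over RTEs:

```python
import bisect

class RangeTable:
    """Sorted-array stand-in for the patent's B-tree of RTEs; both
    offer logarithmic-time lookup keyed on the range base address."""
    def __init__(self):
        self.bases, self.entries = [], []  # parallel arrays, sorted by base

    def insert(self, base, limit, offset):
        i = bisect.bisect_left(self.bases, base)
        self.bases.insert(i, base)
        self.entries.insert(i, (base, limit, offset))

    def lookup(self, va):
        # rightmost entry whose base <= va, then check its limit
        i = bisect.bisect_right(self.bases, va) - 1
        if i >= 0:
            base, limit, offset = self.entries[i]
            if va <= limit:
                return va + offset
        return None

rt = RangeTable()
rt.insert(0x10000, 0x1FFFF, 0x500000)
assert rt.lookup(0x10004) == 0x510004
```

A real B-tree additionally keeps wide nodes for cache-line locality; the access pattern, not the node shape, is what this sketch models.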
As shown in FIG. 6, after a VB is successfully requested from the operating system, the PVM Cache is accessed using an address composed of an index and an offset, where the index locates the instruction's VB within the PVM and the offset is the instruction's offset within that VB. After the PVM check succeeds, a virtual address composed of the VBUID and the offset is generated, which is then used to access the L1 cache, L2 cache, and LLC (all VIVT caches). Thanks to the VIVT design, address translation is performed only on an LLC miss: the VIT in the ATM is accessed first according to the virtual address, then the Page or Range Table is accessed via its Ptr2Page or Ptr2Range pointer, translating the virtual address into a physical address. Finally, to reduce the access latency of the PVM and ATM, the PVM Cache and ATM Cache are introduced into the hardware microarchitecture to cache the PVM and ATM respectively; the ATM Cache comprises the VIT Cache and the TLB.
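The overall flow — PVM check, VIVT cache lookup by virtual address, translation only on an LLC miss — can be sketched as follows; the dictionary-based structures and the 4 KB page size are simplifying assumptions:

```python
def memory_access(pvm_owned, vit, llc, vbuid, offset):
    """End-to-end access sketch following FIG. 6:
    PVM check -> VIVT lookup keyed by (VBUID, offset) ->
    translate via the VIT's table pointer only on an LLC miss."""
    if vbuid not in pvm_owned:
        raise PermissionError("process does not own VB %d" % vbuid)
    va = (vbuid, offset)
    if va in llc:                       # VIVT: cache indexed by virtual address
        return llc[va]
    page_table = vit[vbuid]["table"]    # Ptr2Page / Ptr2Range analogue
    pa = page_table[offset // 4096] * 4096 + offset % 4096
    llc[va] = pa                        # fill the cache (PA stands in for data)
    return pa

llc = {}
vit = {3: {"table": {0: 0x80}}}
assert memory_access({3: "rw"}, vit, llc, 3, 0x10) == 0x80 * 4096 + 0x10
```

Note that the second access to the same (VBUID, offset) hits the cache and skips translation entirely, which is the latency win of the VIVT design.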
As shown in FIG. 7, the L1 TLB is accessed first; on an L1 TLB miss, the L2 TLB and the Range TLB are accessed in parallel.
As shown in FIG. 8, the Range TLB is a 32-entry fully associative structure occupying a small chip area. The most recently used entry is stored in the MRU (Most Recently Used) register and checked first. On an MRU miss, the page number of the virtual address is compared against the [BASE, LIMIT] of each entry; on a hit, the corresponding OFFSET is read, and the physical address is finally generated from the physical frame number (PFN) and the offset.
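A minimal model of such a fully associative Range TLB with MRU promotion; the deque head stands in for the MRU register, and the eviction/fill policy is our assumption:

```python
from collections import deque

class RangeTLB:
    """32-entry fully associative Range TLB sketch. Entries are
    (base, limit, offset) over page numbers; hits are promoted to
    the front, which plays the role of the MRU entry checked first."""
    def __init__(self, capacity=32):
        self.entries = deque(maxlen=capacity)

    def fill(self, base, limit, offset):
        self.entries.appendleft((base, limit, offset))

    def lookup(self, vpn):
        for i, (base, limit, offset) in enumerate(self.entries):
            if base <= vpn <= limit:
                ent = self.entries[i]
                del self.entries[i]
                self.entries.appendleft(ent)  # promote to MRU
                return vpn + offset           # PFN = VPN + OFFSET
        return None

tlb = RangeTLB()
tlb.fill(0x100, 0x1FF, 0x1000)
assert tlb.lookup(0x150) == 0x1150
```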
As shown in FIG. 9, the hybrid physical memory in the hardware microarchitecture consists of MEM0 and MEM1: MEM0 is low-latency, high-bandwidth DRAM (e.g., 3D-DRAM), and MEM1 is low-cost, high-capacity DDR-DRAM or Persistent Memory. The Props field in the VIT divides data into three QoS classes, with priority from high to low: latency-sensitive, bandwidth-sensitive, and insensitive (data with no identified QoS requirement is called insensitive). Frequently accessed data is hot, infrequently accessed data is cold, and hot data has priority over cold data. The overall priority is therefore: latency-sensitive hot data > latency-sensitive cold data > bandwidth-sensitive hot data > bandwidth-sensitive cold data > insensitive hot data > insensitive cold data. In the data migration unit (Migration Controller): the Remap Table records the mapping from old to new physical addresses for memory blocks under migration; the Migration Buffer caches the page data being migrated; and the Wait Queue holds read/write requests that target a page under migration. On a memory access, the VIT Cache within the ATM Cache is accessed first and the virtual address is translated into a physical address. If the physical address falls within a block under migration, the Remap Table is queried; on a hit, the Migration Buffer is queried, and on a miss, the request is placed in the Wait Queue. If the physical address is not in a block under migration, the request is dispatched to the corresponding queue according to its Props, and the MEM0 and MEM1 controllers finally process the read/write requests.
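The six-level priority order can be encoded as a single comparable rank; the class names and the numeric encoding below are our own, chosen only to reproduce the ordering stated above:

```python
QOS_RANK = {"latency": 0, "bandwidth": 1, "insensitive": 2}

def priority(qos, hot):
    """Smaller value = higher priority. Encodes:
    latency-hot > latency-cold > bandwidth-hot > bandwidth-cold
    > insensitive-hot > insensitive-cold."""
    return QOS_RANK[qos] * 2 + (0 if hot else 1)

order = sorted(
    [("insensitive", False), ("latency", True),
     ("bandwidth", True), ("latency", False)],
    key=lambda p: priority(*p))
assert order[0] == ("latency", True)       # highest priority
assert order[-1] == ("insensitive", False) # lowest priority
```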
In addition, the Monitoring Unit samples the memory bandwidth utilization periodically, and the processor dynamically allocates and migrates data according to the utilization, so as to maximize performance and efficiency.
As shown in FIG. 10, since memory bandwidth utilization and latency are positively correlated, latency can be gauged by using bandwidth utilization as the monitored parameter. Taking the medium-utilization threshold K1 and the high-utilization threshold K2 as boundaries (the specific values of K1 and K2 can be set from practice or experience), the physical memory is divided into three states: LMU (Low Memory Utilization), HMU (High Memory Utilization), and Congestion. The Monitoring Unit tracks the bandwidth utilization of the physical memory in real time, and the system adaptively allocates and migrates data accordingly to maximize performance and efficiency. The main flow is as follows: the Monitoring Unit monitors the bandwidth utilization of MEM0 and MEM1 in real time, then runs the allocation or migration sub-flow as appropriate. 1) Allocation: when an application requests memory, check whether MEM0 is in the LMU state and has free space; if so, allocate the space in MEM0, otherwise allocate it in MEM1. 2) Migration: check whether MEM0 is in the LMU state; if so, migrate the highest-priority memory block in MEM1 to MEM0; if not, swap the lowest-priority block in MEM0 with the highest-priority block in MEM1. Then check whether MEM0 is in the HMU state while MEM1 is in the LMU state; if so, migrate the lowest-priority block in MEM0 to MEM1.
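The threshold classification and the allocation sub-flow can be sketched as follows; the K1/K2 values 0.5 and 0.8 are placeholders, since the patent leaves the thresholds to be tuned:

```python
def mem_state(util, k1=0.5, k2=0.8):
    """Classify bandwidth utilization into LMU / HMU / Congestion.
    k1 and k2 correspond to the K1/K2 boundaries; values are placeholders."""
    if util < k1:
        return "LMU"
    if util < k2:
        return "HMU"
    return "Congestion"

def choose_alloc(mem0_util, mem0_has_free):
    """Allocation sub-flow: prefer fast MEM0 only when it is lightly
    loaded (LMU) and has free space; otherwise fall back to MEM1."""
    if mem_state(mem0_util) == "LMU" and mem0_has_free:
        return "MEM0"
    return "MEM1"

assert choose_alloc(0.2, True) == "MEM0"
assert choose_alloc(0.9, True) == "MEM1"
```

The migration sub-flow would use the same `mem_state` classification on both memories, moving blocks between them by the priority order defined earlier.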
The above description covers only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto. Any change or substitution that can readily occur to those skilled in the art within the technical scope disclosed by the present invention falls within the scope of the present invention; the protection scope of the present invention shall therefore be subject to the protection scope of the claims.
Claims (5)
1. An efficient, QoS-enabled virtual memory architecture, comprising: virtual blocks (VB) and a hardware microarchitecture, together with two matching metadata structures, PVM and ATM, wherein the PVM is the metadata structure of a process's virtual blocks and the ATM is the metadata structure for address translation, and wherein:
all processes reside in the same virtual address space, which is composed of a plurality of non-overlapping VBs, each VB being a contiguous segment of virtual address space used to store program code and data;
each process has one said PVM, which maintains the process's ownership of its VBs;
each said VB maintains one said ATM, which stores the QoS information of the corresponding VB;
the hardware microarchitecture performs address translation and, in combination with the ATM, manages the hybrid physical memory.
2. The efficient, QoS-enabled virtual memory architecture of claim 1, wherein each VB has a globally unique number, the VBUID.
3. The efficient, QoS-enabled virtual memory architecture of claim 1, wherein the PVM stores information about the VBs a process owns and the process's access rights to those VBs, the VB information comprising the globally unique VB number and the VB capacity.
4. The efficient, QoS-enabled virtual memory architecture of claim 1, wherein the ATM comprises a VB Info Table (VIT) and a Page or Range Table, the VIT storing the QoS information of the corresponding VB.
5. The efficient, QoS-enabled virtual memory architecture of claim 1, wherein the hardware microarchitecture comprises a PVM Cache and a QoS-enabled memory controller; the memory controller performs address translation and comprises an ATM Cache, a real-time memory monitoring unit, and a data migration unit; and wherein, on a virtual memory access, the VB's QoS information is obtained from the ATM Cache, access requests are processed in combination with the data migration unit, and data is dynamically allocated and migrated according to the memory bandwidth utilization collected by the real-time monitoring unit.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110285644.XA CN113010452B (en) | 2021-03-17 | 2021-03-17 | Efficient virtual memory architecture supporting QoS |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113010452A true CN113010452A (en) | 2021-06-22 |
CN113010452B CN113010452B (en) | 2023-11-28 |
Family
ID=76409169
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110285644.XA Active CN113010452B (en) | 2021-03-17 | 2021-03-17 | Efficient virtual memory architecture supporting QoS |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113010452B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113703690A (en) * | 2021-10-28 | 2021-11-26 | 北京微核芯科技有限公司 | Processor unit, method for accessing memory, computer mainboard and computer system |
WO2024067018A1 (en) * | 2022-09-28 | 2024-04-04 | 华为技术有限公司 | Memory management method and related device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140108701A1 (en) * | 2010-07-16 | 2014-04-17 | Memory Technologies Llc | Memory protection unit in a virtual processing environment |
WO2019237791A1 (en) * | 2018-06-12 | 2019-12-19 | 华为技术有限公司 | Virtualized cache implementation method and physical machine |
CN110688330A (en) * | 2019-09-23 | 2020-01-14 | 北京航空航天大学 | Virtual memory address translation method based on memory mapping adjacency |
CN110869913A (en) * | 2017-07-14 | 2020-03-06 | Arm有限公司 | Memory system for data processing network |
Non-Patent Citations (2)
Title |
---|
Cai Wanwei; Tai Yunfang; Liu Qi; Zhang Ge: "Research on memory virtualization based on the MIPS architecture", Journal of Computer Research and Development, no. 10 *
Huang Tao; Wang Jing; Guan Xuetao; Zhong Qi; Wang Keyi: "Energy-efficient last-level cache design with hardware-software cooperation and region-partitioned management", Journal of Computer-Aided Design & Computer Graphics, no. 11 *
Also Published As
Publication number | Publication date |
---|---|
CN113010452B (en) | 2023-11-28 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||