WO2018077219A1 - Memory management method and system - Google Patents

Memory management method and system

Info

Publication number
WO2018077219A1
Authority
WO
WIPO (PCT)
Prior art keywords
address space
user
user process
page directory
page
Prior art date
Application number
PCT/CN2017/107852
Other languages
English (en)
French (fr)
Inventor
李小庆
Original Assignee
深圳创维数字技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳创维数字技术有限公司
Publication of WO2018077219A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/10 Address translation
    • G06F 12/1009 Address translation using page tables, e.g. page table structures

Definitions

  • the present disclosure relates to the field of communication technologies, and, for example, to a memory management method and system.
  • MMU Memory Management Unit
  • the memory management unit converts a linear address into a physical address by dividing the linear address into three parts.
  • the Page Directory Base Register (PDBR) is used to store the physical address of the page directory.
  • the PDBR contains the base address of the memory page of the page directory.
  • the page directory base address in the PDBR and the directory pointer part of the linear address are used to select a page directory entry within the page directory.
  • the corresponding page table is selected according to the page directory entry, and the page table contains a plurality of page table entries.
  • the physical address corresponding to the linear address is obtained according to the page frame of the paging table entry and the offset of the linear address.
  • the page directory size is set to 4 kb, and each process in the system requires a page directory.
  • the number of processes in the system is generally large, and the page directory will occupy more memory, resulting in wasted memory.
  • the Central Processing Unit CPU performs process switching by switching the page directory in the PDBR register, and modifying the contents of the PDBR may cause cache invalidation and affect system performance.
  • the system software needs to selectively clear the invalid content in the translation lookaside buffer (TLB) cache while keeping the valid content unchanged, and because different processes each contain a duplicate of the same system address space, this may cause system address space synchronization problems.
  • TLB Translation Lookaside Buffer
  • the present disclosure provides a memory management method and system, which can solve the problem that the page directory of the process occupies more memory and causes memory waste.
  • the present disclosure provides a memory management method, which can be applied to a terminal device having a memory management unit MMU, the MMU being provided with a first register and a second register, the method comprising:
  • the dividing the linear address space into the system address space and the user address space includes:
  • the 4 GB linear address space is divided into a system address space and a user address space, wherein 0-2 GB is the user address space, and 3-4 GB is the system address space.
  • the method further includes:
  • the page table of the second user process is stored in the TLB.
  • the TLB is provided with a first interface for clearing a page table of the user process; and clearing the page table of the first user process in the TLB includes: clearing, through the first interface, the page table of the first user process in the TLB.
  • the present disclosure also provides a memory management system that can be applied to a terminal device having a memory management unit MMU, the MMU being provided with a first register and a second register, the system comprising:
  • a dividing module configured to divide the linear address space into a system address space and a user address space, store a page directory of the system address space in a memory, and a page directory of a user address space of at least one user process;
  • a first write module configured to write a physical address of a page directory of the system address space to the first register
  • a second write module configured to: when the first user process is started, write a physical address of a page directory of a user address space corresponding to the first user process to the second register;
  • a page directory obtaining module configured to acquire a page directory of the system address space according to the physical address of the page directory of the system address space in the first register, and to acquire a page directory of the user address space of the first user process according to the physical address of the page directory of the user address space of the first user process in the second register;
  • a storage module configured to acquire a page table of the first user process according to the page directory of the system address space and the page directory of the user address space of the first user process, and to store the page table of the first user process in the translation lookaside buffer (TLB) of the MMU;
  • an access module configured to determine, according to the page table of the first user process in the TLB, the physical address of the memory unit accessed by the first user process, so that the first user process accesses the corresponding memory unit according to the physical address of the memory unit.
  • the dividing module is set to:
  • the 4 GB linear address space is divided into a system address space and a user address space, wherein 0-2 GB is the user address space, and 3-4 GB is the system address space.
  • system further includes:
  • a clearing module configured to, after the page table of the first user process is stored in the TLB of the MMU, clear the page table of the first user process in the TLB when the first user process is switched to a second user process;
  • a modifying module configured to modify a physical address of a page directory of a user address space of the first user process in the second register to a physical address of a page directory of a user address space of the second user process;
  • the page directory obtaining module is further configured to acquire a page directory of the user address space of the second user process according to the physical address of the page directory of the user address space of the second user process in the second register;
  • the storage module is further configured to acquire a page table of the second user process according to the page directory of the system address space and the page directory of the user address space of the second user process, and to store the page table of the second user process in the TLB.
  • the TLB is provided with a first interface for clearing a cache of a user address space
  • the emptying module is configured to: clear, by the first interface, a page table of the first user process in the TLB.
  • the present disclosure also provides a computer readable storage medium storing computer executable instructions for performing any of the methods described above.
  • the present disclosure also provides a terminal device including one or more processors, a memory, and one or more programs, the one or more programs being stored in the memory and, when executed by the one or more processors, performing any of the methods described above.
  • the present disclosure also provides a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to perform any of the methods described above.
  • the memory management method and system provided by the present disclosure add, on the MMU, a second register for storing the physical address of the system address space page directory, so that only a page directory of a user address space needs to be allocated for each user process; the system address space page directory is obtained according to the physical address stored in that register, and the page table of each process is obtained according to the page directory of the system address space and the page directory of the user address space of each process, thereby reducing the memory occupied by page directories in the system.
  • FIG. 1 is a schematic diagram of the working principle of an MMU in the related art.
  • FIG. 2 is a schematic flow chart of a memory management method provided by an embodiment.
  • FIG. 3 is a schematic diagram of a workflow of an MMU in a memory management method according to an embodiment.
  • FIG. 4 is a schematic structural diagram of a memory management system according to an embodiment.
  • FIG. 5 is a schematic structural diagram of another memory management system according to an embodiment.
  • FIG. 6 is a schematic structural diagram of hardware of a terminal device according to an embodiment.
  • terms such as "module", "component", or "unit" are used for the purpose of explaining the present disclosure, and "module", "component", and "unit" may be used interchangeably.
  • the terminal device may take various forms; for example, the terminal described in the present disclosure may include mobile terminals such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (Personal Digital Assistant), a PAD (tablet), a PMP (Portable Multimedia Player), and a navigation device, as well as fixed terminals such as a digital TV and a desktop computer.
  • PDA Personal Digital Assistant
  • PAD Tablet
  • PMP Portable Multimedia Playback
  • the present disclosure can be applied to a terminal device having a memory management unit MMU, which divides a linear address space into a user address space and a system address space; there is only one system address space in the system, and each process configures a user address space.
  • MMU memory management unit
  • the physical address of the process can be determined according to the user address space and the unique system address space, thereby avoiding memory waste caused by storing page directories of multiple system address spaces, and also avoiding Synchronization issues between multiple system address spaces.
  • the cache TLB of the MMU may be provided with a first interface for clearing the cache of the user address space, a second interface for clearing the cache of the system address space, and a third interface for clearing the page cache; when a process switch occurs, only the cache of the user address space needs to be cleared, improving the speed of process switching.
  • FIG. 2 is a flowchart of a memory management method according to an embodiment
  • FIG. 3 is a working flowchart of an MMU in a memory management method according to an embodiment.
  • the method can be applied to a terminal device having a memory management unit MMU, and the MMU can be provided with a first register and a second register, and the method can include the following steps:
  • step 100 the linear address space is divided into a system address space and a user address space, a page directory of the system address space and a page directory of the user address space of at least one user process are stored in the memory, and the physical address of the page directory of the system address space is written to the first register.
  • the linear address space refers to a range of values of the linear address.
  • the range of the linear address can be divided into a range of values of the system address and a range of values of the user address.
  • the page directory of the linear address space of a process can be divided into two page directories, namely a page directory of the user address space and a page directory of the system address space, and the page directory of the system address space of all processes is set to the same one, that is, only one page directory of the system address space is stored.
  • the linear address of a 32-bit hardware platform can range from 0x00000000-0xFFFFFFFF, or 4 GB, and each user process has a 4 GB linear address space.
  • the 4G linear address space can be equally divided, wherein 0-2G is used as the user address space and 3-4G is used as the system address space.
  • the system only needs to maintain a page directory of one system address space and n page directories of n user address spaces.
  • the system usually allocates a 4 kb page directory for each process, which includes a page directory of the user address space and a page directory of the system address space.
  • each process can include a 2 kb user address space page directory and a 2 kb system address space page directory, and the page directory of the system address space is unique; therefore, only one page directory of the system address space and the page directory of each process's user address space need to be stored.
  • the operating system can use 2 kb + 2 kb × n of memory to save the page directories, whereas in the related art the operating system usually stores all the page directories of every process and needs 4 kb × n of memory to save the page directories.
  • the method provided in this embodiment can save memory of 2 kb ⁇ (n-1) size, thereby reducing memory usage.
  • the physical address of the page directory of the system address space may be written into the first register when the system is booted, and the physical address of the page directory in the first register is not modified during system operation. That is, the first register is only used to store the physical address of the page directory of the system address space, the system has only the physical address of the unique system address space page directory, and correspondingly, only the page directory of the unique system address space is stored in the memory.
  • step 200 when the first user process is started, the physical address of the page directory of the user address space corresponding to the first user process is written into the second register.
  • the physical address of the page directory of the user address space corresponding to the first user process may be written into the second register.
  • the first user process does not need to write the physical address of the page directory of the system address space into the first register, but directly uses the physical address of the page directory of the system address space stored in the first register as the physical address of the page directory of the system address space of the first user process.
  • step 300 a page directory of the system address space is obtained according to the physical address of the page directory of the system address space in the first register, and a page directory of the user address space of the first user process is obtained according to the physical address of the page directory of the user address space of the first user process in the second register.
  • the page directory stores the physical address of the page table
  • the MMU can determine the page directory of the system address space according to the physical address stored in the first register, and determine the user address space of the first user process according to the physical address stored in the second register. Page directory.
  • step 400 the page table of the first user process is obtained according to the page directory of the system address space and the page directory of the user address space of the first user process, and the page table of the first user process is stored in the translation lookaside buffer (TLB) of the MMU.
  • step 500 the physical address of the memory unit accessed by the first user process is determined according to the page table of the first user process in the TLB, so that the first user process accesses the corresponding memory unit according to the physical address of the memory unit.
  • the MMU determines the page table of the first user process according to the page directory of the system address space and the page directory of the first user process, and stores the page table of the first user process in the TLB, so that the first user process can obtain the physical address of the memory unit to be accessed from its linear address and access the memory unit.
  • the memory address in an instruction is a logical address, which needs to be converted into a linear address and then converted by the MMU into a physical address before the memory unit corresponding to the memory address can be accessed.
  • paging is a memory management mechanism provided by the processor, and the system can implement memory management according to the paging mechanism. Paging refers to dividing the memory into multiple units; each unit is a page, and each page can contain 4 kB of address space. To convert a linear address into a physical address, the CPU can be provided with a lookup table that maps the linear addresses of the current process to physical addresses, that is, the page table.
  • the linear address can be converted into a physical address through a two-level lookup of the page directory and the page table, where each process has its own page directory and page tables; each page directory entry (PDE) in the page directory indicates the physical address of a page table, and each page table entry (PTE) in the page table indicates the physical address of a physical page.
  • the system in this embodiment has only one system address space, and configures a user address space for each process.
  • the page directory corresponding to the unique system address space and the page directory corresponding to the user address space of the current process are obtained, the page table of the current process is thereby determined, and the physical address of the memory unit to be accessed by the current process is obtained, so that the current process accesses the corresponding memory unit through the physical address; only the page directory of the unique system address space needs to be stored in the memory.
  • this avoids the memory waste caused by storing the page directories of multiple system address spaces, and also avoids the problem of synchronizing the system address space between multiple processes.
  • the page table is stored in the TLB, so that while the process runs, the physical address corresponding to the linear address of the current process can be obtained directly from the TLB without separately accessing the page table, thereby improving system performance.
  • a first interface for clearing a cache of the user address space, a second interface for clearing a cache of the system address space, and a third interface for clearing the page cache may be disposed in the TLB.
  • the cache of the system address space, the cache of the user address space, and the page cache stored in the TLB can be separately cleared.
  • only the cache of the user address space can be cleared, and the cache of the system address space is reserved, which can ensure the synchronization of the system address space between multiple processes, thereby improving the efficiency of process switching.
  • the method may further include: when the first user process is switched to a second user process, clearing the page table of the first user process in the TLB; modifying the physical address of the page directory of the user address space of the first user process in the second register to the physical address of the page directory of the user address space of the second user process; acquiring a page directory of the user address space of the second user process according to the physical address of the page directory of the user address space of the second user process in the second register; acquiring a page table of the second user process according to the page directory of the system address space and the page directory of the user address space of the second user process; and storing the page table of the second user process in the TLB.
  • when a process runs in the user address space, the page directory pointed to by the second register is used; when it runs in the system address space, the page directory pointed to by the first register is used. When a process modifies the system address space, the physical address of the page directory of the system address space stored in the first register is changed accordingly.
  • when the system switches to a new process, the physical address of the page directory of the system address space that the new process reads from the first register has also changed, so that the modification of the system address space by one process can be consistently presented to other processes through the first register.
  • when the system performs a process switch, only the physical address of the page directory in the second register needs to be modified, which can improve the efficiency of process switching.
  • the linear space is divided into a system address space and a user address space, and a first register for storing the system address space and a second register for storing the user address space are set on the MMU; when the system is started, the physical address of the page directory of the system address space is stored in the first register, and when the system starts the first user process, only the physical address of the page directory of the user address space of the first user process needs to be written into the second register.
  • when the process runs in the user address space, it uses the page directory pointed to by the second register; when running in the system address space, it uses the page directory pointed to by the first register.
  • when the system address space needs to be modified, the modification can be consistently presented to other processes through the first register, which solves the problem of synchronizing the system address space between different processes.
  • An embodiment further provides a memory management system.
  • the system can be applied to a terminal device having a memory management unit MMU, and the MMU can be provided with a first register and a second register, and the system can include:
  • the dividing module 100 is configured to divide the linear address space into a system address space and a user address space, store a page directory of the system address space in a memory, and a page directory of a user address space of at least one user process;
  • the first writing module 200 is configured to write a physical address of a page directory of the system address space into the first register;
  • the second writing module 300 is configured to, when the first user process is started, write the physical address of the page directory of the user address space corresponding to the first user process into the second register;
  • a page directory obtaining module 400 configured to acquire a page directory of the system address space according to the physical address of the page directory of the system address space in the first register, and to acquire a page directory of the user address space of the first user process according to the physical address of the page directory of the user address space of the first user process in the second register;
  • the storage module 500 is configured to acquire a page table of the first user process according to the page directory of the system address space and the page directory of the user address space of the first user process, and to store the page table of the first user process in the translation lookaside buffer (TLB) of the MMU;
  • the access module 600 is configured to determine, according to the page table of the first user process in the TLB, the physical address of the memory unit accessed by the first user process, so that the first user process accesses the corresponding memory unit according to the physical address of the memory unit.
  • the first register is configured to store the physical address of a page directory of the system address space, and the second register is configured to store the physical address of a page directory of the user address space.
  • the dividing module 100 is configured to:
  • the 4 GB linear address space is divided into a system address space and a user address space, wherein 0-2 GB is the user address space, and 3-4 GB is the system address space.
  • the memory management system further includes:
  • the clearing module 700 is configured to: after the page table of the first user process is stored in the TLB of the MMU, when the first user process is switched to a second user process, clear the page table of the first user process in the TLB;
  • the modifying module 800 is configured to modify a physical address of a page directory of a user address space of the first user process in the second register to a physical address of a page directory of a user address space of the second user process;
  • the page directory obtaining module 400 is further configured to acquire, according to a physical address of a page directory of a user address space of the second user process in the second register, a page directory of a user address space of the second user process;
  • the storage module 500 is further configured to acquire a page table of the second user process according to the page directory of the system address space and the page directory of the user address space of the second user process, and to store the page table of the second user process in the TLB.
  • the TLB may be provided with a first interface for clearing a cache of a user address space, a second interface for clearing a cache of the system address space, and a third interface for clearing the page cache;
  • the clearing module 700 can be configured to: clear, by the first interface, a page table of the first user process in the TLB.
  • the division of modules in the above embodiments is a logical function division. In actual implementation, there may be multiple division manners, for example, multiple units or components may be combined or may be integrated into another system, or some features may be ignored or not implemented. .
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to implement the solution in the present disclosure.
  • FIG. 6 is a hardware structure diagram of a terminal device provided by this embodiment.
  • the terminal device includes a processor 610 and a memory 620, and may also include a communication interface (Communications Interface) 630 and a bus 640.
  • the processor 610, the memory 620, and the communication interface 630 can complete communication with each other through the bus 640.
  • Communication interface 630 can be used for information transmission.
  • Processor 610 can invoke logic instructions in memory 620 to perform any of the methods of the above-described embodiments.
  • the memory 620 can include a storage program area and a storage data area, and the storage program area can store an operating system and an application required for at least one function.
  • the storage data area can store data and the like created according to the use of the terminal device.
  • the memory may include, for example, a volatile memory such as a random access memory, and may also include a non-volatile memory, for example, at least one disk storage device, flash memory device, or other non-transitory solid-state storage device.
  • when the logic instructions in the memory 620 described above are implemented in the form of software functional units and sold or used as independent products, the logic instructions can be stored in a computer readable storage medium.
  • the technical solution of the present disclosure may be embodied in the form of a computer software product, which may be stored in a storage medium and includes a plurality of instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method described in this embodiment.
  • the storage medium may be a non-transitory storage medium or a transitory storage medium.
  • the non-transitory storage medium may include: a U disk, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program codes.
  • all or part of the processes in the foregoing embodiment may be completed by a computer program instructing related hardware; the program may be stored in a non-transitory computer readable storage medium, and when the program is executed, it may include the flow of the embodiments of the above method.
  • the present disclosure provides a memory management method and system, which can reduce the memory occupied by the page directory of a process, avoid memory waste, and improve memory utilization and the system's switching efficiency.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A memory management method and system, including: dividing a linear address space into a system address space and a user address space, storing, in memory, a page directory of the system address space and a page directory of the user address space of at least one user process, and writing the physical address of the page directory of the system address space into a first register (100); when a first user process is started, writing the physical address of the page directory of the user address space corresponding to the first user process into a second register (200); acquiring the page directory of the system address space according to the physical address of the page directory of the system address space in the first register, and acquiring the page directory of the user address space of the first user process according to the physical address of the page directory of the user address space of the first user process in the second register (300); acquiring a page table of the first user process according to the page directory of the system address space and the page directory of the user address space of the first user process, and storing the page table of the first user process in a translation lookaside buffer (TLB) of the MMU (400); and determining, according to the page table of the first user process in the TLB, the physical address of a memory unit accessed by the first user process, so that the first user process accesses the corresponding memory unit according to the physical address of the memory unit (500).

Description

Memory management method and system
Technical Field
The present disclosure relates to the field of communication technologies, for example, to a memory management method and system.
Background
Many terminals use virtual memory technology to run large programs with a small physical memory. This technology mainly uses a memory management unit (MMU) to map linear addresses to physical memory addresses; a linear address corresponds to the program code space, while the program code is actually stored in physical memory.
Referring to FIG. 1, the memory management unit (MMU) converts a linear address into a physical address as follows: the linear address is divided into three parts to complete the conversion from the linear address to the physical address. The page directory base register (PDBR) is used to store the physical address of the page directory and contains the base address of the memory page holding the page directory. Using the page directory base address in the PDBR and the "directory pointer" part of the linear address, a page directory entry is selected within the page directory. The corresponding page table is selected according to the page directory entry, and the page table contains a plurality of page table entries. The physical address corresponding to the linear address is obtained from the page frame of the page table entry and the offset part of the linear address.
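To make the related-art walk concrete, the following is a minimal C sketch of a software model of the 32-bit two-level translation described above (10-bit directory index, 10-bit table index, 12-bit page offset). The helper functions read_pdbr() and phys_read32(), the macro names, and the omission of present/permission bits are illustrative assumptions rather than details taken from the patent.

```c
#include <stdint.h>

#define PAGE_MASK     0xFFFu
#define FRAME(x)      ((x) & ~PAGE_MASK)          /* physical frame base held in an entry  */
#define PDE_INDEX(la) (((la) >> 22) & 0x3FFu)     /* top 10 bits: page directory index     */
#define PTE_INDEX(la) (((la) >> 12) & 0x3FFu)     /* middle 10 bits: page table index      */

/* Hypothetical helpers modelling hardware state; not named in the patent. */
extern uint32_t read_pdbr(void);                   /* physical base of the page directory  */
extern uint32_t phys_read32(uint32_t phys_addr);   /* read one 32-bit word of physical RAM */

/* Related-art translation: a single PDBR and one full page directory per process. */
uint32_t translate_related_art(uint32_t linear_addr)
{
    uint32_t pd_base = FRAME(read_pdbr());                                /* page directory base from the PDBR            */
    uint32_t pde = phys_read32(pd_base + PDE_INDEX(linear_addr) * 4u);    /* select the page directory entry              */
    uint32_t pte = phys_read32(FRAME(pde) + PTE_INDEX(linear_addr) * 4u); /* PDE points to a page table; select its entry */
    return FRAME(pte) | (linear_addr & PAGE_MASK);                        /* page frame plus linear-address offset        */
}
```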
When the MMU converts a linear address into a physical address, the size of the page directory is set to 4 kb, and every process in the system needs a page directory. Since the number of processes in a system is generally large, the page directories occupy a large amount of memory, causing memory waste. When the system performs a process switch, the central processing unit (CPU) performs the switch by switching the page directory in the PDBR register, and modifying the contents of the PDBR may invalidate the cache and affect system performance. In addition, after the page directory in the PDBR register is switched, the system software needs to selectively clear the invalid content in the translation lookaside buffer (TLB) cache while keeping the valid content unchanged, and because different processes each contain a duplicate of the same system address space, this may cause problems in synchronizing the system address space.
Summary
The present disclosure provides a memory management method and system, which can solve the problem that the page directories of processes occupy a large amount of memory and cause memory waste.
The present disclosure provides a memory management method, which can be applied to a terminal device having a memory management unit (MMU), the MMU being provided with a first register and a second register, the method including:
dividing a linear address space into a system address space and a user address space, storing, in memory, a page directory of the system address space and a page directory of the user address space of at least one user process, and writing the physical address of the page directory of the system address space into the first register;
when a first user process is started, writing the physical address of the page directory of the user address space corresponding to the first user process into the second register;
acquiring the page directory of the system address space according to the physical address of the page directory of the system address space in the first register, and acquiring the page directory of the user address space of the first user process according to the physical address of the page directory of the user address space of the first user process in the second register;
acquiring a page table of the first user process according to the page directory of the system address space and the page directory of the user address space of the first user process, and storing the page table of the first user process in a translation lookaside buffer (TLB) of the MMU;
determining, according to the page table of the first user process in the TLB, the physical address of a memory unit accessed by the first user process, so that the first user process accesses the corresponding memory unit according to the physical address of the memory unit.
Optionally, dividing the linear address space into a system address space and a user address space includes:
dividing the 4 GB linear address space equally into a system address space and a user address space, where 0-2 GB is the user address space and 3-4 GB is the system address space.
Optionally, after the page table of the first user process is stored in the TLB of the MMU, the method further includes:
when the first user process is switched to a second user process, clearing the page table of the first user process in the TLB;
modifying the physical address of the page directory of the user address space of the first user process in the second register to the physical address of the page directory of the user address space of the second user process;
acquiring the page directory of the user address space of the second user process according to the physical address of the page directory of the user address space of the second user process in the second register;
acquiring a page table of the second user process according to the page directory of the system address space and the page directory of the user address space of the second user process;
storing the page table of the second user process in the TLB.
Optionally, the TLB is provided with a first interface for clearing the page table of a user process, and clearing the page table of the first user process in the TLB includes:
clearing, through the first interface, the page table of the first user process in the TLB.
The present disclosure further provides a memory management system, which can be applied to a terminal device having a memory management unit (MMU), the MMU being provided with a first register and a second register, the system including:
a dividing module configured to divide a linear address space into a system address space and a user address space, and store, in memory, a page directory of the system address space and a page directory of the user address space of at least one user process;
a first writing module configured to write the physical address of the page directory of the system address space into the first register;
a second writing module configured to, when a first user process is started, write the physical address of the page directory of the user address space corresponding to the first user process into the second register;
a page directory obtaining module configured to acquire the page directory of the system address space according to the physical address of the page directory of the system address space in the first register, and acquire the page directory of the user address space of the first user process according to the physical address of the page directory of the user address space of the first user process in the second register;
a storage module configured to acquire a page table of the first user process according to the page directory of the system address space and the page directory of the user address space of the first user process, and store the page table of the first user process in a translation lookaside buffer (TLB) of the MMU;
an access module configured to determine, according to the page table of the first user process in the TLB, the physical address of a memory unit accessed by the first user process, so that the first user process accesses the corresponding memory unit according to the physical address of the memory unit.
Optionally, the dividing module is configured to:
divide the 4 GB linear address space equally into a system address space and a user address space, where 0-2 GB is the user address space and 3-4 GB is the system address space.
Optionally, the system further includes:
a clearing module configured to, after the page table of the first user process is stored in the TLB of the MMU, clear the page table of the first user process in the TLB when the first user process is switched to a second user process;
a modifying module configured to modify the physical address of the page directory of the user address space of the first user process in the second register to the physical address of the page directory of the user address space of the second user process;
the page directory obtaining module is further configured to acquire the page directory of the user address space of the second user process according to the physical address of the page directory of the user address space of the second user process in the second register;
the storage module is further configured to acquire a page table of the second user process according to the page directory of the system address space and the page directory of the user address space of the second user process, and to store the page table of the second user process in the TLB.
Optionally, the TLB is provided with a first interface for clearing the cache of the user address space;
the clearing module is configured to clear, through the first interface, the page table of the first user process in the TLB.
The present disclosure further provides a computer readable storage medium storing computer executable instructions, the computer executable instructions being used to perform any one of the methods described above.
The present disclosure further provides a terminal device including one or more processors, a memory, and one or more programs, the one or more programs being stored in the memory and, when executed by the one or more processors, performing any one of the methods described above.
The present disclosure further provides a computer program product including a computer program stored on a non-transitory computer readable storage medium, the computer program including program instructions that, when executed by a computer, cause the computer to perform any one of the methods described above.
In the memory management method and system provided by the present disclosure, by adding on the MMU a second register for storing the physical address of the page directory of the system address space, only a page directory of the user address space needs to be allocated for each user process; the page directory of the system address space is obtained according to the physical address stored in the second register, and the page table of each process is obtained according to the page directory of the system address space and the page directory of the user address space of each process, which reduces the memory occupied by page directories in the system and improves memory utilization.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the working principle of an MMU in the related art.
FIG. 2 is a schematic flowchart of a memory management method provided by an embodiment.
FIG. 3 is a schematic diagram of the workflow of an MMU in a memory management method provided by an embodiment.
FIG. 4 is a schematic structural diagram of a memory management system provided by an embodiment.
FIG. 5 is a schematic structural diagram of another memory management system provided by an embodiment.
FIG. 6 is a schematic diagram of the hardware structure of a terminal device provided by an embodiment.
Detailed Description
Terms such as "module", "component", or "unit" used in the present disclosure are intended to explain the present disclosure, and "module", "component", and "unit" may be used interchangeably.
The terminal device may take various forms. For example, the terminal described in the present disclosure may include mobile terminals such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), and a navigation device, as well as fixed terminals such as a digital TV and a desktop computer.
The present disclosure can be applied to a terminal device having a memory management unit (MMU). The terminal device divides the linear address space into a user address space and a system address space; there is only one system address space in the system, and each process is configured with one user address space. When a linear address is mapped to a physical address, the physical address for the process can be determined according to its user address space and the unique system address space, which avoids the memory waste caused by storing the page directories of multiple system address spaces and also avoids synchronization problems between multiple system address spaces. The cache (TLB) of the MMU may be provided with a first interface for clearing the cache of the user address space, a second interface for clearing the cache of the system address space, and a third interface for clearing the page cache; when a process switch occurs, only the cache of the user address space needs to be cleared, which improves the speed of process switching.
Referring to FIG. 2 and FIG. 3, FIG. 2 is a flowchart of a memory management method provided by an embodiment, and FIG. 3 is a working flowchart of the MMU in the memory management method provided by an embodiment. The method can be applied to a terminal device having a memory management unit (MMU), the MMU may be provided with a first register and a second register, and the method may include the following steps.
In step 100, the linear address space is divided into a system address space and a user address space, a page directory of the system address space and a page directory of the user address space of at least one user process are stored in memory, and the physical address of the page directory of the system address space is written into the first register.
The linear address space refers to the range of values of linear addresses. When the system is booted, the range of values of linear addresses can be divided into a range of values of system addresses and a range of values of user addresses. Correspondingly, the page directory of the linear address space of a process can be divided into two page directories, namely a page directory of the user address space and a page directory of the system address space, and the page directory of the system address space is set to the same one for all processes, that is, only one page directory of the system address space is stored.
For example, the range of linear addresses on a 32-bit hardware platform can be 0x00000000-0xFFFFFFFF, i.e., 4 GB, and every user process has a 4 GB linear address space. In this embodiment, the 4 GB linear address space can be divided equally, where 0-2 GB serves as the user address space and 3-4 GB serves as the system address space. The system then only needs to maintain one page directory for the system address space and n page directories for n user address spaces. The system usually allocates a 4 kb page directory for each process, and these 4 kb include the page directory of the user address space and the page directory of the system address space. After the page directory of a process is divided into a page directory of the user address space and a page directory of the system address space, each process contains one 2 kb user address space page directory and one 2 kb system address space page directory, and the page directory of the system address space is unique; therefore, only one page directory of the system address space and the page directory of each process's user address space need to be stored. In a system with n processes, the operating system can use 2 kb + 2 kb × n of memory to save the page directories, whereas in the related art the operating system usually stores all the page directories of every process and needs 4 kb × n of memory to save them. The method provided by this embodiment can therefore save 2 kb × (n-1) of memory, reducing memory usage.
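As a quick check of the arithmetic above, the hedged C sketch below compares the page directory memory used by the related-art layout (one full 4 kb directory per process) with the split layout of this embodiment (one shared 2 kb system-space directory plus one 2 kb user-space directory per process). The function names and the sample process count are illustrative only.

```c
#include <stdio.h>

#define KB 1024u

/* Related art: each of the n processes keeps its own full 4 kb page directory. */
static unsigned related_art_bytes(unsigned n)     { return n * 4u * KB; }

/* This embodiment: one shared 2 kb system-space directory plus a 2 kb user-space directory per process. */
static unsigned split_directory_bytes(unsigned n) { return 2u * KB + n * 2u * KB; }

int main(void)
{
    unsigned n = 100u;  /* illustrative process count, not taken from the patent */
    unsigned saved = related_art_bytes(n) - split_directory_bytes(n);
    /* Savings = 4n kb - (2 + 2n) kb = 2 kb x (n - 1); for n = 100 that is 198 kb. */
    printf("related art: %u kb, split layout: %u kb, saved: %u kb\n",
           related_art_bytes(n) / KB, split_directory_bytes(n) / KB, saved / KB);
    return 0;
}
```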
The physical address of the page directory of the system address space may be written into the first register when the system is booted, and the physical address of the page directory in the first register is not modified while the system is running. That is, the first register is used only to store the physical address of the page directory of the system address space, the system has only one physical address for the unique system address space page directory, and correspondingly, only the page directory of the unique system address space is stored in memory.
In step 200, when the first user process is started, the physical address of the page directory of the user address space corresponding to the first user process is written into the second register.
For example, before the first user process is started, no user process is running in the system. When the first user process is started, the physical address of the page directory of the user address space corresponding to the first user process can be written into the second register. The first user process does not need to write the physical address of the page directory of the system address space into the first register; instead, it directly uses the physical address of the page directory of the system address space already stored in the first register as the physical address of the page directory of its own system address space.
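A minimal sketch of the register setup just described, assuming hypothetical write_reg1()/write_reg2() accessors for the first and second MMU registers (the patent does not specify how the registers are written):

```c
#include <stdint.h>

/* Hypothetical MMU register accessors; the real hardware interface is not specified in the patent. */
extern void write_reg1(uint32_t phys_addr);  /* first register: system address space page directory */
extern void write_reg2(uint32_t phys_addr);  /* second register: user address space page directory  */

/* At boot: the unique system-space page directory is installed once and is not modified afterwards. */
void mmu_boot_init(uint32_t system_pgdir_phys)
{
    write_reg1(system_pgdir_phys);
}

/* When the first user process is started: only its user-space page directory is installed;
   the first register is left untouched and is shared by every process. */
void mmu_start_user_process(uint32_t user_pgdir_phys)
{
    write_reg2(user_pgdir_phys);
}
```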
In step 300, the page directory of the system address space is acquired according to the physical address of the page directory of the system address space in the first register, and the page directory of the user address space of the first user process is acquired according to the physical address of the page directory of the user address space of the first user process in the second register.
The page directory stores the physical addresses of page tables. The MMU can determine the page directory of the system address space according to the physical address stored in the first register, and determine the page directory of the user address space of the first user process according to the physical address stored in the second register.
In step 400, the page table of the first user process is acquired according to the page directory of the system address space and the page directory of the user address space of the first user process, and the page table of the first user process is stored in the translation lookaside buffer (TLB) of the MMU.
In step 500, the physical address of the memory unit accessed by the first user process is determined according to the page table of the first user process in the TLB, so that the first user process accesses the corresponding memory unit according to the physical address of the memory unit.
For example, the MMU determines the page table of the first user process according to the page directory of the system address space and the page directory of the first user process, and stores the page table of the first user process in the TLB, so that the first user process can obtain the physical address of the memory unit to be accessed from its linear address and access that memory unit.
Normally, the memory address in an instruction is a logical address, which needs to be converted into a linear address and then converted by the MMU into a physical address before the memory unit corresponding to the memory address can be accessed. Paging is a memory management mechanism provided by the processor, and the system can manage memory according to the paging mechanism. Paging means dividing memory into multiple units, where each unit is a page and each page can contain 4 kB of address space. To convert a linear address into a physical address, the CPU can be provided with a lookup table that maps the linear addresses of the current process to physical addresses, that is, the page table. To save the memory occupied by page tables, a linear address can be converted into a physical address through a two-level lookup of the page directory and the page table, where each process has its own page directory and page tables; each page directory entry (PDE) in the page directory indicates the physical address of a page table, and each page table entry (PTE) in the page table indicates the physical address of a physical page.
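Building on the two-level lookup just described, the sketch below models how translation could consult the two registers of this embodiment: linear addresses in the user range walk the user-space directory from the second register, while linear addresses in the system range walk the system-space directory from the first register. The 2 GB boundary check, the 512-entry indexing of each half-size directory, and the helper names are illustrative assumptions, not details fixed by the patent.

```c
#include <stdint.h>

#define PAGE_MASK        0xFFFu
#define FRAME(x)         ((x) & ~PAGE_MASK)
#define PDE_INDEX(la)    (((la) >> 22) & 0x3FFu)
#define PTE_INDEX(la)    (((la) >> 12) & 0x3FFu)
#define USER_SPACE_LIMIT 0x80000000u   /* assumed boundary: linear addresses below 2 GB are treated as user space */

extern uint32_t read_reg1(void);                 /* system address space page directory base */
extern uint32_t read_reg2(void);                 /* user address space page directory base   */
extern uint32_t phys_read32(uint32_t phys_addr); /* read one 32-bit word of physical memory  */

/* Translation in this embodiment: the page directory is chosen by address range, not per process. */
uint32_t translate_dual_register(uint32_t linear_addr)
{
    uint32_t pd_base = (linear_addr < USER_SPACE_LIMIT) ? FRAME(read_reg2())   /* user space   */
                                                        : FRAME(read_reg1());  /* system space */
    /* Each half-space directory is assumed to hold 512 entries (2 kb), so the
       directory index is taken within the selected half. */
    uint32_t pde = phys_read32(pd_base + (PDE_INDEX(linear_addr) & 0x1FFu) * 4u);
    uint32_t pte = phys_read32(FRAME(pde) + PTE_INDEX(linear_addr) * 4u);
    return FRAME(pte) | (linear_addr & PAGE_MASK);
}
```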
In the system of this embodiment there is only one system address space, and each process is configured with one user address space. In the process of mapping a linear address to a physical address, the page directory corresponding to the unique system address space and the page directory corresponding to the user address space of the current process are acquired, the page table of the current process is thereby determined, and the physical address of the memory unit to be accessed by the current process is obtained, so that the current process accesses the corresponding memory unit through the physical address. Only the page directory of the unique system address space and the page directories of the user address spaces of the multiple user processes need to be stored in memory, which avoids the memory waste caused by storing the page directories of multiple system address spaces and also avoids the problem of synchronizing the system address space between multiple processes. Moreover, by storing the page table in the TLB, the physical address corresponding to the linear address of the current process can be obtained directly from the TLB while the process runs, without separately accessing the page table, which can improve system performance.
Optionally, the TLB may be provided with a first interface for clearing the cache of the user address space, a second interface for clearing the cache of the system address space, and a third interface for clearing the page cache. Through these three interfaces, the cache of the system address space, the cache of the user address space, and the page cache stored in the TLB can be cleared separately. In practical applications, only the cache of the user address space may be cleared while the cache of the system address space is retained, which guarantees the synchronization of the system address space between multiple processes and thus improves the efficiency of process switching.
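The three clearing interfaces could be exposed to system software as three separate operations, for example as in the hedged sketch below. The function names and the command encoding are hypothetical; the patent only states that the TLB provides a first, second, and third interface.

```c
#include <stdint.h>

/* Assumed low-level hook that issues a clear command to the MMU; the actual
   mechanism (register write, dedicated instruction, etc.) is not specified in the patent. */
extern void mmu_issue_tlb_clear(uint32_t cmd);

/* First interface: clear only the cached translations of the user address space. */
static inline void tlb_clear_user_space(void)   { mmu_issue_tlb_clear(1u); }

/* Second interface: clear only the cached translations of the system address space. */
static inline void tlb_clear_system_space(void) { mmu_issue_tlb_clear(2u); }

/* Third interface: clear the page cache. */
static inline void tlb_clear_page_cache(void)   { mmu_issue_tlb_clear(3u); }
```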
Optionally, after the page table of the first user process is stored in the TLB of the MMU, the method may further include: when the first user process is switched to a second user process, clearing the page table of the first user process in the TLB; modifying the physical address of the page directory of the user address space of the first user process in the second register to the physical address of the page directory of the user address space of the second user process; acquiring the page directory of the user address space of the second user process according to the physical address of the page directory of the user address space of the second user process in the second register; acquiring a page table of the second user process according to the page directory of the system address space and the page directory of the user address space of the second user process; and storing the page table of the second user process in the TLB.
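Putting the optional switching steps together, a switch from the first user process to the second user process could look like the following sketch. It reuses the hypothetical tlb_clear_user_space() and write_reg2() helpers from the earlier sketches and deliberately leaves the first register alone, so the cached system-space translations stay valid across the switch.

```c
#include <stdint.h>

/* Hypothetical helpers from the earlier sketches (names assumed, not from the patent). */
extern void tlb_clear_user_space(void);       /* first interface: clear the user address space cache */
extern void write_reg2(uint32_t pgdir_phys);  /* second register: user address space page directory  */

/* Switch from the first user process to the second user process. */
void switch_user_process(uint32_t second_user_pgdir_phys)
{
    /* 1. Clear the first user process's page table from the TLB through the
          first interface; the system address space cache is retained. */
    tlb_clear_user_space();

    /* 2. Modify the second register so it holds the physical address of the
          second user process's user-space page directory; the first register,
          holding the shared system-space page directory, is not modified. */
    write_reg2(second_user_pgdir_phys);

    /* 3. Subsequent accesses walk the new user-space page directory, fetch the
          second user process's page table, and refill the TLB. */
}
```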
For example, when a process runs in the user address space, it uses the page directory pointed to by the second register; when it runs in the system address space, it uses the page directory pointed to by the first register. When a process modifies the system address space, the physical address of the page directory of the system address space stored in the first register changes accordingly, and when the system switches to a new process, the physical address of the page directory of the system address space that the new process reads from the first register has also changed, so that a modification of the system address space by one process is presented consistently to other processes through the first register. When the system performs a process switch, only the physical address of the page directory in the second register needs to be modified, which can improve the efficiency of process switching.
In this embodiment, the linear address space is divided into a system address space and a user address space, and a first register for storing the system address space and a second register for storing the user address space are provided on the MMU. When the system is booted, the physical address of the page directory of the system address space is stored in the first register; when the system starts the first user process, only the physical address of the page directory of the user address space of the first user process needs to be written into the second register. When a process runs in the user address space, it uses the page directory pointed to by the second register; when it runs in the system address space, it uses the page directory pointed to by the first register. When the system address space needs to be modified, the modification can be presented consistently to other processes through the first register, which solves the problem of synchronizing the system address space between different processes.
An embodiment further provides a memory management system. As shown in FIG. 4, the system can be applied to a terminal device having a memory management unit (MMU), the MMU may be provided with a first register and a second register, and the system may include:
a dividing module 100 configured to divide the linear address space into a system address space and a user address space, and store, in memory, a page directory of the system address space and a page directory of the user address space of at least one user process;
a first writing module 200 configured to write the physical address of the page directory of the system address space into the first register;
a second writing module 300 configured to, when the first user process is started, write the physical address of the page directory of the user address space corresponding to the first user process into the second register;
a page directory obtaining module 400 configured to acquire the page directory of the system address space according to the physical address of the page directory of the system address space in the first register, and acquire the page directory of the user address space of the first user process according to the physical address of the page directory of the user address space of the first user process in the second register;
a storage module 500 configured to acquire a page table of the first user process according to the page directory of the system address space and the page directory of the user address space of the first user process, and store the page table of the first user process in the translation lookaside buffer (TLB) of the MMU;
an access module 600 configured to determine, according to the page table of the first user process in the TLB, the physical address of the memory unit accessed by the first user process, so that the first user process accesses the corresponding memory unit according to the physical address of the memory unit.
Optionally, the first register is configured to store the physical address of a page directory of the system address space, and the second register is configured to store the physical address of a page directory of the user address space.
Optionally, the dividing module 100 is configured to:
divide the 4 GB linear address space equally into a system address space and a user address space, where 0-2 GB is the user address space and 3-4 GB is the system address space.
Optionally, as shown in FIG. 5, the memory management system further includes:
a clearing module 700 configured to, after the page table of the first user process is stored in the TLB of the MMU, clear the page table of the first user process in the TLB when the first user process is switched to a second user process;
a modifying module 800 configured to modify the physical address of the page directory of the user address space of the first user process in the second register to the physical address of the page directory of the user address space of the second user process;
the page directory obtaining module 400 is further configured to acquire the page directory of the user address space of the second user process according to the physical address of the page directory of the user address space of the second user process in the second register;
the storage module 500 is further configured to acquire a page table of the second user process according to the page directory of the system address space and the page directory of the user address space of the second user process, and to store the page table of the second user process in the TLB.
Optionally, the TLB may be provided with a first interface for clearing the cache of the user address space, a second interface for clearing the cache of the system address space, and a third interface for clearing the page cache;
the clearing module 700 may be configured to clear, through the first interface, the page table of the first user process in the TLB.
The multiple modules of the above memory management system have been described in detail in the above method and are not restated here one by one.
The division of modules in the above embodiments is a division by logical function; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to implement the solution of the present disclosure.
In addition, the functional units in the multiple embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit. FIG. 6 is a schematic diagram of the hardware structure of a terminal device provided by this embodiment. As shown in FIG. 6, the terminal device includes a processor 610 and a memory 620, and may also include a communication interface (Communications Interface) 630 and a bus 640.
The processor 610, the memory 620, and the communication interface 630 can communicate with one another through the bus 640. The communication interface 630 can be used for information transmission. The processor 610 can invoke logic instructions in the memory 620 to perform any one of the methods of the above embodiments.
The memory 620 can include a program storage area and a data storage area; the program storage area can store an operating system and an application program required by at least one function, and the data storage area can store data created according to the use of the terminal device, and the like. In addition, the memory may include a volatile memory such as a random access memory, and may also include a non-volatile memory, for example, at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device.
In addition, when the logic instructions in the above memory 620 are implemented in the form of software functional units and sold or used as an independent product, they can be stored in a computer readable storage medium. The technical solution of the present disclosure may be embodied in the form of a computer software product, which may be stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the method described in this embodiment.
The storage medium may be a non-transitory storage medium or a transitory storage medium. The non-transitory storage medium may include a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.
All or part of the processes in the method of the above embodiment may be completed by a computer program instructing related hardware; the program may be stored in a non-transitory computer readable storage medium, and when the program is executed, it may include the flows of the embodiments of the above method.
Industrial Applicability
The present disclosure provides a memory management method and system, which can reduce the memory occupied by the page directories of processes, avoid memory waste, and improve memory utilization and the efficiency of process switching in the system.

Claims (9)

  1. A memory management method, applied to a terminal device having a memory management unit (MMU), the MMU being provided with a first register and a second register, the method comprising:
    dividing a linear address space into a system address space and a user address space, storing, in memory, a page directory of the system address space and a page directory of the user address space of at least one user process, and writing the physical address of the page directory of the system address space into the first register;
    when a first user process is started, writing the physical address of the page directory of the user address space corresponding to the first user process into the second register;
    acquiring the page directory of the system address space according to the physical address of the page directory of the system address space in the first register, and acquiring the page directory of the user address space of the first user process according to the physical address of the page directory of the user address space of the first user process in the second register;
    acquiring a page table of the first user process according to the page directory of the system address space and the page directory of the user address space of the first user process, and storing the page table of the first user process in a translation lookaside buffer (TLB) of the MMU;
    determining, according to the page table of the first user process in the TLB, the physical address of a memory unit accessed by the first user process, so that the first user process accesses the corresponding memory unit according to the physical address of the memory unit.
  2. The method according to claim 1, wherein dividing the linear address space into a system address space and a user address space comprises:
    dividing the 4 GB linear address space equally into a system address space and a user address space, wherein 0-2 GB is the user address space and 3-4 GB is the system address space.
  3. The method according to claim 1, wherein after the page table of the first user process is stored in the TLB of the MMU, the method further comprises:
    when the first user process is switched to a second user process, clearing the page table of the first user process in the TLB;
    modifying the physical address of the page directory of the user address space of the first user process in the second register to the physical address of the page directory of the user address space of the second user process;
    acquiring the page directory of the user address space of the second user process according to the physical address of the page directory of the user address space of the second user process in the second register;
    acquiring a page table of the second user process according to the page directory of the system address space and the page directory of the user address space of the second user process;
    storing the page table of the second user process in the TLB.
  4. The method according to claim 1 or 3, wherein the TLB is provided with a first interface for clearing the page table of a user process;
    clearing the page table of the first user process in the TLB comprises:
    clearing, through the first interface, the page table of the first user process in the TLB.
  5. A memory management system, applied to a terminal device having a memory management unit (MMU), the MMU being provided with a first register and a second register, the system comprising:
    a dividing module configured to divide a linear address space into a system address space and a user address space, and store, in memory, a page directory of the system address space and a page directory of the user address space of at least one user process;
    a first writing module configured to write the physical address of the page directory of the system address space into the first register;
    a second writing module configured to, when a first user process is started, write the physical address of the page directory of the user address space corresponding to the first user process into the second register;
    a page directory obtaining module configured to acquire the page directory of the system address space according to the physical address of the page directory of the system address space in the first register, and acquire the page directory of the user address space of the first user process according to the physical address of the page directory of the user address space of the first user process in the second register;
    a storage module configured to acquire a page table of the first user process according to the page directory of the system address space and the page directory of the user address space of the first user process, and store the page table of the first user process in a translation lookaside buffer (TLB) of the MMU;
    an access module configured to determine, according to the page table of the first user process in the TLB, the physical address of a memory unit accessed by the first user process, so that the first user process accesses the corresponding memory unit according to the physical address of the memory unit.
  6. The system according to claim 5, wherein the dividing module is configured to:
    divide the 4 GB linear address space equally into a system address space and a user address space, wherein 0-2 GB is the user address space and 3-4 GB is the system address space.
  7. The system according to claim 5, further comprising:
    a clearing module configured to, after the page table of the first user process is stored in the TLB of the MMU, clear the page table of the first user process in the TLB when the first user process is switched to a second user process;
    a modifying module configured to modify the physical address of the page directory of the user address space of the first user process in the second register to the physical address of the page directory of the user address space of the second user process;
    wherein the page directory obtaining module is further configured to acquire the page directory of the user address space of the second user process according to the physical address of the page directory of the user address space of the second user process in the second register;
    and the storage module is further configured to acquire a page table of the second user process according to the page directory of the system address space and the page directory of the user address space of the second user process, and to store the page table of the second user process in the TLB.
  8. The system according to claim 5 or 7, wherein the TLB is provided with a first interface for clearing the cache of the user address space;
    wherein the clearing module is configured to clear, through the first interface, the page table of the first user process in the TLB.
  9. A computer readable storage medium storing computer executable instructions, the computer executable instructions being used to perform the method according to any one of claims 1-4.
PCT/CN2017/107852 2016-10-27 2017-10-26 Memory management method and system WO2018077219A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610955230.2A CN106502924B (zh) 2016-10-27 2016-10-27 Memory optimization method and system
CN201610955230.2 2016-10-27

Publications (1)

Publication Number Publication Date
WO2018077219A1 true WO2018077219A1 (zh) 2018-05-03

Family

ID=58322360

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/107852 WO2018077219A1 (zh) 2016-10-27 2017-10-26 内存管理方法及系统

Country Status (2)

Country Link
CN (1) CN106502924B (zh)
WO (1) WO2018077219A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106502924B (zh) * 2016-10-27 2020-02-07 深圳创维数字技术有限公司 一种内存优化方法及系统
CN109766286A (zh) * 2018-11-26 2019-05-17 武汉船舶通信研究所(中国船舶重工集团公司第七二二研究所) 一种内存访问方法及装置

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101178693A (zh) * 2007-12-14 2008-05-14 沈阳东软软件股份有限公司 Data caching method and system
US20090182976A1 (en) * 2008-01-15 2009-07-16 Vmware, Inc. Large-Page Optimization in Virtual Memory Paging Systems
CN102662869A (zh) * 2012-04-01 2012-09-12 龙芯中科技术有限公司 Memory access method and apparatus in a virtual machine, and lookup device
CN103164348A (zh) * 2013-02-28 2013-06-19 浙江大学 Method for protecting memory occupied by a real-time operating system in a multi-system environment
CN105988875A (zh) * 2015-03-04 2016-10-05 华为技术有限公司 Method and apparatus for running a process
US20160321186A1 (en) * 2012-11-02 2016-11-03 International Business Machines Corporation Suppressing virtual address translation utilizing bits and instruction tagging
CN106502924A (zh) * 2016-10-27 2017-03-15 深圳创维数字技术有限公司 Memory optimization method and system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100342353C (zh) * 2006-04-07 2007-10-10 浙江大学 Process mapping implementation method in an embedded operating system
US8799592B2 (en) * 2011-04-20 2014-08-05 International Business Machines Corporation Direct memory access-like data transfer between guest operating systems
CN102306108B (zh) * 2011-08-01 2014-04-23 西安交通大学 Implementation method of MMU-based peripheral access control in an ARM virtual machine
US8578129B2 (en) * 2011-12-14 2013-11-05 Advanced Micro Devices, Inc. Infrastructure support for accelerated processing device memory paging without operating system integration
CN105283855B (zh) * 2014-04-25 2018-01-23 华为技术有限公司 Addressing method and apparatus
CN105488388A (zh) * 2015-12-22 2016-04-13 中软信息系统工程有限公司 Method for implementing an application software behavior monitoring system based on a CPU space-time isolation mechanism

Also Published As

Publication number Publication date
CN106502924B (zh) 2020-02-07
CN106502924A (zh) 2017-03-15

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17865229; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 17865229; Country of ref document: EP; Kind code of ref document: A1)