WO2018196839A1 - Memory access method and computer system - Google Patents

Memory access method and computer system

Info

Publication number
WO2018196839A1
WO2018196839A1 (PCT/CN2018/084777; CN2018084777W)
Authority
WO
WIPO (PCT)
Prior art keywords
page
memory
small
physical
physical address
Prior art date
Application number
PCT/CN2018/084777
Other languages
English (en)
French (fr)
Inventor
刘海坤
陈吉
余国生
Original Assignee
华为技术有限公司
华中科技大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 and 华中科技大学
Priority to EP18790799.3A priority Critical patent/EP3608788B1/en
Publication of WO2018196839A1 publication Critical patent/WO2018196839A1/zh
Priority to US16/664,757 priority patent/US20200057729A1/en

Classifications

    • G06F12/1027 — Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]
    • G06F12/1036 — TLB for multiple virtual address spaces, e.g. segmentation
    • G06F12/1009 — Address translation using page tables, e.g. page table structures
    • G06F3/0604 — Improving or facilitating administration, e.g. storage management
    • G06F3/0647 — Migration mechanisms
    • G06F3/0656 — Data buffering arrangements
    • G06F3/0683 — Plurality of storage devices
    • G06F2212/205 — Hybrid memory, e.g. using both volatile and non-volatile memory
    • G06F2212/2515 — Local memory within processor subsystem being configurable for different purposes, e.g. as cache or non-cache memory
    • G06F2212/657 — Virtual address space management

Definitions

  • the present application relates to the field of computer technologies, and in particular, to a memory access method and a computer system.
  • NVM non-volatile memory
  • DRAM Dynamic Random Access Memory
  • Combining NVM with DRAM forms a hybrid memory that expands memory capacity. Because NVM is slower to read and write than DRAM and has a shorter write endurance, memory blocks in the NVM with frequent read and write operations are usually migrated to the DRAM in order to increase memory access speed and extend the service life of the hybrid memory.
  • The computer system translates between virtual addresses and physical addresses through a Translation Lookaside Buffer (TLB).
  • the physical page of the memory is usually set to a large page, such as 2 MB.
  • When hybrid memory is combined with physical large pages, a physical large page of the NVM must be split into multiple physical small pages so that the physical small pages with frequent read and write operations can be migrated to the DRAM.
  • However, this changes the granularity of memory addressing from a physical large page to a physical small page, which lowers the probability of hitting the mapping between virtual address and physical address in the TLB and degrades address translation performance.
  • The embodiments of the present application provide a memory access method and a computer system that preserve the TLB hit rate even when part of the data in a large page has been migrated.
  • In a first aspect, an embodiment of the present application provides a memory access method applied to a computer system with hybrid memory, where the hybrid memory includes a first memory and a second memory, the first memory being a non-volatile memory. The method includes the following steps. First, a memory management unit (MMU) receives a first access request, where the access request carries a first virtual address. The MMU then converts the first virtual address into a first physical address according to a first page table cache of the computer system, where the first physical address is the physical address of a first large page in the first memory, and the first large page includes a plurality of small pages. Next, while accessing the first memory according to the first physical address, the memory controller determines that the data of a first small page of the first large page has been migrated to the second memory, and accesses the second memory according to a second physical address stored in the first small page, where the second physical address is the physical address of a second small page in the second memory. The second small page stores the data migrated from the first small page; the second memory includes a plurality of small pages, and the size of a small page in the second memory is smaller than the size of a large page in the first memory.
  • In the above method, the memory pages in the page table of the computer system are still organized as large pages, but the computer system provided by the embodiment of the present invention sets a plurality of small pages within each large page, so that a small page within a physical large page can be migrated separately.
  • When the memory controller accesses the non-volatile memory according to the first physical address of the first large page and determines that the data of the first small page in the first large page has been migrated to the second memory (i.e., the volatile memory), the memory controller can access the migrated data according to the physical address of the second small page stored in the first small page.
  • Therefore, even after a small page within a large page has been migrated, accesses can still be performed at large-page granularity, which preserves the excellent address translation performance of large-page memory while satisfying the need for hot-data migration in hybrid memory. The TLB hit rate is thus maintained even when part of the data in a large page has been migrated.
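  • The redirected access described above can be sketched as a simplified Python model. All names and structures here (the dictionary-based memories, `SmallPage`, the page sizes) are illustrative assumptions, not the patent's actual hardware design:

```python
# Simplified model of the migrated-small-page redirection described above.
# A PCM large page is a list of small pages; a migrated small page stores
# the DRAM small-page address instead of data.

LARGE_PAGE_SIZE = 2 * 1024 * 1024   # 2 MB physical large page in the NVM
SMALL_PAGE_SIZE = 4 * 1024          # 4 KB small pages inside each large page
SMALL_PAGES_PER_LARGE = LARGE_PAGE_SIZE // SMALL_PAGE_SIZE

class SmallPage:
    def __init__(self):
        self.data = bytes(SMALL_PAGE_SIZE)
        self.redirect = None        # DRAM small-page address if migrated

class MemoryController:
    def __init__(self):
        self.pcm = {}               # large-page base address -> list of SmallPage
        self.dram = {}              # DRAM small-page address -> page data

    def read(self, large_page_addr, offset):
        """Access the PCM at large-page granularity; follow the stored
        redirect if the targeted small page has been migrated to DRAM."""
        small_idx, small_off = divmod(offset, SMALL_PAGE_SIZE)
        page = self.pcm[large_page_addr][small_idx]
        if page.redirect is not None:
            # Small page was migrated: the PCM small page holds the DRAM
            # small-page address instead of data, so jump to the DRAM.
            return self.dram[page.redirect][small_off]
        return page.data[small_off]
```

  • Note that the virtual-to-physical translation still resolves only the large page; the small-page check happens inside the memory controller, which is why the TLB mapping stays at large-page granularity.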
  • In a possible implementation, the computer system monitors the number of accesses to each small page of a physical large page and, when the access count of any small page exceeds a set threshold, migrates the data of that small page to the DRAM 52 and writes the address of the allocated DRAM 52 physical small page into the small page whose data was migrated. Because the DRAM 52 small-page address is recorded inside the migrated small page, the computer system can still locate the small page through the mapping between the physical large page and the virtual page, read the DRAM 52 small-page address from it, and thereby access the data migrated to the second memory.
  • In a possible implementation, the computer system maintains a bitmap that stores, for each small page of the first memory, whether its data has been migrated; for every small page whose data is migrated, an identifier indicating the migration is set in the bitmap. After the data of the first small page is migrated into the second small page, a first identifier is set in the bitmap, where the first identifier indicates that the data in the first small page has been migrated.
  • In a possible implementation, the computer system further includes a second page table cache. After the data in the first small page is migrated to the second small page, a mapping relationship between a second virtual address and the second physical address is added to the second page table cache, where the second page table cache records the mapping relationship between virtual addresses and the physical addresses of the small pages of the second memory.
  • With this mapping, the MMU 20 can quickly determine from the second page table cache that the physical address of the memory storing the data is a physical small page in the second memory, and access the target data according to that small-page address, reducing the time consumed by memory access and improving memory access efficiency.
  • The computer system accesses the data migrated to the second memory as follows: the MMU 20 receives a second access request that includes the second virtual address; the MMU 20 obtains, according to the second page table cache, the second physical address that has a mapping relationship with the second virtual address; the MMU 20 then sends the second physical address to the memory controller, and the memory controller accesses the second memory.
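  • The two-cache lookup described above can be sketched as follows. The flat dictionaries standing in for the first and second page table caches, and the concrete page sizes, are illustrative assumptions:

```python
# Illustrative sketch of the two page table caches: the second cache maps
# virtual small pages directly to DRAM physical small pages, so migrated
# data is found without first touching the PCM large page.

LARGE_PAGE_SIZE = 2 * 1024 * 1024   # assumed 2 MB PCM large page
SMALL_PAGE_SIZE = 4 * 1024          # assumed 4 KB DRAM small page

first_cache = {}    # virtual large page number -> PCM large-page physical base
second_cache = {}   # virtual small page number -> DRAM small-page physical base

def translate(vaddr):
    """Return ('dram', phys) on a second-cache hit, else ('pcm', phys)."""
    vsp, small_off = divmod(vaddr, SMALL_PAGE_SIZE)
    if vsp in second_cache:                      # fast path for migrated data
        return ('dram', second_cache[vsp] + small_off)
    vlp, large_off = divmod(vaddr, LARGE_PAGE_SIZE)
    return ('pcm', first_cache[vlp] + large_off)
```

  • Before a migration, the second cache misses and the access resolves through the large-page mapping; after the mapping is added, the same virtual address resolves straight to the DRAM.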
  • In a second aspect, an embodiment of the present application provides a computer system including a processor, a memory management unit (MMU), a memory controller, and a hybrid memory, where the hybrid memory includes a first memory and a second memory, the first memory being a non-volatile memory and the second memory being a volatile memory.
  • The MMU is configured to: receive a first access request sent by the processor, where the access request carries a first virtual address; and convert the first virtual address into a first physical address according to a first page table cache, the first physical address being the physical address of a first large page in the first memory. The first page table cache records the mapping relationship between virtual addresses and the physical addresses of the large pages of the first memory, and each large page of the first memory includes a plurality of small pages.
  • The memory controller is configured to access the first memory according to the first physical address and, upon determining during that access that the data of a first small page of the first large page has been migrated to the second memory, to access the second memory according to a second physical address stored in the first small page, where the second physical address is the physical address of a second small page in the second memory. The second small page stores the data migrated from the first small page; the second memory includes a plurality of small pages, and the size of a small page in the second memory is smaller than the size of a large page in the first memory.
  • the memory controller is further configured to: migrate the data in the first small page to the second small page when the number of accesses to the first small page exceeds a set threshold, and store the second physical address of the second small page in the first small page.
  • the memory controller is further configured to: after migrating the data of the first small page into the second small page, set a first identifier in a preset bitmap, where the first identifier indicates that the data in the first small page has been migrated.
  • the computer system further includes a second page table cache, where the second page table cache is configured to record the mapping relationship between virtual addresses and the physical addresses of the small pages of the second memory;
  • the processor is further configured to: after the data in the first small page is migrated into the second small page, add a mapping relationship between a second virtual address and the second physical address to the second page table cache.
  • the MMU is further configured to: receive a second access request sent by the processor, where the second access request includes a second virtual address, and obtain, according to the second page table cache, the second physical address that has a mapping relationship with the second virtual address; the memory controller is further configured to: access the second memory according to the second physical address.
  • an embodiment of the present application provides a memory access device, which is applied to perform memory access in a computer system.
  • the computer system includes a hybrid memory including a first memory and a second memory, wherein the first memory is a non-volatile memory and the second memory is a volatile memory.
  • Memory access devices include:
  • a receiving module configured to receive a first access request, where the access request carries a first virtual address;
  • a conversion module configured to convert the first virtual address into a first physical address according to the first page table cache in the computer system, where the first physical address is the physical address of the first large page in the first memory, and the first large page contains a plurality of small pages;
  • an access module configured to, upon determining during the access to the first memory according to the first physical address that the data of the first small page in the first large page has been migrated to the second memory, access the second memory according to the second physical address stored in the first small page, where the second physical address is the physical address of the second small page in the second memory, the second small page stores the data migrated from the first small page, the second memory includes a plurality of small pages, and the size of the small pages in the second memory is smaller than the size of the large pages in the first memory.
  • In a possible implementation, the memory access device further includes a migration module configured to migrate the data in the first small page to the second small page when the number of accesses to the first small page exceeds a set threshold, and to store the second physical address of the second small page in the first small page.
  • the memory access device further includes:
  • an identifier module configured to set a first identifier in the set bitmap after the data of the first small page is migrated into the second small page, where the first identifier is used to indicate that the data in the first small page has been migrated.
  • the computer system further includes a second page table cache
  • the memory access device further includes:
  • a mapping module configured to add a mapping relationship between the second virtual address and the second physical address to the second page table cache after the data in the first small page is migrated to the second small page, where the second page table cache records the mapping relationship between virtual addresses and the physical addresses of the small pages of the second memory;
  • the receiving module is further configured to: receive a second access request, where the second access request includes a second virtual address, and obtain, according to the second page table cache, the second physical address that has a mapping relationship with the second virtual address;
  • the access module is further configured to: access the second memory according to the second physical address.
  • The present application further provides a computer program product comprising program code, the program code comprising instructions that, when executed by a computer, implement the method described in the first aspect or any one of its possible implementations.
  • The present application further provides a computer-readable storage medium for storing program code, the program code comprising instructions that, when executed by a computer, implement the method described in the first aspect or any one of its possible implementations.
  • FIG. 1 is a schematic structural diagram of a computer system according to an embodiment of the present application.
  • FIG. 2 to FIG. 5 are schematic flowcharts of a memory access method according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a memory access device according to an embodiment of the present application.
  • the present application provides a memory access method and a computer system for solving the technical problem that the mixed memory and the physical large page technology are difficult to be combined.
  • The memory access method and the computer system are based on the same inventive concept. Because they solve the problem on similar principles, the implementations of the computer system and the method may refer to each other, and repeated descriptions are omitted.
  • The "data" described in the embodiments of the present application is data in a broad sense: it may be the instruction code of an application or the data an application uses at run time.
  • "A plurality of" in the embodiments of the present application means two or more.
  • The terms "first", "second", and the like are used only to distinguish objects of description and are not to be understood as indicating or implying relative importance or order.
  • the computer system in the embodiment of the present application may have various forms, such as a personal computer, a server, a tablet computer, a smart phone, and the like.
  • FIG. 1 shows a possible architecture of the computer system in an embodiment of the present application.
  • the computer system includes a processor 10, a memory management unit (MMU) 20, a TLB 30, a memory controller 40, and a hybrid memory 50.
  • the computer system further includes auxiliary storage for expanding the data storage capacity of the computer system.
  • The processor 10 is the computing and control center of the computer system; the MMU 20 is configured to implement the conversion between memory virtual addresses and memory physical addresses, so that the processor 10 can access the hybrid memory 50 by memory virtual address.
  • the TLB 30 is used to store a mapping between a virtual address and a physical address of the memory. Specifically, the mapping may be a mapping between a physical page number of the memory and a virtual page number to improve the efficiency of the MMU for address translation.
  • the memory controller 40 is configured to receive a memory physical address from the MMU 20 and access the hybrid memory 50 according to the physical address of the memory.
  • the hybrid memory 50 includes a first memory and a second memory.
  • The first memory is a non-volatile memory (NVM), such as a phase change memory (PCM), a ferroelectric random access memory (FeRAM), or a magnetic random access memory (MRAM); the second memory is a volatile memory, such as a DRAM.
  • the following describes the technical solution of the embodiment of the present application by taking the first memory as the PCM 51 and the second memory as the DRAM 52 as an example.
  • the application's virtual address space is divided into multiple fixed-size virtual pages, and the physical memory is divided into physical pages of the same size.
  • any page of data can be placed on any physical page, and these physical pages can also be discontinuous.
  • the mapping between the physical page number and the virtual page number is recorded in the page table, and the page table is recorded in the memory.
  • When an application reads or writes the memory physical address corresponding to a virtual address, the computer system first determines the virtual page number containing the virtual address and the offset within that virtual page, looks up the page table to find the physical page corresponding to the virtual page, and then accesses that offset within the physical page, i.e., the memory physical address the application wants to access.
  • The TLB is set in the computer system as a high-speed cache for address translation; it stores frequently used page table entries, which are a subset of the page table. When the computer system performs memory addressing, it first tries to find a matching entry in the TLB for address translation. If the entry for the target virtual address is not found in the TLB, i.e., on a TLB miss, the corresponding entry is looked up in the page table in memory.
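  • The translation sequence above can be made concrete with a small worked example. The 2 MB page size matches the document; the specific page-table contents are illustrative assumptions:

```python
# Worked example of the address translation described above: split the
# virtual address into a virtual page number and an in-page offset,
# consult the TLB first, and walk the in-memory page table only on a miss.

PAGE_SIZE = 2 * 1024 * 1024                     # 2 MB large page

page_table = {0: 0x40000000, 1: 0x40200000}     # VPN -> physical page base
tlb = {0: 0x40000000}                           # cached subset of page_table

def translate(vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn in tlb:                               # TLB hit: no page-table walk
        return tlb[vpn] + offset
    base = page_table[vpn]                       # TLB miss: walk the page table...
    tlb[vpn] = base                              # ...and refill the TLB entry
    return base + offset
```

  • The larger the page, the more address space each TLB entry covers, which is exactly why the document keeps the page table at large-page granularity.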
  • The physical page is usually set to a large page; for example, the size of the physical page is set to 2 MB.
  • The page table may be stored in the PCM 51 or in the DRAM 52, or part of the page table may be stored in the PCM 51 and the rest in the DRAM 52. Because the cost per unit of storage space of the PCM 51 is relatively low, the storage space of the PCM 51 is generally larger than that of the DRAM 52. This larger storage space makes the PCM 51 well suited to large-page memory technology; that is, the physical pages of the PCM can be set larger, for example, to 2 megabytes (MB).
  • In this embodiment, the physical page of the PCM 51 is referred to as a physical large page, and the physical page of the DRAM 52 is referred to as a physical small page; the physical large page of the PCM 51 is larger than the physical small page of the DRAM 52.
  • The page table that stores the mapping between the physical large pages of the PCM 51 and the virtual pages in the virtual address space is referred to as the first page table, and a first page table cache may be stored in the TLB 30. The first page table cache holds a subset of the page table entries of the first page table, so the MMU 20 can quickly convert the virtual address in a memory access request into a memory physical address of the PCM 51, and the memory controller 40 then accesses the PCM 51 according to that physical address.
  • the method for accessing memory provided by the embodiment of the present application is described below, and the method includes the following steps:
  • Step 601: The processor 10 sends a memory access request to the MMU 20, where the memory access request carries a target virtual address.
  • Step 602: The MMU 20 determines the memory physical address corresponding to the target virtual address and sends the memory physical address to the memory controller.
  • Step 603: The memory controller 40 accesses the PCM 51 according to the memory physical address; when it determines that the data of a small page within the accessed physical large page has been migrated to the DRAM 52, it reads the address of the corresponding DRAM 52 physical small page from that small page and accesses the DRAM 52 according to that address.
  • The physical large page of the PCM 51 includes a plurality of small pages, and the data of any one small page of the physical large page can be migrated to the DRAM 52 separately, without migrating the entire physical large page.
  • The address of the DRAM 52 physical small page that stores the migrated data is written into the small page of the PCM 51 from which the data was migrated. The MMU 20 can therefore still address the data through the physical large page number of the PCM 51, and when an access lands on a migrated small page, the memory controller reads the DRAM 52 small-page address stored there and redirects the access to the DRAM 52.
  • Because the memory pages in the page table of the computer system are still organized as large pages, the high TLB hit rate is preserved. The computer system provided by this embodiment sets a plurality of small pages within each large page, so when part of the data in a large page needs to be migrated, the small pages can be migrated separately.
  • When the memory controller accesses the non-volatile memory according to the first physical address of the first large page and determines that the data of the first small page in the first large page has been migrated to the second memory (i.e., the volatile memory), it accesses the migrated data according to the physical address of the second small page stored in the first small page. Thus, even after small pages within a large page have been migrated, accesses still proceed at large-page granularity, preserving the excellent address translation performance of large-page memory while satisfying the need for hot-data migration in hybrid memory; the TLB hit rate is maintained even when part of the data in a large page has been migrated.
  • the memory access method provided by the embodiment of the present invention further includes the following steps:
  • Step 604: The memory controller 40 records the number of accesses to each small page of the physical large pages of the PCM 51.
  • Step 604 can also be implemented by the processor 10 running the operating system.
  • Step 605: When the number of accesses to a small page of a physical large page of the PCM 51 exceeds the set threshold, the memory controller 40 migrates the data of that small page to a physical small page of the DRAM 52 and writes the address of the DRAM 52 physical small page into the small page whose data was migrated.
  • The "number of accesses" above may be the total historical access count of the small page or the access count within a most recent preset time period.
  • When the access count of a small page exceeds the set threshold, the small page holds hot data: its data is migrated to a physical small page of the DRAM 52, and the address of that DRAM 52 physical small page is written into the small page whose data was migrated, so that the computer system can access the migrated data through the flow of steps 601 to 603.
  • The size of each small page of a physical large page of the PCM 51 may equal the size of a physical small page of the DRAM 52, in which case one physical small page stores the data migrated from one small page of the physical large page.
  • Alternatively, each small page of a physical large page of the PCM 51 may be larger than a physical small page of the DRAM 52, in which case the data migrated from one small page of the physical large page is stored across a plurality of physical small pages, and the address of the first of those physical small pages is written into the small page whose data was migrated.
  • Migrating small pages within a physical large page separately, instead of migrating the entire physical large page, reduces the time consumed by data migration and the consumption of input/output (IO) resources.
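  • Steps 604 and 605 can be sketched together as follows. The threshold value, the free-list management, and the class structure are illustrative assumptions; the patent does not specify them:

```python
# Sketch of steps 604-605: count accesses per small page and, when a small
# page's count crosses a threshold, migrate its data to a free DRAM small
# page and leave the DRAM address behind for later redirection.

THRESHOLD = 100                     # illustrative hot-page threshold

class Migrator:
    def __init__(self, free_dram_pages):
        self.counts = {}            # (large page, small index) -> access count
        self.free_dram = list(free_dram_pages)
        self.dram = {}              # DRAM small-page address -> migrated data
        self.redirects = {}         # (large page, small index) -> DRAM address

    def on_access(self, large, small, data):
        key = (large, small)
        self.counts[key] = self.counts.get(key, 0) + 1
        if self.counts[key] > THRESHOLD and key not in self.redirects:
            dram_addr = self.free_dram.pop()    # allocate a DRAM small page
            self.dram[dram_addr] = data         # copy the hot small page out
            self.redirects[key] = dram_addr     # record address for redirection
```

  • Only the hot small page is copied, which is the IO saving the paragraph above describes: a single 4 KB copy instead of a whole 2 MB large page.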
  • The memory controller 40 can determine that the data of a small page within the accessed physical large page has been migrated to the DRAM 52 in several ways.
  • In one implementation, when the memory controller accesses the small page, it determines that the content stored in the small page is not data but a memory physical address.
  • In another implementation, the computer system maintains a bitmap that stores, for each small page of the PCM 51, whether its data has been migrated; for every small page whose data is migrated, an identifier indicating that the data of that small page has been migrated is set in the bitmap.
  • Table 1 shows a possible implementation of the bitmap. A migration flag of 0 indicates that the small page has not been migrated and 1 indicates that it has been migrated; Table 1 thus shows that the first, second, and fourth small pages of physical large page B have not been migrated, while the third small page has been migrated. By querying the bitmap, the memory controller can determine whether any small page of the PCM 51 has been migrated.
  • The bitmap may be stored in storage space inside the memory controller 40, or in a storage device outside the memory controller 40, such as various cache devices.
  • In the above technical solution, by setting in the bitmap an identifier for each small page whose data has been migrated, the memory controller 40 can quickly read the address from the small page and access the DRAM 52, improving the efficiency of memory access.
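The bitmap query described above can be sketched as follows. This is an illustration only; the class and all names below are assumptions, not structures defined in the patent.

```python
class MigrationBitmap:
    """One bit per small page of each physical large page:
    1 means the small page's data has been migrated to the DRAM."""

    def __init__(self):
        self.bits = {}  # physical large page number -> integer bit field

    def mark_migrated(self, large_page, small_index):
        self.bits[large_page] = self.bits.get(large_page, 0) | (1 << small_index)

    def clear_migrated(self, large_page, small_index):
        # Used when the data is later migrated back from the DRAM to the PCM.
        self.bits[large_page] = self.bits.get(large_page, 0) & ~(1 << small_index)

    def is_migrated(self, large_page, small_index):
        return bool(self.bits.get(large_page, 0) & (1 << small_index))


# Table 1's example: only the third small page (index 2) of large page B is migrated.
bitmap = MigrationBitmap()
bitmap.mark_migrated("B", 2)
```

Clearing the flag mirrors the migrate-back case described later, where the identifier is deleted from the bitmap.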
  • Optionally, in addition to the first page table cache, the TLB 30 stores a second page table cache whose page table entries record the mapping between virtual small page numbers in the virtual address space and physical small pages of the DRAM 52. A virtual small page is a virtual page formed by dividing the virtual address space according to the size of a physical small page of the DRAM 52. To distinguish it from the virtual pages in the first page table cache, this application refers to a virtual page formed by dividing the virtual address space according to the physical large pages of the PCM 51 as a virtual large page, and to a virtual page formed by dividing the virtual address space according to the physical small pages of the DRAM 52 as a virtual small page.
  • After the data of a small page of a physical large page is migrated to a physical small page, the computer system adds the mapping between that physical small page and the corresponding virtual small page to the second page table cache in the TLB 30. When the processor 10 then accesses the migrated data, the MMU 20 can quickly determine from this mapping that the physical memory address storing the data is a physical small page in the DRAM 52 and access the target data according to the address of that physical small page, without performing the redirected access described in steps 602 to 603. This further shortens the time consumed by memory access and improves its efficiency.
  • The first page table cache and the second page table cache may be stored in the same TLB physical entity, or the computer system may include two TLBs that store the first page table cache and the second page table cache respectively. The mapping between a physical small page and a virtual small page may be added to the second page table cache before or after the migrated data is first accessed. In addition, adding this mapping to the second page table cache may be implemented by the processor running operating system instructions.
  • Referring to FIG. 4, in combination with the above optional embodiment in which the TLB 30 stores the second page table cache, the memory access method further includes the following steps:
  • Step 606 The processor 10 sends a memory access request to the MMU 20, where the memory access request carries a target virtual address.
  • Step 607 The MMU 20 hits the page table entry of the target virtual address in the second page table cache, and determines the address of the physical small page of the DRAM 52 that has a mapping relationship with the target virtual address.
  • Step 608 the MMU 20 sends the determined address of the physical small page of the DRAM 52 to the memory controller.
  • Step 609 the memory controller 40 accesses the DRAM 52 according to the address of the physical small page of the DRAM 52.
  • In the above technical solution, after a small page of the PCM 51 has been migrated to a physical small page of the DRAM 52, the computer system can quickly determine from the second page table cache that the physical memory address storing the data is a physical small page in the DRAM 52, and access the target data according to the address of that physical small page, improving the efficiency of memory access.
  • Optionally, the PCM 51 and the DRAM 52 are addressed within a unified address space, for example with the DRAM 52 at the low addresses and the PCM 51 at the high addresses, managed uniformly by the operating system. The hybrid memory composed of the PCM 51 and the DRAM 52 is connected to the processor 10 through a system bus, and data read/write accesses are performed through the memory controller 40. The hybrid memory and the auxiliary storage are connected through an input/output (I/O) interface for data exchange. When a process requests memory from the operating system, only PCM 51 memory is allocated; the DRAM 52 stores the data of write-hot storage blocks migrated from the PCM 51 and is not directly allocated to processes.
  • Referring to FIG. 5, a flow of a memory access method applying the method provided by the embodiments of this application is introduced below; it includes:
  • Step 701 The processor 10 sends a memory access request to the MMU 20, where the memory access request carries a target virtual address. Go to step 702.
  • Step 702: The MMU 20 queries the page table entries for the target virtual address in the first page table cache and the second page table cache stored by the TLB. If there is a hit in the second page table cache, go to step 703; if the second page table cache misses but the first page table cache hits, perform step 705; if both the first page table cache and the second page table cache miss, perform TLB miss processing.
  • After receiving the memory access request, the MMU 20 first calculates the virtual large page number and the virtual small page number from the target virtual address. For example, assuming 2 MB physical large pages in the PCM 51 and 4 KB physical small pages in the DRAM 52, for the virtual address VA: 0100 1001 0110 1010 0011 1111 0001 1011, the virtual large page number is big_vpn = VA >> 21 (the virtual address shifted right by 21 bits) and the virtual small page number is small_vpn = VA >> 12 (shifted right by 12 bits).
  • Then, the MMU 20 queries the mapping of the virtual large page number in the first page table cache and the mapping of the virtual small page number in the second page table cache. One query order is: the MMU 20 first looks up the virtual small page number in the second page table cache, and looks up the virtual large page number in the first page table cache only after the second page table cache misses. Another query order is: the MMU 20 queries both caches simultaneously; if the second page table cache hits, it stops looking in the first page table cache, whereas if the first page table cache hits, it still needs to continue looking in the second page table cache.
  • Step 703 The MMU 20 determines the address of the physical small page of the DRAM 52 corresponding to the virtual small page according to the second page table cache, and sends the address of the physical small page of the DRAM 52 to the memory controller. Go to step 704.
  • Step 704 the memory controller 40 accesses the DRAM 52 according to the address of the physical small page of the DRAM 52.
  • Step 705 The MMU 20 determines the address of the physical large page of the PCM 51 corresponding to the virtual large page according to the first page table cache, and sends the address of the physical large page of the PCM 51 to the memory controller. Go to step 706.
  • Step 706 The memory controller 40 determines, according to the bitmap, whether the data of the small page in the physical large page corresponding to the target virtual address is migrated. If yes, step 707 is performed; otherwise, step 708 is performed.
  • The small page within the physical large page corresponding to the target virtual address is determined as follows: the page number of the physical large page has already been determined in step 705, and the PCM 51 physical address corresponding to the target virtual address is the large page offset position within that physical large page. Continuing the 2 MB physical large page example of step 702, the large page offset is big_offset = VA & ((1 << 21) - 1), i.e., the low 21 bits of the virtual address; from the physical large page number and the large page offset, the small page within the physical large page corresponding to the target virtual address can be located.
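Assuming the sizes used in this example (2 MB physical large pages and 4 KB physical small pages), the address decomposition described above can be sketched as follows; the function name is an illustrative assumption:

```python
LARGE_SHIFT = 21  # 2 MB physical large pages in the PCM 51
SMALL_SHIFT = 12  # 4 KB physical small pages in the DRAM 52

def split_address(va):
    """Split a virtual address into the virtual large page number, the
    index of the small page inside the large page, and the byte offset
    inside that small page."""
    big_vpn = va >> LARGE_SHIFT
    big_offset = va & ((1 << LARGE_SHIFT) - 1)    # low 21 bits
    small_index = big_offset >> SMALL_SHIFT       # which 4 KB small page
    small_offset = va & ((1 << SMALL_SHIFT) - 1)  # low 12 bits
    return big_vpn, small_index, small_offset
```

With these sizes each large page contains 2 MB / 4 KB = 512 small pages, so `small_index` is a 9-bit value.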
  • Step 707 The memory controller reads the address of the physical small page of the DRAM 52 from the small page, and accesses the DRAM 52 according to the address of the physical small page of the DRAM 52.
  • Step 708 the memory controller accesses the PCM 51 according to the address of the physical large page of the PCM 51. Go to step 709.
  • Step 709: The memory controller increments the access count of the accessed small page of the physical large page by one and determines whether that small page's access count exceeds the set threshold. If it does, step 710 is performed.
  • Step 710: The memory controller migrates the data of the small page whose access count exceeds the set threshold to a physical small page of the DRAM 52 and writes the address of that physical small page of the DRAM 52 into the small page from which the data was migrated. Go to step 711.
  • Step 711 The processor 10 adds a mapping of the physical small page of the DRAM 52 to the virtual small page in the second page table cache.
  • The TLB miss processing is as follows: the MMU 20 queries the first page table in memory, finds the mapping of the virtual large page, and adds that mapping to the first page table cache. After the TLB miss processing is performed, step 705 is continued.
  • In the above flow, while retaining the large page memory of the PCM 51 and thus the high TLB hit rate, the MMU 20 can quickly find the page table entry corresponding to the target virtual address in the first page table cache and the second page table cache saved by the TLB 30, thereby quickly determining the target physical address and improving memory access efficiency.
  • In one embodiment of this application, the bitmap characterizing whether the data of the small pages of a physical large page has been migrated is stored in the first page table cache. In step 702 above, if the second page table cache misses but the first page table cache hits, the MMU 20 queries the bitmap to further determine whether the data of the small page within the physical large page corresponding to the target virtual address has been migrated; if so, it instructs the memory controller 40 to perform step 707, and otherwise instructs it to perform step 708.
  • Table 2 illustrates a first page table cache containing the bitmap. From Table 2 it can be determined that virtual large page b corresponds to physical large page B, and that the first, second, and fourth small pages of physical large page B have not been migrated, while the third small page has been migrated.
  • In addition, the technical solution provided by the embodiments of the present invention may be combined with a cache technology: after the MMU 20 determines the physical large page address or physical small page address corresponding to the target virtual address, the computer system may first look up the data corresponding to that address in the cache, and only after a cache miss does the memory controller perform the memory access according to the physical large page address or physical small page address.
  • Optionally, in the embodiments of this application, when the data of a small page of a physical large page of the PCM 51 needs to be migrated to the DRAM 52 and the DRAM 52 has no free storage space, the memory controller 40 migrates one or more physical small pages in the DRAM 52 back into the PCM 51 according to a preset page replacement algorithm.
  • The preset page replacement algorithm may be implemented in multiple ways, including but not limited to the following algorithms:
  • (1) the first-in first-out algorithm, that is, the data migrated into the DRAM 52 earliest is migrated back to the PCM 51;
  • (2) the not recently used (NRU) algorithm, that is, the data in the DRAM 52 that has not been accessed for the longest time is migrated back to the PCM 51;
  • (3) the least recently used (LRU) algorithm, that is, the data in the DRAM 52 with the lowest recent access frequency is migrated back to the PCM 51;
  • (4) the optimal replacement algorithm, that is, the data in the DRAM 52 that will no longer be accessed is migrated back to the PCM 51, or the data in the DRAM 52 that will not be accessed for the longest time is migrated back to the PCM 51.
  • The preset page replacement algorithm may also include the clock algorithm, the second chance algorithm, and the like; refer to the prior art, as the embodiments of this application do not describe them in detail here.
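For example, the LRU policy in (3) can be sketched with an ordered dictionary. The class is illustrative; eviction simply returns the page that would be migrated back to the PCM 51.

```python
from collections import OrderedDict

class LruDram:
    """DRAM small pages kept in access order; the least recently
    used page is evicted (migrated back to the PCM) when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # DRAM small page -> data, oldest first

    def touch(self, page):
        self.pages.move_to_end(page)  # mark as most recently used

    def insert(self, page, data):
        evicted = None
        if len(self.pages) >= self.capacity:
            evicted = self.pages.popitem(last=False)  # LRU page goes back to PCM
        self.pages[page] = data
        return evicted
```

`touch` is called on every access that hits the DRAM, so the front of the ordered dictionary is always the least recently used candidate.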
  • In the above technical solution, when the DRAM 52 has no free space for data migrated from the PCM 51, data already stored in the DRAM 52 can be migrated back to the PCM 51 according to various preset page replacement algorithms, so that the DRAM 52 can always accommodate the data of recently write-hot small pages, improving the utilization of the storage space of the DRAM 52.
  • Optionally, if the computer system is provided with a bitmap indicating whether the data of the small pages of the physical large pages of the PCM 51 has been migrated, then after data in the DRAM 52 that was migrated from a small page of the PCM 51 is migrated back to that small page, the identifier characterizing that small page's data as migrated is deleted from the bitmap.
  • an embodiment of the present application provides a computer system including: a processor 10 , an MMU 20 , a memory controller 40 , and a hybrid memory 50 .
  • the processor 10 can communicate with the MMU 20, the memory controller 40, and the hybrid memory 50 via a bus.
  • The hybrid memory 50 includes a first memory and a second memory, the first memory being a non-volatile memory, such as the PCM 51 in FIG. 1, and the second memory being a volatile memory, such as the DRAM 52 in FIG. 1.
  • The MMU 20 is used to:
  • receive a first access request sent by the processor 10, where the access request carries a first virtual address;
  • convert the first virtual address into a first physical address according to the first page table cache, where the first physical address is the physical address of a first large page in the first memory, the first page table cache is used to record the mapping relationship between virtual addresses and the physical addresses of the large pages of the first memory, and a large page of the first memory includes multiple small pages.
  • The memory controller 40 is configured to access the first memory according to the first physical address and, in the process of accessing the first memory according to the first physical address, when determining that the data of a first small page in the first large page has been migrated into the second memory, to access the second memory according to a second physical address stored in the first small page, where the second physical address is the physical address of a second small page in the second memory, the second small page stores the data migrated from the first small page, the second memory includes multiple small pages, and the size of a small page in the second memory is smaller than the size of a large page in the first memory.
  • As an optional manner, the memory controller 40 is further used to: when the number of accesses to the first small page exceeds the set threshold, migrate the data in the first small page to the second small page; and store the second physical address of the second small page in the first small page.
  • As an optional manner, the memory controller 40 is further used to: after the data of the first small page is migrated into the second small page, set a first identifier in the configured bitmap, where the first identifier is used to indicate that the data in the first small page has been migrated.
  • As an optional manner, the computer system further includes a second page table cache, where the second page table cache is used to record the mapping relationship between virtual addresses and the physical addresses of the small pages of the second memory.
  • The processor 10 is further used to: after the data in the first small page is migrated into the second small page, add the mapping relationship between a second virtual address and the second physical address to the second page table cache.
  • The MMU 20 is further used to: receive a second access request sent by the processor, where the second access request contains the second virtual address; and obtain, according to the second page table cache, the second physical address having a mapping relationship with the second virtual address.
  • The memory controller 40 is further configured to: access the second memory according to the second physical address.
  • the computer system further includes a TLB 30 for storing the first page table cache.
  • the TLB 30 is also used to store a second page table cache.
  • The processor 10 described above may be one processor element or a collective name for multiple processor elements. For example, the processor may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention, for example one or more microprocessors (digital signal processors, DSPs) or one or more field-programmable gate arrays (FPGAs).
  • The MMU 20, the TLB 30, and the memory controller 40 described above may each be integrated with the processor 10 or be independent of the processor 10. The MMU 20 and the TLB 30 may be integrated or may be two independent devices. The TLB 30 may be one TLB device or two TLB devices; in the latter case, the two TLB devices are used to store the first page table cache and the second page table cache respectively.
  • the implementation of the hybrid memory 50 described above has been described in the foregoing description of FIG. 1, and will not be repeated here.
  • The embodiments of this application further provide a computer-readable storage medium for storing the computer software instructions required by the processor 10 described above, which contains a program to be executed by the processor 10.
  • FIG. 6 is a schematic diagram of a memory access device according to an embodiment of the present disclosure, where the memory access device is applied to perform memory access in a computer system.
  • the computer system includes a hybrid memory including a first memory and a second memory, wherein the first memory is a non-volatile memory and the second memory is a volatile memory.
  • Memory access devices include:
  • The receiving module 801 is configured to receive a first access request, where the access request carries a first virtual address.
  • The conversion module 802 is configured to convert the first virtual address into a first physical address according to the first page table cache in the computer system, where the first physical address is the physical address of a first large page in the first memory, and the first large page contains multiple small pages.
  • The access module 803 is configured to: in the process of accessing the first memory according to the first physical address, when determining that the data of a first small page in the first large page has been migrated into the second memory, access the second memory according to a second physical address stored in the first small page, where the second physical address is the physical address of a second small page in the second memory, the second small page stores the data migrated from the first small page, the second memory includes multiple small pages, and the size of a small page in the second memory is smaller than the size of a large page in the first memory.
  • the memory access device further includes:
  • The migration module 804 is configured to: when the number of accesses to the first small page exceeds the set threshold, migrate the data in the first small page to the second small page, and store the second physical address of the second small page in the first small page.
  • the memory access device further includes:
  • the identifier module 805 is configured to set a first identifier in the set bitmap after the data of the first small page is migrated into the second small page, where the first identifier is used to indicate that the data in the first small page has been migrated .
  • In an optional embodiment, the computer system further includes a second page table cache, and the memory access device further includes:
  • The mapping module 806 is configured to: after the data in the first small page is migrated into the second small page, add the mapping relationship between a second virtual address and the second physical address to the second page table cache, where the second page table cache is used to record the mapping relationship between virtual addresses and the physical addresses of the small pages of the second memory.
  • The receiving module 801 is further configured to: receive a second access request, where the second access request contains a second virtual address, and obtain, according to the second page table cache, the second physical address having a mapping relationship with the second virtual address.
  • the access module 803 is further configured to: access the second memory according to the second physical address.
  • For the implementation of each module of the above memory access device, reference may be made to the implementation of each step in the memory access method described in FIG. 2 to FIG. 5.
  • the embodiment of the invention further provides a computer program product for data processing, comprising a computer readable storage medium storing program code, the program code comprising instructions for executing the method flow described in any one of the foregoing method embodiments.
  • A person skilled in the art can understand that the foregoing storage medium includes: a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a random access memory (RAM), a solid state disk (SSD), a non-volatile memory, or any other non-transitory machine-readable medium that can store program code.


Abstract

The embodiments of this application provide a memory access method and a computer system. The memory access method is applied in a computer system containing a hybrid memory, where the hybrid memory contains a first memory and a second memory. The method includes: receiving a first access request carrying a first virtual address; converting the first virtual address into a first physical address according to a first page table cache in the computer system, where the first physical address is the physical address of a first large page in the first memory, and the first large page contains multiple small pages; and, when it is determined that the data of a first small page in the first large page has been migrated into the second memory, accessing the second memory according to a second physical address stored in the first small page, where the second physical address is the physical address of a second small page in the second memory, the second small page stores the data migrated from the first small page, and the second memory includes multiple small pages.

Description

Memory Access Method and Computer System
Technical Field
This application relates to the field of computer technology, and in particular to a memory access method and a computer system.
Background
Memory is usually implemented with dynamic random access memory (DRAM), but DRAM suffers from low storage density and small storage capacity. Therefore, non-volatile memory (NVM) can be introduced on top of DRAM to form a hybrid memory and expand the memory capacity. Because the read/write speed of NVM is slower than that of DRAM and its write endurance is also shorter, storage blocks in the NVM with frequent read/write operations are usually migrated to the DRAM in order to improve memory access speed and the service life of the hybrid memory.
A computer system performs the translation between virtual memory and physical memory through a translation lookaside buffer (TLB). To increase the probability of a TLB hit and improve the efficiency of address translation, memory physical pages are usually configured as large pages, for example 2 MB. When hybrid memory is combined with physical large pages, a physical large page of the NVM needs to be replaced by multiple physical small pages, and the physical small pages with frequent read/write operations are migrated to the DRAM. However, the granularity of memory addressing in the computer system then changes from physical large pages to physical small pages, the probability of hitting a virtual-to-physical address mapping in the TLB decreases, and address translation performance degrades.
Summary
The embodiments of this application provide a memory access method and a computer system that can guarantee the memory hit rate when part of the data in a large page has been migrated.
In a first aspect, the embodiments of this application provide a memory access method applied in a computer system containing a hybrid memory, where the hybrid memory contains a first memory and a second memory, and the first memory is a non-volatile memory. The memory access method includes the following steps. First, a memory management unit (MMU) receives a first access request, where the access request carries a first virtual address. Then, the MMU converts the first virtual address into a first physical address according to a first page table cache in the computer system, where the first physical address is the physical address of a first large page in the first memory, and the first large page contains multiple small pages. Then, in the process of the memory controller accessing the first memory according to the first physical address, when it is determined that the data of a first small page in the first large page has been migrated into the second memory, the second memory is accessed according to a second physical address stored in the first small page, where the second physical address is the physical address of a second small page in the second memory, the second small page stores the data migrated from the first small page, the second memory includes multiple small pages, and the size of a small page in the second memory is smaller than the size of a large page in the first memory.
In the technical solution provided by this embodiment, in order to guarantee a high TLB hit rate, the memory pages in the page table of the computer system are still configured in the form of large pages. Moreover, in the computer system provided by the embodiments of the present invention, multiple small pages are configured within a large page. When part of the data in a large page needs to be migrated, a small page within the physical large page can be migrated individually. During an access, when the memory controller accesses the non-volatile memory according to the first physical address of the first large page and determines that the data of the first small page in the first large page has been migrated to the second memory (i.e., the volatile memory), the memory controller can access the migrated data according to the physical address of the second small page stored in the first small page. Therefore, according to the technical solution provided by this embodiment, even when a small page within a large page has been migrated, the access can still proceed via the large page, which both preserves the excellent address translation performance of large page memory and satisfies the need for hot data migration in hybrid memory, thereby guaranteeing the memory hit rate when part of the data in a large page has been migrated.
In an optional embodiment, the computer system monitors the number of accesses to each small page of a physical large page and, when the number of accesses to any small page exceeds the set threshold, migrates the data of that small page to a physical small page of the DRAM 52 and writes the address of that physical small page of the DRAM 52 into the small page from which the data was migrated. Because the address of the physical small page of the second memory is written into the migrated small page, the computer system can continue to locate that small page according to the mapping between the physical large page and the virtual page, read the address of the physical small page of the second memory from that small page, and thereby access the data migrated to the second memory.
In an optional embodiment, the computer system maintains a bitmap that stores information indicating whether each small page of the first memory has been migrated; for each small page whose data has been migrated, an identifier indicating that the data in that small page has been migrated is set in the bitmap. After the data of the first small page is migrated into the second small page, a first identifier is set in the configured bitmap, where the first identifier is used to indicate that the data in the first small page has been migrated.
In an optional embodiment, the computer system further includes a second page table cache. After the data in the first small page is migrated into the second small page, the mapping relationship between a second virtual address and the second physical address is further added to the second page table cache, where the second page table cache is used to record the mapping relationship between virtual addresses and the physical addresses of the small pages of the second memory. Then, when accessing the migrated data, the MMU 20 can quickly determine from this mapping in the second page table cache that the physical memory address storing the data is a physical small page in the second memory, and access the target data according to the address of that physical small page, which shortens the time consumed by memory access and improves its efficiency.
In an optional embodiment, the process by which the computer system accesses data migrated into the second memory is as follows: the MMU 20 receives a second access request, where the second access request contains a second virtual address; the MMU 20 obtains, according to the second page table cache, the second physical address having a mapping relationship with the second virtual address; then the MMU 20 sends the second physical address to the memory controller, and the memory controller accesses the second memory according to the second physical address.
In a second aspect, the embodiments of this application provide a computer system including a processor, a memory management unit MMU, a memory controller, and a hybrid memory, where the hybrid memory contains a first memory and a second memory, the first memory is a non-volatile memory, and the second memory is a volatile memory. The MMU is used to: receive a first access request sent by the processor, where the access request carries a first virtual address; and convert the first virtual address into a first physical address according to a first page table cache, where the first physical address is the physical address of a first large page in the first memory, the first page table cache is used to record the mapping relationship between virtual addresses and the physical addresses of the large pages of the first memory, and a large page of the first memory includes multiple small pages. The memory controller is used to access the first memory according to the first physical address and, in the process of accessing the first memory according to the first physical address, when determining that the data of a first small page in the first large page has been migrated into the second memory, to access the second memory according to a second physical address stored in the first small page, where the second physical address is the physical address of a second small page in the second memory, the second small page stores the data migrated from the first small page, the second memory includes multiple small pages, and the size of a small page in the second memory is smaller than the size of a large page in the first memory.
In an optional embodiment, the memory controller is further used to: when the number of accesses to the first small page exceeds the set threshold, migrate the data in the first small page to the second small page, and store the second physical address of the second small page in the first small page.
In an optional embodiment, the memory controller is further used to: after the data of the first small page is migrated into the second small page, set a first identifier in the configured bitmap, where the first identifier is used to indicate that the data in the first small page has been migrated.
In an optional embodiment, the computer system further includes a second page table cache, where the second page table cache is used to record the mapping relationship between virtual addresses and the physical addresses of the small pages of the second memory.
The processor is further used to: after the data in the first small page is migrated into the second small page, add the mapping relationship between a second virtual address and the second physical address to the second page table cache.
In an optional embodiment, the MMU is further used to: receive a second access request sent by the processor, where the second access request contains a second virtual address; and obtain, according to the second page table cache, the second physical address having a mapping relationship with the second virtual address. The memory controller is further used to: access the second memory according to the second physical address.
In a third aspect, the embodiments of this application provide a memory access device applied to performing memory access in a computer system. The computer system includes a hybrid memory containing a first memory and a second memory, where the first memory is a non-volatile memory and the second memory is a volatile memory. The memory access device includes:
a receiving module, configured to receive a first access request, where the access request carries a first virtual address;
a conversion module, configured to convert the first virtual address into a first physical address according to a first page table cache in the computer system, where the first physical address is the physical address of a first large page in the first memory, and the first large page contains multiple small pages;
an access module, configured to: in the process of accessing the first memory according to the first physical address, when determining that the data of a first small page in the first large page has been migrated into the second memory, access the second memory according to a second physical address stored in the first small page, where the second physical address is the physical address of a second small page in the second memory, the second small page stores the data migrated from the first small page, the second memory includes multiple small pages, and the size of a small page in the second memory is smaller than the size of a large page in the first memory.
As an optional manner, the memory access device further includes: a migration module, configured to: when the number of accesses to the first small page exceeds the set threshold, migrate the data in the first small page to the second small page, and store the second physical address of the second small page in the first small page.
In an optional embodiment, the memory access device further includes:
an identification module, configured to: after the data of the first small page is migrated into the second small page, set a first identifier in the configured bitmap, where the first identifier is used to indicate that the data in the first small page has been migrated.
In an optional embodiment, the computer system further includes a second page table cache, and the memory access device further includes:
a mapping module, configured to: after the data in the first small page is migrated into the second small page, add the mapping relationship between a second virtual address and the second physical address to the second page table cache, where the second page table cache is used to record the mapping relationship between virtual addresses and the physical addresses of the small pages of the second memory.
As an optional manner, the receiving module is further configured to: receive a second access request, where the second access request contains a second virtual address, and obtain, according to the second page table cache, the second physical address having a mapping relationship with the second virtual address;
the access module is further configured to: access the second memory according to the second physical address.
In a fourth aspect, this application further provides a computer program product including program code, where the instructions contained in the program code are executed by a computer to implement the method described in the first aspect and any possible implementation of the first aspect.
In a fifth aspect, this application further provides a computer-readable storage medium for storing program code, where the instructions contained in the program code are executed by a computer to implement the method described in the first aspect and any possible implementation of the first aspect.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of a computer system according to an embodiment of this application;
FIG. 2 to FIG. 5 are schematic flowcharts of memory access methods according to embodiments of this application;
FIG. 6 is a schematic diagram of a memory access device according to an embodiment of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, this application is further described in detail below with reference to the accompanying drawings.
This application provides a memory access method and a computer system to solve the technical problem that hybrid memory and physical large page technology are difficult to combine in application. The memory access method and the computer system are based on the same inventive concept; since the principles by which they solve the problem are similar, the implementations of the computer system and of the method may refer to each other, and repeated descriptions are omitted.
"Data" in the embodiments of this application is data in a broad sense: it may be the instruction code of an application program or the data that an application program uses while running. "Multiple" in the embodiments of this application means two or more. In addition, it should be understood that in the description of this application, terms such as "first" and "second" are used only to distinguish the descriptions and cannot be understood as indicating or implying relative importance or an order.
The computer system in the embodiments of this application may take many forms, such as a personal computer, a server, a tablet computer, or a smartphone. FIG. 1 shows a possible architecture of the computer system in the embodiments of this application: the computer system includes a processor 10, a memory management unit (MMU) 20, a TLB 30, a memory controller 40, and a hybrid memory 50. Optionally, the computer system further includes auxiliary storage for expanding the data storage capacity of the computer system.
The processor 10 is the computing and control center of the computer system. The MMU 20 implements the translation between memory virtual addresses and memory physical addresses so that the processor 10 can access the hybrid memory 50 according to a memory virtual address. The TLB 30 stores mappings between virtual addresses and memory physical addresses; specifically, a mapping may be between a physical page number and a virtual page number of memory, so as to improve the efficiency of address translation by the MMU. The memory controller 40 receives memory physical addresses from the MMU 20 and accesses the hybrid memory 50 according to them. The hybrid memory 50 includes a first memory and a second memory: the first memory is a non-volatile memory (NVM), such as phase change memory (PCM), ferroelectric random access memory (FeRAM), or magnetic random access memory (MRAM), and the second memory is a volatile memory, such as DRAM.
In the following, the technical solutions of the embodiments of this application are introduced taking the first memory as the PCM 51 and the second memory as the DRAM 52 as an example.
In a paged memory management mechanism, the virtual address space of an application program is divided into multiple fixed-size virtual pages, and physical memory is divided into physical pages of the same size. When an application program is loaded, the data of any page can be placed into any physical page, and these physical pages need not be contiguous. The mapping between physical page numbers and virtual page numbers is recorded in a page table, and the page table is recorded in memory. When an application program reads or writes the memory physical address corresponding to a virtual address, the virtual page number of the virtual address and the offset within that virtual page are first determined, the page table is looked up to determine the physical page corresponding to the virtual page, and then the offset position within that physical page is accessed, which is the memory physical address the application program wants to access. If every virtual-to-physical page translation required accessing the page table in memory, it would take a great deal of time. Therefore, a TLB is provided in the computer system as a high-level cache for address translation; the TLB stores some frequently used page table entries and is a subset of the page table. In this way, when the computer system performs memory addressing, it can first look up a matching TLB page table entry for address translation; if no page table entry for the target virtual address is found in the TLB, i.e., a TLB miss occurs, the corresponding entry is looked up in the page table in memory. To reduce the TLB miss probability and improve address translation efficiency, physical pages are usually configured as large pages; for example, the physical page size is set to 2 MB.
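The lookup order described above (TLB first, then the in-memory page table on a miss) can be sketched as follows; treating both as dictionaries is of course a simplification, and the names are illustrative:

```python
def translate(vpn, tlb, page_table):
    """Return the physical page number for a virtual page number,
    consulting the TLB before the in-memory page table."""
    if vpn in tlb:
        return tlb[vpn]    # TLB hit: no page-table walk needed
    ppn = page_table[vpn]  # TLB miss: read the page table in memory
    tlb[vpn] = ppn         # refill the TLB for subsequent accesses
    return ppn
```

The refill step is what makes large pages attractive: with 2 MB pages, one TLB entry covers 512 times more address space than with 4 KB pages, so misses are far rarer.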
In the embodiments of this application, the page table may be stored in the PCM 51 or in the DRAM 52, or one part of the page table may be stored in the PCM 51 and another part in the DRAM 52. Because the cost per unit of storage space of the PCM 51 is lower, the storage space of the PCM 51 is usually larger than that of the DRAM 52, and this larger storage space makes the PCM 51 well suited to large page memory technology; that is, the physical pages of the PCM can be configured to be large, for example 2 megabytes (MB). For ease of distinction, the embodiments of this application refer to a physical page of the PCM 51 as a physical large page and a physical page of the DRAM 52 as a physical small page; a physical large page of the PCM 51 is larger than a physical small page of the DRAM 52.
In the embodiments of this application, the page table that stores the mappings between the physical large pages of the PCM 51 and the virtual pages in the virtual address space is called the first page table. The TLB 30 may store a first page table cache, which consists of some of the page table entries of the first page table. According to the first page table cache, the MMU 20 can quickly convert the virtual address in a memory access request into a memory physical address of the PCM 51, and the memory controller 40 then accesses the PCM 51 according to that memory physical address.
With reference to FIG. 2, the memory access method provided by the embodiments of this application is introduced below; the method includes the following steps:
Step 601: The processor 10 sends a memory access request to the MMU 20, where the memory access request carries a target virtual address.
Step 602: The MMU 20 determines the memory physical address corresponding to the target virtual address and sends that memory physical address to the memory controller.
The process by which the MMU 20 determines the memory physical address corresponding to the target virtual address is as follows. First, the MMU 20 calculates the virtual page number from the target virtual address. For example, for the 32-bit virtual address VA: 0100 1001 0110 1010 0011 1111 0001 1011, when the physical pages of the PCM 51 are 2 MB, the virtual pages are also 2 MB; shifting VA right by the page shift of 21 bits yields the virtual page number, vpn = VA >> 21, and the offset of the target virtual address within that virtual page is offset = VA & ((1 << 21) - 1), i.e., the low 21 bits of the virtual address. Then, the MMU 20 queries the page table cache in the TLB 30 with the virtual page number to determine the physical large page corresponding to the virtual page; the memory physical address corresponding to the target virtual address is the position at that offset within the physical large page.
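The computation above, applied to the example virtual address, can be verified directly (Python's `>>` and `&` match the shift and mask described in the text):

```python
PAGE_SHIFT = 21  # 2 MB pages, so the page number is the bits above bit 20

VA = 0b0100_1001_0110_1010_0011_1111_0001_1011  # the 32-bit example above

vpn = VA >> PAGE_SHIFT                 # virtual page number: the top 11 bits
offset = VA & ((1 << PAGE_SHIFT) - 1)  # offset in the page: the low 21 bits
```

Recombining the two values reproduces the original address, which is exactly why page number and offset can be translated independently.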
Step 603: The memory controller 40 accesses the PCM 51 according to the memory physical address; when determining that the data of the small page in the accessed physical large page has been migrated to the DRAM 52, it reads the address of the physical small page of the DRAM 52 from that small page and accesses the DRAM 52 according to that address.
In the embodiments of this application, a physical large page of the PCM 51 includes multiple small pages, and the data of any small page of the physical large page can be migrated to the DRAM 52 individually, without migrating the entire physical large page to the DRAM 52. After the data of any small page of a physical large page of the PCM 51 is migrated to a physical small page of the DRAM 52, the address of the physical small page in the DRAM 52 that holds its data is written into the migrated small page of the PCM 51. In this way, the MMU 20 can still access the data in the PCM 51 according to the physical large page number of the PCM 51; when a small page whose data has been migrated is accessed, the address of the physical small page of the DRAM 52 stored in that small page can be read, and the access jumps to the DRAM 52.
Therefore, in the technical solution provided by the embodiments of this application, the memory pages in the page table of the computer system are still configured in the form of large pages, guaranteeing a high TLB hit rate. Moreover, in the computer system provided by the embodiments of the present invention, multiple small pages are configured within a large page. When part of the data in a large page needs to be migrated, a small page within the physical large page can be migrated individually. During an access, when the memory controller accesses the non-volatile memory according to the first physical address of the first large page and determines that the data of the first small page in the first large page has been migrated to the second memory (i.e., the volatile memory), the memory controller can access the migrated data according to the physical address of the second small page stored in the first small page. Therefore, even when a small page within a large page has been migrated, the access can still proceed via the large page, which both preserves the excellent address translation performance of large page memory and satisfies the need for hot data migration in hybrid memory, thereby guaranteeing the memory hit rate when part of the data in a large page has been migrated.
Optionally, referring to FIG. 3, the memory access method provided by the embodiments of the present invention further includes the following steps:
Step 604: The memory controller 40 records the number of accesses to each small page of the physical large pages of the PCM 51.
In some embodiments, step 604 may also be implemented by the processor 10 running the operating system.
Step 605: When the number of accesses to a small page of a physical large page of the PCM 51 exceeds the set threshold, the memory controller 40 migrates the data of that small page to a physical small page of the DRAM 52 and writes the address of that physical small page of the DRAM 52 into the small page from which the data was migrated.
The number of accesses in "the number of accesses to the small page exceeds the set threshold" may be the total number of historical accesses to the small page, or the number of accesses to the small page within a recent preset time period. When the number of accesses to a small page exceeds the set threshold, the small page is a hot data block: its data can be migrated to a physical small page of the DRAM 52, and the address of that physical small page is written into the small page from which the data was migrated, so that the computer system can access the data migrated to the DRAM 52 according to the flow of steps 601 through 603.
In the embodiments of this application, the size of each small page of a physical large page of the PCM 51 may equal the size of a physical small page of the DRAM 52; in this case one physical small page stores the data migrated from one small page of the physical large page. In some embodiments, the size of each small page of a physical large page of the PCM 51 may also be larger than the size of a physical small page of the DRAM 52; in this case multiple physical small pages store the data migrated from one small page of the physical large page, and the address of the first of these physical small pages is written into the small page from which the data was migrated.
In the above technical solution, small pages within a physical large page are migrated individually instead of migrating the entire physical large page, which reduces the time consumed by data migration and the consumption of input/output (IO) resources.
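Steps 604 and 605 can be sketched as follows. The threshold value and every name here are illustrative assumptions, since the text only speaks of "a set threshold":

```python
THRESHOLD = 8  # illustrative value; the text does not fix one

def record_access(counts, pcm, dram, bitmap, key, free_dram):
    """Count one access to the small page `key` = (large page, small index);
    past the threshold, migrate its data to a free DRAM small page and
    leave that DRAM address behind in the PCM small page."""
    counts[key] = counts.get(key, 0) + 1
    if counts[key] > THRESHOLD and not bitmap.get(key):
        dram_page = free_dram.pop()  # pick a free DRAM small page
        dram[dram_page] = pcm[key]   # copy the hot data into the DRAM
        pcm[key] = dram_page         # the PCM small page now stores an address
        bitmap[key] = True           # mark the small page as migrated
```

Only the one hot small page moves; the rest of the 2 MB large page stays in the PCM, which is the point of the per-small-page migration described above.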
Optionally, in the embodiments of this application, in step 603 the memory controller 40 may determine that the data of a small page in the accessed physical large page has been migrated to the DRAM 52 in multiple ways:
First, when the memory controller accesses the small page, it determines that the content stored in the small page is not data but a memory physical address.
Second, the computer system maintains a bitmap that stores information indicating whether each small page of the PCM 51 has been migrated; for each small page whose data has been migrated, an identifier indicating that the data in that small page has been migrated is set in the bitmap. Table 1 shows a possible implementation of the bitmap: a migration flag of 0 indicates not migrated and 1 indicates migrated, so Table 1 shows that the first, second, and fourth small pages of physical large page B have not been migrated, while the third small page has been migrated. By querying the bitmap, the memory controller can determine whether any small page of the PCM 51 has been migrated.
Physical large page number | Migration flag sequence
B | 0010…
Table 1
In the embodiments of this application, the bitmap may be stored in storage space inside the memory controller 40, or in a storage device outside the memory controller 40, such as various cache devices.
In the above technical solution, by setting in the bitmap an identifier for each small page whose data has been migrated, the memory controller 40 can quickly read the address from the small page and access the DRAM 52, improving the efficiency of memory access.
Optionally, in the embodiments of this application, in addition to the first page table cache, the TLB 30 stores a second page table cache whose page table entries record the mapping between virtual small page numbers in the virtual address space and physical small pages of the DRAM 52. A virtual small page is a virtual page formed by dividing the virtual address space according to the size of a physical small page of the DRAM 52. To distinguish it from the virtual pages in the first page table cache, this application refers to a virtual page formed by dividing the virtual address space according to the physical large pages of the PCM 51 as a virtual large page, and to a virtual page formed by dividing the virtual address space according to the physical small pages of the DRAM 52 as a virtual small page.
In the embodiments of this application, after the data of a small page of a physical large page is migrated to a physical small page, the computer system adds the mapping between that physical small page and the corresponding virtual small page to the second page table cache in the TLB 30. When the processor 10 then accesses the migrated data, the MMU 20 can quickly determine from this mapping that the physical memory address storing the data is a physical small page in the DRAM 52 and access the target data according to the address of that physical small page, without performing the redirected access described in steps 602 to 603, which further shortens the time consumed by memory access and improves its efficiency.
The first page table cache and the second page table cache may be stored in the same TLB physical entity, or the computer system may include two TLBs that store the first page table cache and the second page table cache respectively. The mapping between a physical small page and a virtual small page may be added to the second page table cache before or after the migrated data is first accessed. In addition, adding this mapping to the second page table cache may be implemented by the processor running operating system instructions.
Referring to FIG. 4, in combination with the above optional embodiment in which the TLB 30 stores the second page table cache, the memory access method further includes the following steps:
Step 606: The processor 10 sends a memory access request to the MMU 20, where the memory access request carries a target virtual address.
Step 607: The MMU 20 hits the page table entry for the target virtual address in the second page table cache and determines the address of the physical small page of the DRAM 52 that has a mapping relationship with the target virtual address.
Step 608: The MMU 20 sends the determined address of the physical small page of the DRAM 52 to the memory controller.
Step 609: The memory controller 40 accesses the DRAM 52 according to the address of the physical small page of the DRAM 52.
In the above technical solution, after a small page of the PCM 51 has been migrated to a physical small page of the DRAM 52, the computer system can quickly determine from the second page table cache that the physical memory address storing the data is a physical small page in the DRAM 52, and access the target data according to the address of that physical small page, improving the efficiency of memory access.
Optionally, in the embodiments of this application, the PCM 51 and the DRAM 52 are addressed within a unified address space, for example with the DRAM 52 at the low addresses and the PCM 51 at the high addresses, managed uniformly by the operating system. The hybrid memory composed of the PCM 51 and the DRAM 52 is connected to the processor 10 through a system bus, and data read/write accesses are performed through the memory controller 40. The hybrid memory and the auxiliary storage are connected through an input/output (I/O) interface for data exchange. When a process requests memory from the operating system, only PCM 51 memory is allocated; the DRAM 52 is used to store the data of write-hot storage blocks migrated from the PCM 51 and is not directly allocated to processes.
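The unified address space can be sketched as a simple boundary check; the DRAM size below is an assumed example value, not one given in the text:

```python
DRAM_SIZE = 1 << 30  # assume 1 GB of DRAM occupying the low addresses

def backing_medium(physical_address):
    """Low addresses belong to the DRAM 52, high addresses to the PCM 51."""
    return "DRAM" if physical_address < DRAM_SIZE else "PCM"
```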
Referring to FIG. 5, a flow of a memory access method applying the method provided by the embodiments of this application is introduced below; it includes:
Step 701: The processor 10 sends a memory access request to the MMU 20, where the memory access request carries a target virtual address. Go to step 702.
Step 702: The MMU 20 queries the page table entries for the target virtual address in the first page table cache and the second page table cache stored by the TLB. If there is a hit in the second page table cache, go to step 703; if the second page table cache misses but the first page table cache hits, perform step 705; if both the first page table cache and the second page table cache miss, perform TLB miss processing.
After receiving the memory access request, the MMU 20 first calculates the virtual large page number and the virtual small page number from the target virtual address; these two concepts are explained above. For example, assuming 2 MB physical large pages in the PCM 51 and 4 KB physical small pages in the DRAM 52, for the virtual address VA: 0100 1001 0110 1010 0011 1111 0001 1011, the virtual large page number is big_vpn = VA >> 21 (the virtual address shifted right by 21 bits) and the virtual small page number is small_vpn = VA >> 12 (shifted right by 12 bits).
Then, the MMU 20 queries the mapping of the virtual large page number in the first page table cache and the mapping of the virtual small page number in the second page table cache. One query order is: the MMU 20 first looks up the virtual small page number in the second page table cache, and looks up the virtual large page number in the first page table cache only after the second page table cache misses. Another query order is: the MMU 20 queries both caches simultaneously; if the second page table cache hits, it stops looking in the first page table cache, whereas if the first page table cache hits, it still needs to continue looking in the second page table cache.
Step 703: The MMU 20 determines, according to the second page table cache, the address of the physical small page of the DRAM 52 corresponding to the virtual small page and sends that address to the memory controller. Go to step 704.
Step 704: The memory controller 40 accesses the DRAM 52 according to the address of the physical small page of the DRAM 52.
Step 705: The MMU 20 determines, according to the first page table cache, the address of the physical large page of the PCM 51 corresponding to the virtual large page and sends that address to the memory controller. Go to step 706.
Step 706: The memory controller 40 determines, according to the bitmap, whether the data of the small page within the physical large page corresponding to the target virtual address has been migrated. If so, step 707 is performed; otherwise, step 708 is performed.
The small page within the physical large page corresponding to the target virtual address is determined as follows: the page number of the physical large page has already been determined in step 705, and the PCM 51 physical address corresponding to the target virtual address is the large page offset position within that physical large page. Continuing the 2 MB physical large page example of step 702, the large page offset is big_offset = VA & ((1 << 21) - 1), i.e., the low 21 bits of the virtual address; from the physical large page number and the large page offset, the small page within the physical large page corresponding to the target virtual address can be located.
Step 707: The memory controller reads the address of the physical small page of the DRAM 52 from that small page and accesses the DRAM 52 according to that address.
Step 708: The memory controller accesses the PCM 51 according to the address of the physical large page of the PCM 51. Go to step 709.
Step 709: The memory controller increments the access count of the accessed small page of the physical large page by one and determines whether that count exceeds the set threshold. If it does, step 710 is performed.
Step 710: The memory controller migrates the data of the small page whose access count exceeds the set threshold to a physical small page of the DRAM 52 and writes the address of that physical small page of the DRAM 52 into the small page from which the data was migrated. Go to step 711.
Step 711: The processor 10 adds the mapping between the physical small page of the DRAM 52 and the virtual small page to the second page table cache.
The TLB miss processing is: the MMU 20 queries the first page table in memory, finds the mapping of the virtual large page, and adds that mapping to the first page table cache. After the TLB miss processing is performed, step 705 is continued.
In the above flow, while retaining the large page memory of the PCM 51 and thus the high TLB hit rate, the MMU 20 can quickly find the page table entry corresponding to the target virtual address in the first page table cache and the second page table cache saved by the TLB 30, thereby quickly determining the target physical address and improving memory access efficiency.
In one embodiment of this application, the bitmap characterizing whether the data of the small pages of a physical large page has been migrated is stored in the first page table cache. In step 702 above, if the second page table cache misses but the first page table cache hits, the MMU 20 queries the bitmap to further determine whether the data of the small page within the physical large page corresponding to the target virtual address has been migrated; if so, it instructs the memory controller 40 to perform step 707, and otherwise instructs it to perform step 708. Table 2 illustrates a first page table cache containing the bitmap. From Table 2 it can be determined that virtual large page b corresponds to physical large page B, and that the first, second, and fourth small pages of physical large page B have not been migrated, while the third small page has been migrated.
Virtual big page number  Physical big page number  Migration flag sequence
b  B  0010….
Table 2
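The migration flag sequence in Table 2 can be decoded as follows (a hypothetical helper, not part of the patent; the bit order follows the example, where "0010…" marks the third small page as migrated):

```python
def migrated_small_pages(flags: str) -> list[int]:
    """Return the 1-based positions of small pages whose data has been migrated,
    given a migration flag sequence such as "0010"."""
    return [i + 1 for i, bit in enumerate(flags) if bit == "1"]
```

Applied to the row in Table 2, only the third small page of physical big page B is reported as migrated.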
In addition, the technical solution provided by the embodiments of the present invention can be combined with cache technology. After the MMU20 determines the physical big page address or physical small page address corresponding to the target virtual address, the computer system may first look up, in the cache, the data corresponding to that physical big page address or physical small page address; only after a cache miss does the memory controller perform the memory access according to the physical big page address or physical small page address.
Optionally, in the embodiments of this application, when the data of a small page of a PCM51 physical big page needs to be migrated to DRAM52 and DRAM52 has no free storage space, the memory controller 40 migrates one or more physical small pages in DRAM52 back to PCM51 according to a preset page replacement algorithm. The preset page replacement algorithm can be implemented in multiple ways, including but not limited to the following algorithms:
(1) First-in first-out (FIFO) algorithm: the data migrated to DRAM52 earliest is migrated back to PCM51;
(2) Not recently used (NRU) algorithm: the data in DRAM52 that has not been accessed for the longest time is migrated back to PCM51;
(3) Least recently used (LRU) algorithm: the data in DRAM52 with the lowest recent access frequency is migrated back to PCM51;
(4) Optimal replacement algorithm: the data in DRAM52 that will never be accessed again is migrated back to PCM51, or the data in DRAM52 that will not be accessed for the longest time is migrated back to PCM51.
The preset page replacement algorithm may also include the clock algorithm, the second chance algorithm, and so on; refer to the prior art, which the embodiments of this application do not detail one by one here.
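As one example, the LRU policy in (3) might be sketched like this (illustrative only; the class name and eviction interface are assumptions, not the patent's implementation):

```python
from collections import OrderedDict

class DramLru:
    """Track DRAM small pages; when DRAM is full, the least recently used
    page is chosen to be migrated back to PCM (algorithm (3) above)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.pages = OrderedDict()  # DRAM small-page address -> data

    def touch(self, addr, data=None):
        """Record an access; return the evicted (addr, data) pair, if any."""
        if addr in self.pages:
            self.pages.move_to_end(addr)  # mark as most recently used
            return None
        victim = None
        if len(self.pages) >= self.capacity:
            # Evict the least recently used page back to PCM.
            victim = self.pages.popitem(last=False)
        self.pages[addr] = data
        return victim
```

With a capacity of two, accessing pages A, B, then A again, and finally a new page C would evict B, the least recently used of the two residents.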
In the above technical solution, when DRAM52 has no free space to store the data migrated from PCM51, data already stored in DRAM52 can be migrated back to PCM51 according to various preset page replacement algorithms, so that DRAM52 can always accommodate the data of recently write-hot small pages, improving the utilization of the storage space of DRAM52.
Optionally, in the embodiments of this application, if the computer system is provided with a bitmap indicating whether the data of the small pages of a PCM51 physical big page has been migrated, then after the data that was migrated from a PCM51 small page into DRAM52 is migrated back to that small page, the flag indicating that the data of that small page has been migrated is deleted from the bitmap.
Still referring to FIG. 1, an embodiment of this application provides a computer system, including: a processor 10, an MMU20, a memory controller 40, and a hybrid memory 50. The processor 10 can communicate with the MMU20, the memory controller 40, and the hybrid memory 50 through a bus. The hybrid memory 50 contains a first memory and a second memory; the first memory is a non-volatile memory, for example the PCM51 in FIG. 1, and the second memory is a volatile memory, for example the DRAM52 in FIG. 1.
The MMU20 is configured to:
receive a first access request sent by the processor 10, the access request carrying a first virtual address;
translate the first virtual address into a first physical address according to a first page table cache, the first physical address being the physical address of a first big page in the first memory, wherein the first page table cache is used to record mappings between virtual addresses and physical addresses of big pages of the first memory, and a big page of the first memory includes multiple small pages.
The memory controller 40 is configured to access the first memory according to the first physical address, and, in the process of accessing the first memory according to the first physical address, when it is determined that the data of a first small page in the first big page has been migrated to the second memory, access the second memory according to a second physical address stored in the first small page, wherein the second physical address is the physical address of a second small page in the second memory, the second small page stores the data migrated out of the first small page, the second memory includes multiple small pages, and the size of a small page in the second memory is smaller than the size of a big page in the first memory.
As an optional implementation, the memory controller 40 is further configured to:
when the access count of the first small page exceeds a set threshold, migrate the data in the first small page to the second small page;
store the second physical address of the second small page in the first small page.
As an optional implementation, the memory controller 40 is further configured to:
after migrating the data of the first small page to the second small page, set a first flag in a configured bitmap, the first flag being used to indicate that the data in the first small page has been migrated.
As an optional implementation, the computer system further includes a second page table cache, the second page table cache being used to record mappings between virtual addresses and physical addresses of small pages of the second memory;
the processor 10 is further configured to:
after the data in the first small page is migrated to the second small page, add a mapping between a second virtual address and the second physical address to the second page table cache.
As an optional implementation, the MMU20 is further configured to:
receive a second access request sent by the processor, the second access request containing the second virtual address;
obtain, according to the second page table cache, the second physical address mapped to the second virtual address;
the memory controller 40 is further configured to: access the second memory according to the second physical address.
As an optional implementation, the computer system further includes a TLB30 for storing the first page table cache. In some embodiments, the TLB30 is further used to store the second page table cache.
The above processor 10 may be one processor element or a collective name for multiple processor elements. For example, the processor may be a central processing unit (CPU), an application specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention, for example one or more digital signal processors (DSP), or one or more field programmable gate arrays (FPGA).
The above MMU20, TLB30, and memory controller 40 may each be integrated with the processor 10 or be independent of the processor 10. The MMU20 and the TLB30 may be integrated together or be two independent devices. In the embodiments where the TLB30 stores both the first page table cache and the second page table cache, the TLB30 may be one TLB device or two TLB devices; in the latter case the two TLB devices are used to store the first page table cache and the second page table cache, respectively. The implementation of the above hybrid memory 50 has been described earlier when introducing FIG. 1 and is not repeated here.
The actions performed and the roles played by the above computer system in the memory access process have been described in detail in the memory access methods of FIG. 2 to FIG. 5 and are not repeated here.
An embodiment of this application further provides a computer-readable storage medium for storing the computer software instructions to be executed by the above processor 10, containing the program to be executed by the above processor 10.
FIG. 6 is a schematic diagram of a memory access apparatus provided by an embodiment of this application; the memory access apparatus is used to perform memory access in a computer system. The computer system includes a hybrid memory, and the hybrid memory contains a first memory and a second memory, wherein the first memory is a non-volatile memory and the second memory is a volatile memory. The memory access apparatus includes:
a receiving module 801, configured to receive a first access request, the access request carrying a first virtual address;
a translation module 802, configured to translate the first virtual address into a first physical address according to a first page table cache in the computer system, wherein the first physical address is the physical address of a first big page in the first memory, and the first big page contains multiple small pages;
an access module 803, configured to, in the process of accessing the first memory according to the first physical address, when it is determined that the data of a first small page in the first big page has been migrated to the second memory, access the second memory according to a second physical address stored in the first small page, wherein the second physical address is the physical address of a second small page in the second memory, the second small page stores the data migrated out of the first small page, the second memory includes multiple small pages, and the size of a small page in the second memory is smaller than the size of a big page in the first memory.
As an optional implementation, the memory access apparatus further includes:
a migration module 804, configured to, when the access count of the first small page exceeds a set threshold, migrate the data in the first small page to the second small page, and store the second physical address of the second small page in the first small page.
As an optional implementation, the memory access apparatus further includes:
a flag module 805, configured to, after the data of the first small page is migrated to the second small page, set a first flag in a configured bitmap, the first flag being used to indicate that the data in the first small page has been migrated.
As an optional implementation, the computer system further includes a second page table cache, and the memory access apparatus further includes:
a mapping module 806, configured to, after the data in the first small page is migrated to the second small page, add a mapping between a second virtual address and the second physical address to the second page table cache, the second page table cache being used to record mappings between virtual addresses and physical addresses of small pages of the second memory.
As an optional implementation, the receiving module 801 is further configured to: receive a second access request, the second access request containing the second virtual address, and obtain, according to the second page table cache, the second physical address mapped to the second virtual address;
the access module 803 is further configured to: access the second memory according to the second physical address.
For the implementation of each module of the above memory access apparatus, refer to the implementation of each step in the memory access methods described in FIG. 2 to FIG. 5.
An embodiment of the present invention further provides a computer program product for data processing, including a computer-readable storage medium storing program code, the program code including instructions for executing the method flow of any one of the foregoing method embodiments. A person of ordinary skill in the art can understand that the foregoing storage medium includes various non-transitory machine-readable media capable of storing program code, such as a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a random-access memory (RAM), a solid state disk (SSD), or a non-volatile memory.
This application is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to this application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Claims (10)

  1. A memory access method, wherein the method is applied to a computer system containing a hybrid memory, the hybrid memory contains a first memory and a second memory, the first memory is a non-volatile memory, and the second memory is a volatile memory, the method comprising:
    receiving a first access request, the access request carrying a first virtual address;
    translating the first virtual address into a first physical address according to a first page table cache in the computer system, wherein the first physical address is the physical address of a first big page in the first memory, and the first big page contains multiple small pages;
    in the process of accessing the first memory according to the first physical address, when it is determined that the data of a first small page in the first big page has been migrated to the second memory, accessing the second memory according to a second physical address stored in the first small page, wherein the second physical address is the physical address of a second small page in the second memory, the second small page stores the data migrated out of the first small page, the second memory includes multiple small pages, and the size of a small page in the second memory is smaller than the size of a big page in the first memory.
  2. The memory access method according to claim 1, further comprising:
    when the access count of the first small page exceeds a set threshold, migrating the data in the first small page to the second small page;
    storing the second physical address of the second small page in the first small page.
  3. The memory access method according to claim 2, further comprising:
    after migrating the data of the first small page to the second small page, setting a first flag in a configured bitmap, the first flag being used to indicate that the data in the first small page has been migrated.
  4. The memory access method according to claim 2 or 3, wherein the computer system further comprises a second page table cache, and after the migrating the data in the first small page to the second small page, the method further comprises:
    adding a mapping between a second virtual address and the second physical address to the second page table cache, wherein the second page table cache is used to record mappings between virtual addresses and physical addresses of small pages of the second memory.
  5. The memory access method according to claim 4, wherein the method further comprises:
    receiving a second access request, the second access request containing the second virtual address;
    obtaining, according to the second page table cache, the second physical address mapped to the second virtual address;
    accessing the second memory according to the second physical address.
  6. A computer system, comprising a processor, a memory management unit (MMU), a memory controller, and a hybrid memory, wherein the hybrid memory contains a first memory and a second memory, the first memory is a non-volatile memory, and the second memory is a volatile memory;
    the MMU is configured to:
    receive a first access request sent by the processor, the access request carrying a first virtual address;
    translate the first virtual address into a first physical address according to a first page table cache, the first physical address being the physical address of a first big page in the first memory, wherein the first page table cache is used to record mappings between virtual addresses and physical addresses of big pages of the first memory, and a big page of the first memory includes multiple small pages;
    the memory controller is configured to access the first memory according to the first physical address, and, in the process of accessing the first memory according to the first physical address, when it is determined that the data of a first small page in the first big page has been migrated to the second memory, access the second memory according to a second physical address stored in the first small page, wherein the second physical address is the physical address of a second small page in the second memory, the second small page stores the data migrated out of the first small page, the second memory includes multiple small pages, and the size of a small page in the second memory is smaller than the size of a big page in the first memory.
  7. The computer system according to claim 6, wherein the memory controller is further configured to:
    when the access count of the first small page exceeds a set threshold, migrate the data in the first small page to the second small page;
    store the second physical address of the second small page in the first small page.
  8. The computer system according to claim 7, wherein the memory controller is further configured to:
    after migrating the data of the first small page to the second small page, set a first flag in a configured bitmap, the first flag being used to indicate that the data in the first small page has been migrated.
  9. The computer system according to claim 7 or 8, wherein the computer system further comprises a second page table cache, the second page table cache being used to record mappings between virtual addresses and physical addresses of small pages of the second memory;
    the processor is further configured to:
    after the data in the first small page is migrated to the second small page, add a mapping between a second virtual address and the second physical address to the second page table cache.
  10. The computer system according to claim 9, wherein:
    the MMU is further configured to:
    receive a second access request sent by the processor, the second access request containing the second virtual address;
    obtain, according to the second page table cache, the second physical address mapped to the second virtual address;
    the memory controller is further configured to: access the second memory according to the second physical address.
PCT/CN2018/084777 2017-04-27 2018-04-27 Memory access method and computer system WO2018196839A1
