WO2017107162A1 - Heterogeneous hybrid memory component, system and storage method - Google Patents

Heterogeneous hybrid memory component, system and storage method

Info

Publication number
WO2017107162A1
WO2017107162A1 (PCT/CN2015/098816)
Authority
WO
WIPO (PCT)
Prior art keywords
data
buffer
processor
read
storage
Prior art date
Application number
PCT/CN2015/098816
Other languages
English (en)
French (fr)
Inventor
庞观士
薛英仪
陈志列
沈航
徐成泽
Original Assignee
研祥智能科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 研祥智能科技股份有限公司
Priority to PCT/CN2015/098816
Publication of WO2017107162A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems

Definitions

  • the present invention relates to the field of computer technologies, and in particular, to a heterogeneous hybrid memory component, system, and storage method.
  • the traditional server adopts a two-level storage mechanism.
  • the CPU first searches for the required data from the internal storage unit, and then retrieves the data from the external storage (hard disk) when the data is not in the memory.
  • the speed difference between the CPU and memory has been bridged by multi-level caches, but the speed gap between memory and external storage keeps growing and has reached about 100,000 times.
  • frequent access to external memory will result in a significant drop in overall system performance, which becomes a bottleneck in system performance and limits the access speed of data.
  • RAID 0 is also known as Stripe or Striping, which represents the highest storage performance of all RAID levels.
  • the CPU still has to access the external storage device over the relatively slow IO bus, which determines that the speed-up obtained in this way is conditional and remains limited by the IO access rate.
  • NVDIMM is a memory-module specification that integrates DRAM plus non-volatile memory chips and can still retain the complete memory data when power is completely cut off.
  • the server can only access the normal memory (DRAM) part during normal operation.
  • at the instant of a power failure, the NVDIMM backs up the DRAM data into the Flash;
  • when power is next restored, the system restores the data from the Flash to the DRAM, and the entire system returns to the state it was in before the power was cut off.
  • although the NVDIMM approach improves data security, it requires Flash of the same capacity as the DRAM for backup; the Flash is invisible to the system and can only be used during power loss and recovery, so storage-space utilization is low. At the same time, copying data from DRAM to Flash takes a certain amount of time: the larger the DRAM capacity, the longer the copy takes and the larger the backup power supply needed to complete it. This determines that the memory capacity of an NVDIMM cannot be made very large, which makes the technology unsuitable for wide adoption in big-data servers.
  • the technical problem to be solved by the present invention is, in view of the bottleneck that the conventional server architecture of the prior art presents for large-data access to external storage, and the inability to effectively protect memory data in the event of abnormal power failure or crash, to provide a heterogeneous hybrid memory component, system, and storage method.
  • the technical solution adopted by the present invention to solve the technical problem is: on the one hand, constructing a heterogeneous hybrid memory component, comprising a memory controller, a memory cell array and a buffer connected to the processor; wherein
  • the memory controller is configured to receive a write/read request of the processor, detect, according to the address information in the write/read request, the unit space corresponding to the page accessed by the processor, and control data to be written from the processor through the buffer into the memory cell array, or control data to be read out from the memory cell array through the buffer to the processor;
  • the memory cell array is configured to store the written/read data according to the first storage type and in a plurality of pages;
  • the buffer is configured to store the written/read data according to the second storage type in a plurality of unit spaces corresponding to the plurality of pages, where the read/write rate of the second storage mode is greater than the read/write rate of the first storage mode.
  • the memory controller includes a data channel, a processor interface, an address storage module, a cache module, a control interface, and a management interface;
  • the data channel is configured to control the storage of the address information and the data, and the writing and/or reading of the data;
  • the processor interface is coupled to the processor for receiving a write/read request of the processor, writing data from the processor, and reading data to the processor;
  • the address storage module is configured to store the address information in the write/read request;
  • the cache module is configured to determine its own idle state according to the address information, and to store the written/read data;
  • the control interface is connected to the buffer and is configured to detect whether the unit space corresponding to the page accessed by the processor exists in the buffer; if so, the data is written into/read out of the buffer; if not, the corresponding unit space is loaded into the buffer according to the accessed page, and the data is written into/read out of the buffer;
  • the management interface is coupled to the storage unit array for writing/reading the data to the storage unit array.
  • the memory controller further includes a buffer page state storage module and a write/read buffer module;
  • the buffer page state storage module is configured to store the usage of the unit spaces corresponding to the pages of the buffer;
  • the write/read buffer module is configured to buffer the read/write data between the control interface and the management interface.
  • the buffer is further configured to, when all of the unit spaces are in use, swap the least frequently used unit space out to the corresponding page of the memory cell array according to the recorded usage, and then load the corresponding unit space in from the page of the memory cell array.
  • the management interface is connected to the memory cell array using a plurality of data channels.
  • the memory controller is further configured to store modification information of a unit space of the buffer into the storage unit array.
  • a heterogeneous hybrid memory system including a processor and a heterogeneous hybrid memory component as described above.
  • a heterogeneous hybrid memory storage method which uses the above heterogeneous hybrid memory system, including the steps of writing data and reading data;
  • the steps of writing data include:
  • the memory controller receives a write request from the processor, detects the unit space corresponding to the page accessed by the processor according to the address information in the write request, and controls data to be written from the processor through the buffer into the memory cell array;
  • the steps for reading out data include:
  • the memory controller receives a read request from the processor, detects the unit space corresponding to the page accessed by the processor according to the address information in the read request, and controls data to be read out from the memory cell array through the buffer to the processor;
  • the memory cell array stores the written/read data according to the first storage type and in a plurality of pages; the buffer stores the written/read data according to the second storage type in a plurality of unit spaces corresponding to the plurality of pages; the read/write rate of the second storage mode is greater than the read/write rate of the first storage mode.
  • the step of writing data includes the following sub-steps:
  • the processor issues a request for writing data to the heterogeneous hybrid memory component.
  • the heterogeneous hybrid memory component is connected to a register of the processor according to the address information.
  • S13, detecting whether the unit space corresponding to the page accessed by the processor exists in the buffer; if so, the data is written into the corresponding unit space in the buffer; if not, another unit space is allocated in the buffer according to the accessed page, the page is loaded into the corresponding other unit space, and the data is written into that other unit space.
  • the step of reading data includes the following sub-steps:
  • the processor sends a request for reading data to the heterogeneous hybrid memory component.
  • the heterogeneous hybrid memory component is connected to a register of the processor according to the address information.
  • the heterogeneous hybrid memory component, system and storage method disclosed above have the following beneficial effects: the data access architecture of the traditional server is changed and the external storage device is promoted to the same level as internal memory, so that the external storage device shares the same data bandwidth as memory and is no longer accessed through IO, which greatly improves the access efficiency of external storage; by virtue of the non-volatile nature of the external storage device, the CPU data is protected, that is, it is not lost on power failure, and work can be resumed after power is restored.
  • FIG. 1 is a structural block diagram of a heterogeneous hybrid memory component according to an embodiment of the present invention
  • FIG. 2 is a structural block diagram of a heterogeneous hybrid memory component according to another embodiment of the present invention.
  • FIG. 3 is a structural block diagram of a memory controller provided by the present invention.
  • FIG. 4 is a structural block diagram of a heterogeneous hybrid memory system provided by the present invention.
  • FIG. 5 is a flowchart of a heterogeneous hybrid memory storage method provided by the present invention.
  • FIG. 6 is a flowchart of writing data provided by the present invention.
  • FIG. 7 is a flowchart of reading out data provided by the present invention.
  • the invention provides a heterogeneous hybrid memory component, system and storage method, wherein the system comprises a CPU, a peripheral expansion bus, an IO interface and various heterogeneous memory modules (also called "memory components"); the core of the system is the design of the NVM (Non-Volatile Random Access Memory) memory module and the hybrid management of the NVM/DRAM (Dynamic Random Access Memory) memory modules.
  • the core of the NVM memory module is to design a dedicated NVM controller to connect and manage the interface communication with the CPU and the interface communication with the NVM.
  • the heterogeneous hybrid memory component, system and storage method proposed by the present invention can be applied in an industrial server system.
  • FIG. 1 is a structural block diagram of a heterogeneous hybrid memory component 100 according to an embodiment of the present invention, including a memory controller 1, a memory cell array 2 and a buffer 3 connected to a processor.
  • the memory controller 1 is configured to receive a write/read request of the processor, detect the unit space corresponding to the page accessed by the processor according to the address information in the write/read request, and control data to be written from the processor through the buffer 3 into the memory cell array 2, or control data to be read out from the memory cell array 2 through the buffer 3 to the processor.
  • the memory cell array 2 is configured to store the written/read data in a manner of a plurality of pages in accordance with the first storage type.
  • the memory cell array 2 is preferably an NVM array 2, and the corresponding memory controller 1 is an NVM controller.
  • the buffer 3 is configured to store the written/read data according to the second storage type in a plurality of unit spaces corresponding to the plurality of pages, where the read/write rate of the second storage mode is greater than the read/write rate of the first storage mode. Since the memory controller integrated in the CPU can only support the DRAM transfer protocol, the buffer 3 is preferably composed of DDR3 DRAM chips.
  • under current technical conditions, the read/write rate of the NVM is still lower than that of DRAM, that is, the read/write rate of the second storage mode (corresponding to the DRAM) is greater than the read/write rate of the first storage mode (corresponding to the NVM); therefore, a complete subsystem needs to be designed in the NVM memory module so that the reads and writes of the NVM meet the requirements of the CPU's memory controller 1.
  • FIG. 2 is a structural block diagram of a heterogeneous hybrid memory component 100 according to another embodiment of the present invention.
  • this embodiment differs from the previous embodiment in that it specifies the individual components of the heterogeneous hybrid memory component 100.
  • the memory controller 1 is an NVM controller
  • the memory cell array 2 is an NVM array 2
  • the buffer 3 is an NVM buffer 3 and is preferably composed of DDR3 DRAM chips
  • the memory controller 1 is connected to the DDR3 DIMM interface of the CPU; a backup power supply is provided to power the NVM controller, and SPD (Serial Presence Detect), that is, the configuration information of the memory module, is provided.
  • FIG. 3 is a structural block diagram of a memory controller 1 according to the present invention.
  • the memory controller 1 adopts an NVM controller as shown in FIG. 2, and the NVM controller includes a DRAM memory interface connected to the CPU (i.e., the processor interface 12), a buffer control interface 15 connected to the NVM buffer 3 (i.e., the control interface 15), an NVM management interface 16 connected to the NVM array 2 (i.e., the management interface 16), and the data channel and control logic modules that interconnect them.
  • the NVM controller reads and writes data in a "page" manner, and the NVM buffer 3 is divided into a plurality of virtual unit spaces, each of which stores one page of data.
  • the memory controller 1 includes a data channel 11, a processor interface 12, an address storage module 13, a cache module 14, a control interface 15, a management interface 16, a buffer page state storage module 17 and a write/read buffer module 18.
  • the data channel 11 is configured to control storage of the address information and the data, writing and/or reading of the data.
  • the processor interface 12 is coupled to the processor for receiving a write/read request of the processor, having data written in from the processor and reading data out to the processor; that is, the DRAM memory interface in FIG. 3.
  • the address storage module 13 is configured to store the address information in the write/read request; that is, the address/read-write status information box in FIG. 3.
  • the cache module 14 is configured to determine its own idle state according to the address information and to store the written/read data; that is, the Cache box in FIG. 3.
  • the control interface 15 is connected to the buffer 3 for managing data reads and writes of the buffer 3. Specifically, it is used to detect whether the unit space corresponding to the page accessed by the processor exists in the buffer 3; if so, the data is written into/read out of the corresponding unit space in the buffer 3; if not, a new unit space (i.e., another unit space) is allocated in the buffer 3 according to the accessed page, the page is loaded into this new unit space, and the data is written into/read out of the buffer 3; that is, the buffer control interface 15 in FIG. 3.
  • the control interface 15 is designed as a dual-channel controller interface that can communicate both with the front-end DRAM memory data and with the back-end NVM data (i.e., the memory cell array 2), serving as a bridge.
  • the working efficiency of the NVM buffer 3 affects the performance of the whole system.
  • the front-end address/read-write status information register, the Cache and the buffer page state table all use high-speed static RAM inside the controller and occupy the NVM buffer 3 for relatively little time, but the back-end NVM has a much lower read/write speed; for this reason, an NVM write buffer and an NVM read buffer are designed in between.
  • when the NVM needs to be written, the buffer control interface 15 only needs to put the page data into the NVM write buffer 18, and the subsequent work is completed by the NVM management interface 16 without occupying the time of the buffer control interface 15.
  • when the NVM needs to be read, the controller notifies the NVM management interface 16 to pre-read; the NVM management interface 16 pre-reads the data, places it in the NVM read buffer 18 and notifies the buffer control interface 15 to fetch the page data.
  • the management interface 16 is connected to the memory cell array 2 for writing the data into/reading the data out of the memory cell array 2; that is, the NVM management interface 16 in FIG. 3.
  • the management interface 16 is connected to the memory cell array 2 using a plurality of data channels.
  • Another task of the NVM management interface 16 is to increase the read and write rates of the NVM module.
  • NVM refers to all types of non-volatile random access memory.
  • mainstream NVM devices currently include phase-change memory (PCM), resistive memory (RRAM), ferroelectric memory (FRAM) and the like; this application uses the relatively mature phase-change memory (PCM), but its read/write speed is still much lower than that of the currently common DDR3 DRAM memory devices.
  • besides the DRAM buffering mentioned above, the management method used by the NVM management interface 16 for the NVM array 2 is also a very critical factor.
  • the NVM management interface 16 uses a multi-channel parallel transmission management mode for the NVM array 2, distributing page data over multiple independent channels for reading and writing. For example, in this case the NVM array 2 is read and written simultaneously with 4 channels, and compared with 1 channel the read/write rate can be increased four-fold.
  • the buffer page state storage module 17 is configured to store the usage of the unit spaces corresponding to the pages of the buffer 3; the buffer page state storage module 17 is the buffer page state table box in FIG. 3.
  • the buffer page state storage module 17 keeps the storage condition of the buffer 3 as a table, which is a special storage space for recording the page usage of the NVM buffer 3 and for cooperating with the controller in managing the swapping in and out of pages.
  • the write/read buffer module 18 is configured to buffer, between the control interface 15 and the management interface 16, the data being read in/written out; that is, the NVM write buffer and the NVM read buffer in FIG. 3.
  • the buffer 3 is further configured to, when all of its unit spaces are in use, swap the least frequently used unit space out to the corresponding page of the memory cell array 2 according to the recorded usage, and then load the corresponding unit space in from the page of the memory cell array 2.
  • that is, when the memory controller 1 moves data from the NVM into the NVM buffer 3, it follows this principle: when the NVM buffer 3 has free space, the free space is occupied; when the NVM buffer 3 is fully used, the controller first swaps out the least frequently used page (storing it into the NVM) and then loads the required page.
  • the memory controller 1 is further configured to store the modification information of the unit space of the buffer 3 into the storage unit array 2.
  • first, a data save instruction is designed, by which the pages of the NVM buffer 3 in the NVM memory module that have been modified are actively written into the NVM.
  • second, when power is lost, the power-off information is intercepted, and the modified pages of the NVM buffer 3 in the NVM memory module are automatically written into the NVM under the control of the NVM controller, so as to protect important data in real time.
  • FIG. 4 is a structural block diagram of a heterogeneous hybrid memory system 200 according to the present invention. Another aspect of the present invention further provides a heterogeneous hybrid memory system 200, including a processor and the heterogeneous hybrid memory component 100 described above.
  • in FIG. 4, besides the heterogeneous hybrid memory component (corresponding to the NVM memory module) and the processors (corresponding to CPU0-CPU3), a DRAM memory module, an expansion bus and an IO interface are also included.
  • FIG. 5 is a flowchart of a heterogeneous hybrid memory storage method according to the present invention.
  • a further aspect of the present invention further provides a heterogeneous hybrid memory storage method, which uses the heterogeneous hybrid memory system 200 described above and includes the steps of writing data and reading out data;
  • steps of writing data include:
  • the memory controller 1 receives a write request of the processor, detects a unit space corresponding to a page accessed by the processor according to address information in the write request, and controls data from the processor through the buffer 3 is written to the memory cell array 2.
  • the steps for reading out data include:
  • the memory controller 1 receives a read request from the processor, detects the unit space corresponding to the page accessed by the processor according to the address information in the read request, and controls data to be read out from the memory cell array 2 through the buffer 3 to the processor.
  • the memory cell array 2 stores the written/read data according to the first storage type and in a plurality of pages; the buffer 3 stores the written/read data according to the second storage type in a plurality of unit spaces corresponding to the plurality of pages; the read/write rate of the second storage mode is greater than the read/write rate of the first storage mode.
  • FIG. 6 is a flowchart of writing data provided by the present invention.
  • the step of writing data includes the following sub-steps:
  • Steps S101-S103 are execution flow of the processor, and steps S111-S118 are execution flow of the heterogeneous hybrid memory component 100.
  • step S101 the application layer of the CPU makes a request to write NVM data, and proceeds to step S111;
  • step S102 the drive layer of the CPU writes data to the NVM memory module, and proceeds to step S112;
  • step S111 whether the Cache is idle, if yes, go to step S102, if no, go to step S101.
  • step S112, the NVM memory module receives the address/read-write information.
  • step S114 Determine whether the page where the data is located is in the buffer 3. If yes, go to step S118, if no, go to step S115.
  • step S115 whether the buffer 3 is idle, if yes, go to step S118, if no, go to step S116.
  • the step of writing data can be summarized as the following sub-steps:
  • the processor sends a request for writing data to the heterogeneous hybrid memory component 100.
  • the heterogeneous hybrid memory component 100 determines whether the Cache (ie, the cache) is idle. In the case of the cache idle state, the controller places the DDR3 cycle information in the address/read/write status information related register.
  • FIG. 7 is a flowchart of reading data provided by the present invention, and the step of reading data includes the following sub-steps:
  • S201-S205 is an execution flow of the processor, and steps S211-S216 are execution flow of the heterogeneous hybrid memory component 100.
  • the application layer proposes to read the NVM data request, that is, the heterogeneous hybrid memory component 100 performs steps S211-S216, and the processor performs steps S202-S205.
  • the driver layer reads data from the NVM memory module.
  • step S203 Determine whether the data is valid. If yes, go to step S204, if no, go to step S202.
  • step S211, the NVM memory module receives the address/read-write information.
  • step S213. Determine whether the access page is in the buffer 3, and if yes, go to step S215, if no, go to step S214.
  • step S215, the data in the buffer 3 is transferred into the Cache.
  • step S216, the return-data flag is set to valid.
  • the step of reading out data can be summarized as the following sub-steps:
  • the processor sends a request for reading data to the heterogeneous hybrid memory component 100.
  • the memory controller 1 places the DDR3 cycle information in an address/read/write status information related register.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A heterogeneous hybrid memory component, system and storage method. The heterogeneous hybrid memory component (100) comprises: a memory controller (1) for receiving a write/read request of a processor, detecting, according to the address information in the write/read request, the unit space corresponding to the page accessed by the processor, and controlling data to be written from the processor through a buffer (3) into a memory cell array (2), or controlling data to be read out from the memory cell array (2) through the buffer (3) to the processor; the memory cell array (2), for storing the written/read data according to a first storage type and in a plurality of pages; and the buffer (3), for storing the written/read data according to a second storage type in a plurality of unit spaces corresponding to the plurality of pages, the read/write rate of the second storage mode being greater than that of the first storage mode. With the above memory component, system and storage method, the external storage device shares the same data bandwidth as the memory and is no longer accessed through IO, which greatly improves the access efficiency of external storage.

Description

Heterogeneous hybrid memory component, system and storage method
Technical Field
The present invention relates to the field of computer technology, and in particular to a heterogeneous hybrid memory component, system and storage method.
Background Art
A traditional server adopts a two-level storage mechanism: the CPU first looks for the required data in the internal storage unit, and only fetches the data from external storage (the hard disk) when it is not in memory. The speed difference between the CPU and memory has been resolved by multi-level caches, but the speed gap between memory and external storage keeps growing and has reached about 100,000 times. In big-data processing scenarios, frequent access to external storage causes the performance of the whole system to drop sharply, becomes the bottleneck of system performance, and restricts the data access speed.
Existing techniques use RAID 0 (RAID 0, also known as Stripe or Striping, represents the highest storage performance of all RAID levels) to raise the read/write rate of external storage (hard disks), that is, the overall rate is raised by reading and writing several external storage devices in parallel, but the basic data access architecture of the server is unchanged. For example, with two hard disks combined into RAID 0, the theoretical rate can be raised to twice that of a single disk, but the actual rate will be lower than this value. The reason is that using RAID 0 to raise the read/write rate of external storage (hard disks) does not change the access architecture of the traditional server platform; under this approach the IO access bottleneck still exists, and the CPU still has to access the external storage devices over the relatively slow IO bus. This determines that the speed-up obtained in this way is conditional and is limited by the IO access rate.
The prior art uses NVDIMM (an NVDIMM is a memory-module specification that integrates DRAM plus non-volatile memory chips and can retain the complete memory data even when power is completely cut off) to improve data security, that is, Flash of equal or larger capacity is added on top of the server's ordinary memory. During normal operation the server can only access the ordinary memory (DRAM) portion; at the instant of a server power failure the NVDIMM backs up the DRAM data into the Flash, and when power is next restored the system restores the data from the Flash back into the DRAM, so that the whole system returns to the state before the power failure. Although the NVDIMM approach improves data security, it requires Flash of the same capacity as the DRAM for backup; the Flash is invisible to the system and can only be used during power loss and recovery, so storage-space utilization is very low. At the same time, copying data from DRAM to Flash takes a certain amount of time: the larger the DRAM capacity, the longer the copy takes and the larger the backup power supply needed to complete it. This determines that the memory capacity of an NVDIMM cannot be made very large, which limits the technology and makes it unsuitable for wide adoption in big-data servers.
Summary of the Invention
The technical problem to be solved by the present invention is, in view of the bottleneck of the above prior-art traditional server architecture for large-data access to external storage, and the inability to effectively protect memory data in the event of abnormal power failure or crash, to provide a heterogeneous hybrid memory component, system and storage method.
The technical solution adopted by the present invention to solve its technical problem is as follows. In one aspect, a heterogeneous hybrid memory component is constructed, comprising a memory controller connected to a processor, a memory cell array and a buffer; wherein,
the memory controller is configured to receive a write/read request of the processor, detect, according to the address information in the write/read request, the unit space corresponding to the page accessed by the processor, and control data to be written from the processor through the buffer into the memory cell array, or control data to be read out from the memory cell array through the buffer to the processor;
the memory cell array is configured to store the written/read data according to a first storage type and in a plurality of pages;
the buffer is configured to store the written/read data according to a second storage type in a plurality of unit spaces corresponding to the plurality of pages, the read/write rate of the second storage mode being greater than the read/write rate of the first storage mode.
In the heterogeneous hybrid memory component of the present invention, the memory controller comprises a data channel, a processor interface, an address storage module, a cache module, a control interface and a management interface; wherein,
the data channel is configured to control the storage of the address information and the data and the writing and/or reading of the data;
the processor interface is connected to the processor and is configured to receive the write/read request of the processor, have data written in from the processor, and read data out to the processor;
the address storage module is configured to store the address information in the write/read request;
the cache module is configured to determine its own idle state according to the address information and to store the written/read data;
the control interface is connected to the buffer and is configured to detect whether the unit space corresponding to the page accessed by the processor exists in the buffer; if so, the data is written into/read out of the buffer; if not, the corresponding unit space is loaded into the buffer according to the accessed page, and the data is written into/read out of the buffer;
the management interface is connected to the memory cell array and is configured to write the data into/read the data out of the memory cell array.
In the heterogeneous hybrid memory component of the present invention, the memory controller further comprises a buffer page state storage module and a write/read buffer module; wherein,
the buffer page state storage module is configured to store the usage of the unit spaces corresponding to the pages of the buffer;
the write/read buffer module is configured to buffer, between the control interface and the management interface, the data being read in/written out.
In the heterogeneous hybrid memory component of the present invention, the buffer is further configured to, when all of the unit spaces are in use, swap the least frequently used unit space out to the corresponding page of the memory cell array according to the usage, and then load the corresponding unit space in from the page of the memory cell array.
In the heterogeneous hybrid memory component of the present invention, the management interface is connected to the memory cell array through a plurality of data channels.
In the heterogeneous hybrid memory component of the present invention, the memory controller is further configured to store modification information of the unit spaces of the buffer into the memory cell array.
In one aspect, a heterogeneous hybrid memory system is provided, comprising a processor and the heterogeneous hybrid memory component described above.
In one aspect, a heterogeneous hybrid memory storage method is provided, which uses the above heterogeneous hybrid memory system and comprises the steps of writing data and reading out data;
wherein the step of writing data comprises:
the memory controller receives a write request of the processor, detects, according to the address information in the write request, the unit space corresponding to the page accessed by the processor, and controls data to be written from the processor through the buffer into the memory cell array;
the step of reading out data comprises:
the memory controller receives a read request of the processor, detects, according to the address information in the read request, the unit space corresponding to the page accessed by the processor, and controls data to be read out from the memory cell array through the buffer to the processor;
the memory cell array stores the written/read data according to the first storage type and in a plurality of pages; the buffer stores the written/read data according to the second storage type in a plurality of unit spaces corresponding to the plurality of pages; the read/write rate of the second storage mode is greater than the read/write rate of the first storage mode.
In the heterogeneous hybrid memory storage method of the present invention, the step of writing data comprises the following sub-steps:
S11. The processor issues a request to write data to the heterogeneous hybrid memory component.
S12. The heterogeneous hybrid memory component is connected, according to the address information, to a register of the processor.
S13. It is detected whether the unit space corresponding to the page accessed by the processor exists in the buffer; if so, the data is written into the corresponding unit space in the buffer; if not, another unit space is allocated in the buffer according to the accessed page, the page is loaded into the corresponding other unit space, and the data is written into the other unit space.
In the heterogeneous hybrid memory storage method of the present invention, the step of reading out data comprises the following sub-steps:
S21. The processor issues a request to read data to the heterogeneous hybrid memory component.
S22. The heterogeneous hybrid memory component is connected, according to the address information, to a register of the processor.
S23. It is detected whether the unit space corresponding to the page accessed by the processor exists in the buffer; if so, the data is read out of the buffer; if not, the corresponding unit space is loaded into the buffer according to the accessed page, and the data is read out of the buffer.
The heterogeneous hybrid memory component, system and storage method disclosed above have the following beneficial effects: the data access architecture of the traditional server is changed, and the external storage device is promoted to the same level as internal memory, so that the external storage device shares the same data bandwidth as the memory and is no longer accessed through IO, which greatly improves the access efficiency of external storage; by virtue of the non-volatile nature of the external storage device, the CPU data is protected, that is, it is not lost on power failure, and work can be resumed after power is restored.
Brief Description of the Drawings
FIG. 1 is a structural block diagram of a heterogeneous hybrid memory component according to an embodiment of the present invention;
FIG. 2 is a structural block diagram of a heterogeneous hybrid memory component according to another embodiment of the present invention;
FIG. 3 is a structural block diagram of the memory controller provided by the present invention;
FIG. 4 is a structural block diagram of a heterogeneous hybrid memory system provided by the present invention;
FIG. 5 is a flowchart of a heterogeneous hybrid memory storage method provided by the present invention;
FIG. 6 is a flowchart of writing data provided by the present invention;
FIG. 7 is a flowchart of reading out data provided by the present invention.
Detailed Description of the Embodiments
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention and are not intended to limit it.
The present invention provides a heterogeneous hybrid memory component, system and storage method, wherein the system comprises a CPU, peripheral expansion buses, IO interfaces and various heterogeneous memory modules (also called "memory components"). The core of the system lies in the design of the NVM (Non-Volatile Random Access Memory) memory module and the hybrid management of the NVM/DRAM (Dynamic Random Access Memory) memory modules. The core of the NVM memory module is to design a dedicated NVM controller to connect and manage the interface communication with the CPU and the interface communication with the NVM. The heterogeneous hybrid memory component, system and storage method proposed by the present invention can be applied in industrial server systems.
Referring to FIG. 1, FIG. 1 is a structural block diagram of a heterogeneous hybrid memory component 100 according to an embodiment of the present invention. The heterogeneous hybrid memory component 100 comprises a memory controller 1 connected to a processor, a memory cell array 2 and a buffer 3.
The memory controller 1 is configured to receive a write/read request of the processor, detect, according to the address information in the write/read request, the unit space corresponding to the page accessed by the processor, and control data to be written from the processor through the buffer 3 into the memory cell array 2, or control data to be read out from the memory cell array 2 through the buffer 3 to the processor.
The memory cell array 2 is configured to store the written/read data according to a first storage type and in a plurality of pages. The memory cell array 2 is preferably an NVM array 2, in which case the corresponding memory controller 1 is an NVM controller.
The buffer 3 is configured to store the written/read data according to a second storage type in a plurality of unit spaces corresponding to the plurality of pages, the read/write rate of the second storage mode being greater than the read/write rate of the first storage mode. Since the memory controller integrated in the CPU can only support the DRAM transfer protocol, the buffer 3 is preferably composed of DDR3 DRAM chips. Under current technical conditions, the read/write rate of the NVM is still lower than that of DRAM, that is, the read/write rate of the second storage mode (corresponding to the DRAM) is greater than that of the first storage mode (corresponding to the NVM); therefore, a complete subsystem needs to be designed in the NVM memory module so that the reads and writes of the NVM meet the requirements of the CPU's memory controller.
Referring to FIG. 2, FIG. 2 is a structural block diagram of a heterogeneous hybrid memory component 100 according to another embodiment of the present invention. This embodiment differs from the previous one in that it specifies the individual components of the heterogeneous hybrid memory component 100.
In this embodiment, the memory controller 1 is an NVM controller, the memory cell array 2 is an NVM array 2, and the buffer 3 is an NVM buffer 3, preferably composed of DDR3 DRAM chips. The memory controller 1 is connected to the DDR3 DIMM interface of the CPU; at the same time, a backup power supply is provided to power the NVM controller, and SPD (Serial Presence Detect), that is, the configuration information of the memory module, is provided.
Referring to FIG. 3, FIG. 3 is a structural block diagram of the memory controller 1 provided by the present invention. The memory controller 1 adopts the NVM controller shown in FIG. 2. The NVM controller comprises a DRAM memory interface connected to the CPU (that is, the processor interface 12), a buffer control interface 15 connected to the NVM buffer 3 (that is, the control interface 15), an NVM management interface 16 connected to the NVM array 2 (that is, the management interface 16), and the data channel and control logic modules that interconnect them. The NVM controller reads, writes and buffers data in units of "pages"; the NVM buffer 3 is divided into a number of virtual unit spaces, each of which stores one page of data.
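Purely as an illustration of the page/unit-space organization described above (the C structure names, the 4 KiB page size and the eight-slot buffer depth below are assumptions made for this sketch, not values specified by the disclosure), a buffer divided into per-page unit spaces and indexed by the page number derived from a request address might look like this:

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SIZE    4096u      /* assumed page size in bytes                  */
#define BUF_UNITS    8u         /* assumed number of unit spaces in the buffer */
#define INVALID_PAGE UINT32_MAX

/* One virtual "unit space" of the DRAM buffer: it caches exactly one NVM page. */
typedef struct {
    uint32_t page_no;    /* NVM page currently held, or INVALID_PAGE if free  */
    uint32_t use_count;  /* usage frequency, recorded in the page state table */
    bool     dirty;      /* modified since it was loaded from the NVM         */
    uint8_t  data[PAGE_SIZE];
} unit_space_t;

typedef struct {
    unit_space_t unit[BUF_UNITS];   /* the whole NVM buffer */
} nvm_buffer_t;

/* Map a request address to the page it belongs to. */
static inline uint32_t addr_to_page(uint64_t addr)
{
    return (uint32_t)(addr / PAGE_SIZE);
}

/* Return the index of the unit space caching page_no, or -1 on a miss. */
static int buffer_lookup(const nvm_buffer_t *buf, uint32_t page_no)
{
    for (unsigned i = 0; i < BUF_UNITS; i++)
        if (buf->unit[i].page_no == page_no)
            return (int)i;
    return -1;
}
```

A real controller would implement this lookup in hardware (e.g., in the buffer page state table registers); the linear scan here is only meant to show the mapping.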
Combining the above structure of the memory controller 1, the memory controller 1 comprises a data channel 11, a processor interface 12, an address storage module 13, a cache module 14, a control interface 15, a management interface 16, a buffer page state storage module 17 and a write/read buffer module 18.
The data channel 11 is configured to control the storage of the address information and the data and the writing and/or reading of the data.
The processor interface 12 is connected to the processor and is configured to receive the write/read request of the processor, have data written in from the processor, and read data out to the processor; it is the DRAM memory interface in FIG. 3.
The address storage module 13 is configured to store the address information in the write/read request; it is the address/read-write status information box in FIG. 3.
The cache module 14 is configured to determine its own idle state according to the address information and to store the written/read data; it is the Cache box in FIG. 3.
The control interface 15 is connected to the buffer 3 and is configured to manage data reads and writes of the buffer 3. Specifically, it is configured to detect whether the unit space corresponding to the page accessed by the processor exists in the buffer 3; if so, the data is written into/read out of the corresponding unit space in the buffer 3; if not, a new unit space (that is, another unit space) is allocated in the buffer 3 according to the accessed page, the page is loaded into this new unit space, and the data is written into/read out of the buffer 3. It is the buffer control interface 15 in FIG. 3. The control interface 15 here is designed as a dual-channel controller interface that can communicate both with the front-end DRAM memory data and with the back-end NVM data (that is, the memory cell array 2), serving as a bridge. The working efficiency of the NVM buffer 3 affects the performance of the whole system. The front-end address/read-write status information register, the Cache and the buffer page state table all use the high-speed static RAM inside the controller and occupy the NVM buffer 3 for relatively little time, whereas the back-end NVM has a much lower read/write speed; for this reason, an NVM write buffer and an NVM read buffer are designed in between. When the NVM needs to be written, the buffer control interface 15 only needs to put the page data into the NVM write buffer 18, and the subsequent work is completed by the NVM management interface 16 without occupying the time of the buffer control interface 15. When the NVM needs to be read, the controller notifies the NVM management interface 16 to pre-read; after completing the read, the NVM management interface 16 first places the data in the NVM read buffer 18 and then notifies the buffer control interface 15 to fetch the page data.
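The decoupling through the NVM write buffer and NVM read buffer can be illustrated with a minimal sketch (the ring-buffer layout, the slot count and the function names below are illustrative assumptions, not the controller's actual design): the buffer control interface only enqueues a page and returns, while the NVM management interface drains the queue in the background; the read buffer would mirror this in the opposite direction.

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SIZE  4096u
#define WBUF_SLOTS 4u                      /* assumed depth of the NVM write buffer */

typedef struct { uint32_t page_no; uint8_t data[PAGE_SIZE]; } page_t;

typedef struct {
    page_t   slot[WBUF_SLOTS];
    unsigned head, tail, count;
} write_buffer_t;

/* Control-interface side: hand a page to the write buffer and return at once;
 * the NVM management interface programs it into the NVM in the background.   */
static bool wbuf_push(write_buffer_t *wb, const page_t *p)
{
    if (wb->count == WBUF_SLOTS)
        return false;                      /* back-pressure: write buffer full */
    wb->slot[wb->tail] = *p;
    wb->tail = (wb->tail + 1) % WBUF_SLOTS;
    wb->count++;
    return true;
}

/* Management-interface side: take the next pending page and write it to NVM. */
static bool wbuf_pop(write_buffer_t *wb, page_t *out)
{
    if (wb->count == 0)
        return false;
    *out = wb->slot[wb->head];
    wb->head = (wb->head + 1) % WBUF_SLOTS;
    wb->count--;
    return true;
}
```

The point of the design is that the slow NVM programming time is hidden behind this small queue instead of stalling the DRAM-facing side of the controller.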
The management interface 16 is connected to the memory cell array 2 and is configured to write the data into/read the data out of the memory cell array 2; it is the NVM management interface 16 in FIG. 3. The management interface 16 is connected to the memory cell array 2 through a plurality of data channels. Another task of the NVM management interface 16 is to raise the read/write rate of the NVM module. Here, NVM refers broadly to all types of non-volatile random access memory; mainstream NVM devices currently include phase-change memory (PCM), resistive memory (RRAM), ferroelectric memory (FRAM) and the like. This application uses the relatively mature phase-change memory (PCM), but its read/write speed is still far lower than that of the currently common DDR3 DRAM memory devices. Because raising the overall read/write speed of the NVM memory module is the key to the performance of an industrial server using this heterogeneous hybrid memory storage approach, besides the DRAM buffering mentioned above, the management method used by the NVM management interface 16 for the NVM array 2 is also a critical link. In this design case, the NVM management interface 16 manages the NVM array 2 with multi-channel parallel transmission, distributing page data over multiple independent channels for reading and writing. For example, in this case four channels are used to read and write the NVM array 2 simultaneously, and compared with one channel the read/write rate can be increased four-fold.
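A minimal sketch of the multi-channel page striping idea follows (the simulated NVM array, the channel count of four taken from the example above, and the function names are assumptions for illustration); in real hardware the four per-channel transfers would be issued in parallel so that their latencies overlap, which is where the roughly four-fold gain comes from:

```c
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE  4096u
#define CHANNELS   4u            /* assumed: four independent NVM channels  */
#define NVM_PAGES  1024u         /* assumed size of the simulated NVM array */

/* Simulated NVM: each channel owns a contiguous 1/CHANNELS slice of every page. */
static uint8_t nvm_array[CHANNELS][NVM_PAGES][PAGE_SIZE / CHANNELS];

/* Stand-in for the per-channel programming primitive; a real controller would
 * start all four transfers at the same time so their latencies overlap.       */
static void nvm_channel_write(unsigned ch, uint32_t page_no,
                              const uint8_t *chunk, size_t len)
{
    memcpy(nvm_array[ch][page_no], chunk, len);
}

/* Stripe one page across the channels: each channel receives PAGE_SIZE/CHANNELS
 * bytes, so page throughput approaches CHANNELS times that of a single channel. */
static void nvm_write_page_striped(uint32_t page_no, const uint8_t page[PAGE_SIZE])
{
    const size_t chunk = PAGE_SIZE / CHANNELS;
    for (unsigned ch = 0; ch < CHANNELS; ch++)
        nvm_channel_write(ch, page_no, page + ch * chunk, chunk);
}
```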
The buffer page state storage module 17 is configured to store the usage of the unit spaces corresponding to the pages of the buffer 3; the buffer page state storage module 17 is the buffer page state table box in FIG. 3. The buffer page state storage module 17 keeps the storage condition of the buffer 3 as a table; this table is a special storage space used to record the page usage of the NVM buffer 3 and to cooperate with the controller in managing the swapping in and out of pages.
The write/read buffer module 18 is configured to buffer, between the control interface 15 and the management interface 16, the data being read in/written out; it comprises the NVM write buffer and the NVM read buffer in FIG. 3.
In addition, the buffer 3 is further configured to, when all of its unit spaces are in use, swap the least frequently used unit space out to the corresponding page of the memory cell array 2 according to the usage, and then load the corresponding unit space in from the page of the memory cell array 2. That is, when moving data from the NVM into the NVM buffer 3, the memory controller 1 follows this principle: when the NVM buffer 3 has free space, the free space is occupied; when the NVM buffer 3 is fully used, the controller first swaps out the least frequently used page (storing it into the NVM) and then loads the required page.
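The swap-in/swap-out principle can be sketched as a least-frequently-used victim selection over a small state table (the structure names, field layout and the stub NVM helpers below are assumptions for illustration, not the actual controller logic):

```c
#include <stdint.h>

#define BUF_UNITS    8u
#define INVALID_PAGE UINT32_MAX

/* One row of the buffer page state table (fields assumed for illustration). */
typedef struct {
    uint32_t page_no;    /* INVALID_PAGE while the unit space is free */
    uint32_t use_count;  /* how often the cached page has been used   */
} unit_state_t;

/* Stubs standing in for the NVM management interface. */
static void nvm_store_page(uint32_t page_no) { (void)page_no; /* swap-out */ }
static void nvm_load_page(uint32_t page_no)  { (void)page_no; /* swap-in  */ }

/* Pick a unit space for new_page: occupy a free one if any; otherwise swap
 * out the least frequently used page to the NVM, then load the new page.
 * (A real controller might skip the write-back for pages never modified.)  */
static unsigned buffer_make_room(unit_state_t table[BUF_UNITS], uint32_t new_page)
{
    unsigned victim = 0;
    int      found_free = 0;

    for (unsigned i = 0; i < BUF_UNITS; i++) {
        if (table[i].page_no == INVALID_PAGE) { victim = i; found_free = 1; break; }
        if (table[i].use_count < table[victim].use_count)
            victim = i;
    }
    if (!found_free)
        nvm_store_page(table[victim].page_no);   /* call out the coldest page */

    nvm_load_page(new_page);                     /* call in the required page */
    table[victim].page_no   = new_page;
    table[victim].use_count = 1;
    return victim;
}
```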
The memory controller 1 is further configured to store the modification information of the unit spaces of the buffer 3 into the memory cell array 2. To guarantee the data security of the NVM memory module, the following two data-security designs are made in the industrial server using this heterogeneous hybrid memory storage approach. First, a data save instruction is designed, by which the pages of the NVM buffer 3 in the NVM memory module that have been modified are actively written into the NVM. Second, when power is lost, the power-off information is intercepted, and the modified pages of the NVM buffer 3 in the NVM memory module are automatically written into the NVM under the control of the NVM controller, so as to protect important data in real time.
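Both the data save instruction and the power-off path reduce to the same walk over the buffer page state table; a hedged sketch (structure and function names assumed for illustration) is:

```c
#include <stdint.h>
#include <stdbool.h>

#define BUF_UNITS    8u
#define INVALID_PAGE UINT32_MAX

typedef struct {
    uint32_t page_no;   /* INVALID_PAGE while the unit space is free */
    bool     dirty;     /* modified since it was loaded from the NVM */
} unit_state_t;

/* Stub for the NVM management interface writing one buffer page back. */
static void nvm_store_page(uint32_t page_no) { (void)page_no; }

/* Shared by the explicit "data save" instruction and the power-fail path:
 * walk the buffer page state table and write every modified page to NVM.
 * On power failure, the backup supply only has to carry this loop rather
 * than a full DRAM-to-Flash copy as in an NVDIMM.                          */
static void flush_modified_pages(unit_state_t unit[BUF_UNITS])
{
    for (unsigned i = 0; i < BUF_UNITS; i++) {
        if (unit[i].page_no != INVALID_PAGE && unit[i].dirty) {
            nvm_store_page(unit[i].page_no);
            unit[i].dirty = false;
        }
    }
}
```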
Referring to FIG. 4, FIG. 4 is a structural block diagram of a heterogeneous hybrid memory system 200 provided by the present invention. Another aspect of the present invention further provides a heterogeneous hybrid memory system 200, comprising a processor and the heterogeneous hybrid memory component 100 described above. In FIG. 4, besides the heterogeneous hybrid memory component (corresponding to the NVM memory module) and the processors (corresponding to CPU0-CPU3), a DRAM memory module, an expansion bus and an IO interface are also included.
Referring to FIG. 5, FIG. 5 is a flowchart of a heterogeneous hybrid memory storage method provided by the present invention. A further aspect of the present invention provides a heterogeneous hybrid memory storage method, which uses the heterogeneous hybrid memory system 200 described above and comprises the steps of writing data and reading out data;
wherein the step of writing data comprises:
the memory controller 1 receives a write request of the processor, detects, according to the address information in the write request, the unit space corresponding to the page accessed by the processor, and controls data to be written from the processor through the buffer 3 into the memory cell array 2.
The step of reading out data comprises:
the memory controller 1 receives a read request of the processor, detects, according to the address information in the read request, the unit space corresponding to the page accessed by the processor, and controls data to be read out from the memory cell array 2 through the buffer 3 to the processor.
The memory cell array 2 stores the written/read data according to the first storage type and in a plurality of pages; the buffer 3 stores the written/read data according to the second storage type in a plurality of unit spaces corresponding to the plurality of pages; the read/write rate of the second storage mode is greater than the read/write rate of the first storage mode.
Referring to FIG. 6, FIG. 6 is a flowchart of writing data provided by the present invention. The step of writing data comprises the following sub-steps,
in which steps S101-S103 are the execution flow of the processor and steps S111-S118 are the execution flow of the heterogeneous hybrid memory component 100.
S101. The application layer of the CPU issues a request to write NVM data; go to step S111.
S102. The driver layer of the CPU writes data to the NVM memory module; go to step S112.
S103. The step of writing data is completed.
S111. Whether the Cache is idle: if so, go to step S102; if not, go to step S101.
S112. The NVM memory module receives the address/read-write information.
S113. The data is saved into the Cache.
S114. Determine whether the page in which the data is located is in the buffer 3: if so, go to step S118; if not, go to step S115.
S115. Whether the buffer 3 has free space: if so, go to step S118; if not, go to step S116.
S116. Swap an infrequently used page out of the buffer 3. The usage of the unit spaces corresponding to the pages of the buffer 3 is stored; when all of its unit spaces are in use, the buffer 3 swaps the least frequently used unit space out to the corresponding page of the memory cell array 2 according to the usage, and then loads the corresponding unit space in from the page of the memory cell array 2.
S117. The page in which the data is located is loaded from the NVM.
S118. The data is written into the buffer 3; go to step S103.
Summarizing the above, the step of writing data can be summarized as the following sub-steps:
S11. The processor issues a request to write data to the heterogeneous hybrid memory component 100.
S12. The heterogeneous hybrid memory component 100 determines whether the Cache is idle; when the Cache is idle, the controller places the DDR3 cycle information in the registers related to the address/read-write status information.
S13. Determine whether the accessed page is in the buffer 3. If it is in the buffer 3, the corresponding data is written directly; if it is not in the buffer 3 and the data to be written is not a full page, the control interface 15 is instructed to load the corresponding page from the NVM into the buffer 3 and the corresponding data is then written; if it is not in the buffer 3 and the data to be written is a full page, it is written into a free unit space of the buffer 3 and the "buffer page state table" is updated.
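A compact sketch of this write path (hit, partial-page miss, full-page miss) follows; the helper names, the page size and the handling of the page state table are illustrative assumptions rather than the actual controller logic:

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define PAGE_SIZE    4096u
#define BUF_UNITS    8u
#define INVALID_PAGE UINT32_MAX

typedef struct {
    uint32_t page_no;           /* must be initialised to INVALID_PAGE */
    bool     dirty;
    uint8_t  data[PAGE_SIZE];
} unit_space_t;

static unit_space_t buffer[BUF_UNITS];

static int buffer_lookup(uint32_t page_no)
{
    for (unsigned i = 0; i < BUF_UNITS; i++)
        if (buffer[i].page_no == page_no)
            return (int)i;
    return -1;
}

/* Placeholder: a real controller picks a free unit space or evicts the
 * least frequently used one and updates the buffer page state table.   */
static int buffer_alloc(uint32_t page_no) { (void)page_no; return 0; }

/* Placeholder for the NVM management interface fetching one page. */
static void nvm_load_page(uint32_t page_no, uint8_t *dst)
{
    (void)page_no;
    memset(dst, 0, PAGE_SIZE);
}

/* Write path following S11-S13: hit -> write in place; miss with a partial
 * page -> load the old page from NVM first; miss with a full page -> take a
 * unit space directly and overwrite it (offset + len <= PAGE_SIZE assumed). */
static void write_data(uint64_t addr, const void *src, size_t len)
{
    uint32_t page_no = (uint32_t)(addr / PAGE_SIZE);
    size_t   offset  = (size_t)(addr % PAGE_SIZE);
    int idx = buffer_lookup(page_no);

    if (idx < 0) {
        idx = buffer_alloc(page_no);
        if (len < PAGE_SIZE)                    /* partial write needs old data */
            nvm_load_page(page_no, buffer[idx].data);
        buffer[idx].page_no = page_no;
    }
    memcpy(buffer[idx].data + offset, src, len);
    buffer[idx].dirty = true;
}
```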
Referring to FIG. 7, FIG. 7 is a flowchart of reading out data provided by the present invention. The step of reading out data comprises the following sub-steps.
In the flow of reading out data, steps S201-S205 are the execution flow of the processor and steps S211-S216 are the execution flow of the heterogeneous hybrid memory component 100.
S201. The application layer issues a request to read NVM data, whereupon the heterogeneous hybrid memory component 100 performs steps S211-S216 and the processor performs steps S202-S205.
S202. The driver layer reads data from the NVM memory module.
S203. Determine whether the data is valid: if so, go to step S204; if not, go to step S202.
S204. The additional information of the data flag is masked off.
S205. The step of reading out data is completed.
S211. The NVM memory module receives the address/read-write information.
S212. The return-data flag is set to invalid.
S213. Determine whether the accessed page is in the buffer 3: if so, go to step S215; if not, go to step S214.
S214. The data of the page concerned is loaded from the NVM.
S215. The data in the buffer 3 is loaded into the Cache.
S216. The return-data flag is set to valid.
Summarizing the above, the step of reading out data can be summarized as the following sub-steps:
S21. The processor issues a request to read data to the heterogeneous hybrid memory component 100.
S22. The memory controller 1 places the DDR3 cycle information in the registers related to the address/read-write status information.
S23. Determine whether the page in which the accessed unit is located is in the buffer 3. If it is in the buffer 3, the data is returned from the Cache to the CPU and the operation is completed; if it is not in the buffer 3, the control interface 15 is instructed to load the corresponding page from the NVM into the buffer 3, and the required data is provided on the next CPU access.
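Correspondingly, a hedged sketch of the read path summarized in S21-S23 (names and sizes again assumed for illustration) shows how a miss leaves the return-data flag invalid, triggers the page load, and lets the data be served when the CPU retries:

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define PAGE_SIZE    4096u
#define BUF_UNITS    8u
#define INVALID_PAGE UINT32_MAX

typedef struct { uint32_t page_no; uint8_t data[PAGE_SIZE]; } unit_space_t;

static unit_space_t buffer[BUF_UNITS];   /* page_no pre-set to INVALID_PAGE */

static int buffer_lookup(uint32_t page_no)
{
    for (unsigned i = 0; i < BUF_UNITS; i++)
        if (buffer[i].page_no == page_no)
            return (int)i;
    return -1;
}

/* Placeholder: the control interface asks the management interface to load
 * the page from NVM into the buffer in the background.                      */
static void nvm_prefetch_page(uint32_t page_no) { (void)page_no; }

/* Read path following S21-S23: on a hit the data is returned and the valid
 * flag is set; on a miss the flag stays invalid, the page fetch is started,
 * and the data is served on the next CPU access.                            */
static bool read_data(uint64_t addr, void *dst, size_t len)
{
    uint32_t page_no = (uint32_t)(addr / PAGE_SIZE);
    size_t   offset  = (size_t)(addr % PAGE_SIZE);
    int idx = buffer_lookup(page_no);

    if (idx < 0) {
        nvm_prefetch_page(page_no);        /* load the page for the retry   */
        return false;                      /* return-data flag left invalid */
    }
    memcpy(dst, buffer[idx].data + offset, len);
    return true;                           /* return-data flag set valid    */
}
```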
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. Under the inspiration of the present invention, a person of ordinary skill in the art can make many other forms without departing from the spirit of the present invention and the scope protected by the claims, all of which fall within the protection of the present invention.

Claims (10)

  1. A heterogeneous hybrid memory component, characterized by comprising a memory controller connected to a processor, a memory cell array and a buffer; wherein,
    the memory controller is configured to receive a write/read request of the processor, detect, according to the address information in the write/read request, the unit space corresponding to the page accessed by the processor, and control data to be written from the processor through the buffer into the memory cell array, or control data to be read out from the memory cell array through the buffer to the processor;
    the memory cell array is configured to store the written/read data according to a first storage type and in a plurality of pages;
    the buffer is configured to store the written/read data according to a second storage type in a plurality of unit spaces corresponding to the plurality of pages, the read/write rate of the second storage mode being greater than the read/write rate of the first storage mode.
  2. The heterogeneous hybrid memory component according to claim 1, characterized in that the memory controller comprises a data channel, a processor interface, an address storage module, a cache module, a control interface and a management interface; wherein,
    the data channel is configured for the transfer and storage of the address information and the data, and for the writing and/or reading of the data;
    the processor interface is connected to the processor and is configured to receive the write/read request of the processor, have data written in from the processor, and read data out to the processor;
    the address storage module is configured to store the address information in the write/read request;
    the cache module is configured to determine its own idle state according to the address information and to store the written/read data;
    the control interface is connected to the buffer and is configured to detect whether the unit space corresponding to the page accessed by the processor exists in the buffer; if so, the data is written into/read out of the buffer; if not, the corresponding unit space is loaded into the buffer according to the accessed page, and the data is written into/read out of the buffer;
    the management interface is connected to the memory cell array and is configured to write the data into/read the data out of the memory cell array.
  3. The heterogeneous hybrid memory component according to claim 2, characterized in that the memory controller further comprises a buffer page state storage module and a write/read buffer module; wherein,
    the buffer page state storage module is configured to store the usage of the unit spaces corresponding to the pages of the buffer;
    the write/read buffer module is configured to buffer, between the control interface and the management interface, the data being read in/written out.
  4. The heterogeneous hybrid memory component according to claim 3, characterized in that the buffer is further configured to, when all of the unit spaces are in use, swap the least frequently used unit space out to the corresponding page of the memory cell array according to the usage, and then load the corresponding unit space in from the page of the memory cell array.
  5. The heterogeneous hybrid memory component according to claim 1, characterized in that the management interface is connected to the memory cell array through a plurality of data channels.
  6. The heterogeneous hybrid memory component according to claim 1, characterized in that the memory controller is further configured to store modification information of the unit spaces of the buffer into the memory cell array.
  7. A heterogeneous hybrid memory system, comprising a processor, characterized by further comprising the heterogeneous hybrid memory component according to any one of claims 1-6.
  8. A heterogeneous hybrid memory storage method, which uses the heterogeneous hybrid memory system according to claim 7, characterized by comprising the steps of writing data and reading out data;
    wherein the step of writing data comprises:
    the memory controller receives a write request of the processor, detects, according to the address information in the write request, the unit space corresponding to the page accessed by the processor, and controls data to be written from the processor through the buffer into the memory cell array;
    the step of reading out data comprises:
    the memory controller receives a read request of the processor, detects, according to the address information in the read request, the unit space corresponding to the page accessed by the processor, and controls data to be read out from the memory cell array through the buffer to the processor;
    the memory cell array stores the written/read data according to the first storage type and in a plurality of pages; the buffer stores the written/read data according to the second storage type in a plurality of unit spaces corresponding to the plurality of pages; the read/write rate of the second storage mode is greater than the read/write rate of the first storage mode.
  9. The heterogeneous hybrid memory storage method according to claim 8, characterized in that the step of writing data comprises the following sub-steps:
    S11. The processor issues a request to write data to the heterogeneous hybrid memory component.
    S12. The heterogeneous hybrid memory component is connected, according to the address information, to a register of the processor.
    S13. It is detected whether the unit space corresponding to the page accessed by the processor exists in the buffer; if so, the data is written into the corresponding unit space in the buffer; if not, another unit space is allocated in the buffer according to the accessed page, the page is loaded into the corresponding other unit space, and the data is written into the other unit space.
  10. The heterogeneous hybrid memory storage method according to claim 8, characterized in that the step of reading out data comprises the following sub-steps:
    S21. The processor issues a request to read data to the heterogeneous hybrid memory component.
    S22. The heterogeneous hybrid memory component is connected, according to the address information, to a register of the processor.
    S23. It is detected whether the unit space corresponding to the page accessed by the processor exists in the buffer; if so, the data is read out of the buffer; if not, the corresponding unit space is loaded into the buffer according to the accessed page, and the data is read out of the buffer.
PCT/CN2015/098816 2015-12-25 2015-12-25 一种异构混合内存组件、系统及存储方法 WO2017107162A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/098816 WO2017107162A1 (zh) 2015-12-25 2015-12-25 一种异构混合内存组件、系统及存储方法

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/098816 WO2017107162A1 (zh) 2015-12-25 2015-12-25 一种异构混合内存组件、系统及存储方法

Publications (1)

Publication Number Publication Date
WO2017107162A1 true WO2017107162A1 (zh) 2017-06-29

Family

ID=59088679

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/098816 WO2017107162A1 (zh) 2015-12-25 2015-12-25 一种异构混合内存组件、系统及存储方法

Country Status (1)

Country Link
WO (1) WO2017107162A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110795045A (zh) * 2019-10-30 2020-02-14 中国科学院微电子研究所 混合内存的数据迁移方法、系统及电子设备
CN111208933A (zh) * 2018-11-21 2020-05-29 北京百度网讯科技有限公司 数据访问的方法、装置、设备和存储介质
CN112559401A (zh) * 2020-12-07 2021-03-26 杭州慧芯达科技有限公司 一种基于pim技术的稀疏矩阵链式访问系统
CN116126747A (zh) * 2023-04-17 2023-05-16 上海云脉芯联科技有限公司 一种缓存方法、缓存架构、异构架构及电子设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070061507A1 (en) * 2005-09-12 2007-03-15 Shunichi Iwanari Semiconductor storage apparatus
CN103810112A (zh) * 2014-01-28 2014-05-21 华中科技大学 一种非易失性内存系统及其管理方法
CN104102590A (zh) * 2014-07-22 2014-10-15 浪潮(北京)电子信息产业有限公司 一种异构内存管理方法及装置
CN104137084A (zh) * 2011-12-28 2014-11-05 英特尔公司 提高耐久性和抗攻击性的用于pcm缓存的有效动态随机化地址重映射
CN104156318A (zh) * 2014-08-11 2014-11-19 浪潮(北京)电子信息产业有限公司 一种基于异构融合架构的内存管理方法及装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070061507A1 (en) * 2005-09-12 2007-03-15 Shunichi Iwanari Semiconductor storage apparatus
CN104137084A (zh) * 2011-12-28 2014-11-05 英特尔公司 提高耐久性和抗攻击性的用于pcm缓存的有效动态随机化地址重映射
CN103810112A (zh) * 2014-01-28 2014-05-21 华中科技大学 一种非易失性内存系统及其管理方法
CN104102590A (zh) * 2014-07-22 2014-10-15 浪潮(北京)电子信息产业有限公司 一种异构内存管理方法及装置
CN104156318A (zh) * 2014-08-11 2014-11-19 浪潮(北京)电子信息产业有限公司 一种基于异构融合架构的内存管理方法及装置

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111208933A (zh) * 2018-11-21 2020-05-29 北京百度网讯科技有限公司 数据访问的方法、装置、设备和存储介质
US11650754B2 (en) 2018-11-21 2023-05-16 Kunlunxin Technology (Beijing) Company Limited Data accessing method, device, and storage medium
CN111208933B (zh) * 2018-11-21 2023-06-30 昆仑芯(北京)科技有限公司 数据访问的方法、装置、设备和存储介质
CN110795045A (zh) * 2019-10-30 2020-02-14 中国科学院微电子研究所 混合内存的数据迁移方法、系统及电子设备
CN110795045B (zh) * 2019-10-30 2024-04-09 中国科学院微电子研究所 混合内存的数据迁移方法、系统及电子设备
CN112559401A (zh) * 2020-12-07 2021-03-26 杭州慧芯达科技有限公司 一种基于pim技术的稀疏矩阵链式访问系统
CN112559401B (zh) * 2020-12-07 2023-12-22 杭州慧芯达科技有限公司 一种基于pim技术的稀疏矩阵链式访问系统
CN116126747A (zh) * 2023-04-17 2023-05-16 上海云脉芯联科技有限公司 一种缓存方法、缓存架构、异构架构及电子设备
CN116126747B (zh) * 2023-04-17 2023-07-25 上海云脉芯联科技有限公司 一种缓存方法、缓存架构、异构架构及电子设备

Similar Documents

Publication Publication Date Title
US11068170B2 (en) Multi-tier scheme for logical storage management
CN105786400B (zh) 一种异构混合内存组件、系统及存储方法
KR101629615B1 (ko) 저전력 저지연 고용량 스토리지 클래스 메모리용 장치 및 방법
US8560772B1 (en) System and method for data migration between high-performance computing architectures and data storage devices
US20160085585A1 (en) Memory System, Method for Processing Memory Access Request and Computer System
EP3696680B1 (en) Method and apparatus to efficiently track locations of dirty cache lines in a cache in a two level main memory
US8412884B1 (en) Storage system and method of controlling storage system
WO2017107162A1 (zh) 一种异构混合内存组件、系统及存储方法
CN103593324A (zh) 一种具有自学习功能的快速启动低功耗计算机片上系统
US20210397511A1 (en) Nvm endurance group controller using shared resource architecture
WO2023045483A1 (zh) 一种存储设备、数据存储方法及存储系统
WO2018063629A1 (en) Power management and monitoring for storage devices
US9298636B1 (en) Managing data storage
US20230244394A1 (en) Gradually Reclaim Storage Space Occupied by a Proof of Space Plot in a Solid State Drive
CN105786721A (zh) 一种内存地址映射管理方法及处理器
US20190340089A1 (en) Method and apparatus to provide uninterrupted operation of mission critical distributed in-memory applications
US11416403B2 (en) Method and apparatus for performing pipeline-based accessing management in storage server with aid of caching metadata with hardware pipeline module during processing object write command
US20170153994A1 (en) Mass storage region with ram-disk access and dma access
US11604592B2 (en) Data management for efficient low power mode handling in a storage device
US20230376427A1 (en) Memory system and computing system including the same
EP4273703A1 (en) Computing system generating map data, and method of operating the same
EP4273702A1 (en) Operating method of memory device for managing map data of each of plurality of storage devices, computing system including memory device, and operating method of computing system
US20130151766A1 (en) Convergence of memory and storage input/output in digital systems
KR20230160673A (ko) 메모리 시스템 및 이를 포함하는 컴퓨팅 시스템
CN102591681A (zh) 计算机设备以及计算机设备的启动方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15911161

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15911161

Country of ref document: EP

Kind code of ref document: A1