TW201738754A - Method, device, and system for dynamically allocating memory - Google Patents

Method, device, and system for dynamically allocating memory

Info

Publication number
TW201738754A
Authority
TW
Taiwan
Prior art keywords
memory
server
pcie
space
particles
Application number
TW106108022A
Other languages
Chinese (zh)
Other versions
TWI795354B (en)
Inventor
Gong-Biao Niu
Xia-Tao Zhang
Wei Zou
Wen-Tao Zhang
Jin Cai
Shu Li
Original Assignee
Alibaba Group Services Ltd
Application filed by Alibaba Group Services Ltd
Publication of TW201738754A
Application granted
Publication of TWI795354B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources to service a request
    • G06F 9/5011 Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F 9/5016 Allocation of resources to service a request, the resource being the memory
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/14 Handling requests for interconnection or transfer
    • G06F 13/16 Handling requests for interconnection or transfer for access to memory bus
    • G06F 13/1668 Details of memory controller

Abstract

A method, device, and system for dynamically allocating memory. The method comprises: receiving a memory allocation request from a server; determining, according to the memory allocation request and on the basis of a memory pool comprising a plurality of memory chips driven by a PCIE apparatus, whether the total available memory size of one or more memory chips satisfies the requested memory size; and if so, allocating the requested memory size to the server. The method, device, and system enable dynamic memory allocation for servers.

Description

Method, device, and system for dynamically allocating memory

The present invention relates to the field of computer technology, and in particular to a method, device, and system for dynamically allocating memory.

In a computer system, memory is one of the key factors that determine the performance of the whole machine. Server systems commonly use DIMMs (Dual In-line Memory Modules) as memory. The memory particles on a DIMM are packaged in DIP (Dual In-line Package) form. Early memory particles were soldered directly onto the motherboard, so if one particle failed, the entire motherboard had to be scrapped. Later, memory sockets appeared on motherboards, so the memory particles could be replaced. The most commonly used memory particles are DRAM (Dynamic Random Access Memory) chips. A DIMM consists of one or more DRAM chips on a small integrated circuit board, and the pins on this board connect it directly to the computer motherboard.

DIMMs are widely used in server systems. DIMM capacity is tightly constrained: single-module capacities run from 8 GB, 16 GB, 32 GB, and 64 GB up to 128 GB, and a single module is relatively small, which lacks flexibility in the face of rapidly changing business needs. The current practice is to populate a single server (a single machine) with 16 DIMMs, or to extend the DIMMs through a RAZER card, so as to expand the memory capacity. However, because DIMMs are constrained by the CPU's DIMM channels, capacity is limited; servers of different specifications are therefore built to meet different capacity requirements, which brings management inconvenience and overhead.

One of the technical problems solved by the present invention is to provide a method, device, and system for dynamically allocating memory.

According to an embodiment of one aspect of the present invention, a method for dynamically allocating memory is provided, comprising: receiving a memory allocation request from at least one server; according to the memory allocation request, and on the basis of a memory pool composed of a plurality of memory particles driven by bus interface standard (PCIE) devices, determining whether the memory pool has one or more free memory particles whose total memory satisfies the requested memory size; and if so, allocating the requested memory to the server.

Preferably, the memory particles comprise DRAM particles, and the method further comprises: converting the DRAM interface of the DRAM particles into a PCIE interface.

Preferably, converting the DRAM interface of the DRAM particles into a PCIE interface comprises: expanding capacity by passing the DRAM interface through a memory buffer; and connecting the input of a memory controller to the DRAM particles and implementing, in the memory controller, the conversion logic from the double data rate (DDR) memory process to the PCIE process, so that the output of the memory controller is a PCIE interface.

Preferably, driving the DRAM particles through a PCIE device comprises: enabling the SRIOV function of the PCIE device; installing a physical function (PF) driver and a virtual function (VF) driver; and implementing the mapping among the PCIE address, the server address, and the memory address, and writing the address mapping into the PF driver and the VF driver.

Preferably, the method further comprises deploying the memory pool: a management unit controls multiple servers sharing the memory space of the memory pool; the PF driver runs in the management unit, so that user spaces are associated and matched with virtual function driver space IDs; and the virtual function driver runs on each server, so that each server discovers its corresponding address space and can operate on it.

Preferably, the method further comprises: determining whether a server has finished using the allocated memory space, and if so, releasing the memory space.

Preferably, if it is determined that the requested memory space is not available, the method further comprises: waiting, and determining whether memory space is newly released; and if the released memory space satisfies the requested memory requirement, allocating the released memory space to the server.

According to an embodiment of one aspect of the present invention, a device for dynamically allocating memory is provided, comprising: a request receiving unit configured to receive a memory allocation request from a server; a judging unit configured to determine, according to the memory allocation request and on the basis of a memory pool composed of a plurality of memory particles driven by bus interface standard (PCIE) devices, whether the memory pool has one or more free memory particles whose total memory satisfies the requested memory size; and an allocating unit configured to allocate the requested memory to the server.

Preferably, the memory particles comprise DRAM particles, and the device further comprises an interface conversion unit configured to expand capacity by passing the DRAM interface through a memory buffer, and to connect the input of a memory controller to the DRAM particles and implement, in the memory controller, the conversion logic from the DDR memory process to the PCIE process, so that the output of the memory controller is a PCIE interface.

Preferably, the device further comprises a driving unit configured to enable the SRIOV function of the PCIE device, install the PF driver and the VF driver, implement the mapping among the PCIE address, the server address, and the memory address, and write the address mapping into the PF driver and the VF driver.

Preferably, the device further comprises a memory pool deployment unit configured to control multiple servers sharing the memory space of the memory pool, to run the PF driver in the management unit so that user spaces are associated and matched with VF space IDs, and to run the VF driver on each server so that each server discovers its corresponding address space and can operate on it.

Preferably, the judging unit is further configured to determine whether a server has finished using the allocated memory space, and the device further comprises a releasing unit configured to release memory space whose use is complete.

Preferably, the judging unit is further configured to wait if it is determined that the requested memory space is not available, and to determine whether memory space is newly released; if the released memory space satisfies the requested memory requirement, the judging unit instructs the allocating unit to allocate the released memory space to the server.

According to an embodiment of one aspect of the present invention, a system for dynamically allocating memory is provided, the system comprising: a memory pool composed of a plurality of DRAM particles driven by PCIE devices; one or more servers; and the device for dynamically allocating memory according to any of the above.

According to an embodiment of another aspect of the present invention, a memory is provided, the memory comprising a plurality of memory particles, wherein the memory particles are driven by bus interface standard (PCIE) devices.

It can be seen that, by means of a memory pool composed of a plurality of DRAM particles driven by PCIE devices, the present invention separates the servers from the memory through PCIE, so that memory can be allocated to different servers dynamically and on demand through PCIE switching. Preferably, in the process of converting the interface of the DRAM particles into a PCIE interface, capacity is expanded through a memory buffer. In addition, because the memory particles are expanded through the memory buffer and the memory pool is allocated dynamically and on demand, there is no need to add whole memory modules as with standard memory, so the cost is lower. Moreover, whereas existing standard memory must be taken offline for maintenance, PCIE devices can be hot-plugged, so maintainability is improved.

Those of ordinary skill in the art will appreciate that, although the following detailed description refers to the illustrated embodiments and the accompanying drawings, the present invention is not limited to these embodiments. Rather, the scope of the present invention is broad, and it is intended that the scope of the present invention be defined only by the appended claims.

501 request receiving unit
502 judging unit
503 allocating unit
504 interface conversion unit
505 driving unit
506 memory pool deployment unit
507 releasing unit

Other features, objects, and advantages of the present invention will become more apparent by reading the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings: FIG. 1 is a flowchart of a method for dynamically allocating memory according to an embodiment of the present invention; FIG. 2 is a schematic diagram of converting a group of DRAM interfaces into a PCIE interface in the method for dynamically allocating memory according to an embodiment of the present invention; FIG. 3 is a schematic diagram of converting a single-particle DRAM interface into a PCIE interface in the method for dynamically allocating memory according to an embodiment of the present invention; FIG. 4 is a schematic diagram of a PCIE-interface-based DRAM pool deployment in the method for dynamically allocating memory according to an embodiment of the present invention; FIG. 5 is a schematic structural diagram of a device for dynamically allocating memory according to an embodiment of the present invention.

Those of ordinary skill in the art will appreciate that, although the following detailed description refers to the illustrated embodiments and the accompanying drawings, the present invention is not limited to these embodiments. Rather, the scope of the present invention is broad, and it is intended that the scope of the present invention be defined only by the appended claims.

Before discussing the exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart describes the operations as a sequential process, many of the operations can be performed in parallel, concurrently, or simultaneously. In addition, the order of the operations may be rearranged. A process may be terminated when its operations are completed, but it may also have additional steps not included in the figures. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, and the like.

The computer equipment includes user equipment and network equipment. The user equipment includes, but is not limited to, computers, smartphones, PDAs, and the like; the network equipment includes, but is not limited to, a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of computers or network servers on the basis of cloud computing, where cloud computing is a kind of distributed computing in which a super virtual computer is composed of a group of loosely coupled computers. The computer equipment may operate alone to implement the present invention, or may access a network and implement the present invention by interacting with other computer equipment in the network. The network in which the computer equipment is located includes, but is not limited to, the Internet, wide area networks, metropolitan area networks, local area networks, VPN networks, and the like.

It should be noted that the user equipment, the network equipment, the networks, and so on are merely examples; other existing or future computer equipment or networks, if applicable to the present invention, should also be included within the protection scope of the present invention and are incorporated herein by reference.

The methods discussed below (some of which are illustrated by flowcharts) may be implemented by hardware, software, firmware, middleware, microcode, a hardware description language, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments used to perform the necessary tasks may be stored in a machine-readable or computer-readable medium (such as a storage medium). One or more processors may perform the necessary tasks.

The specific structural and functional details disclosed herein are merely representative and are for the purpose of describing exemplary embodiments of the present invention. The present invention may, however, be embodied in many alternative forms and should not be construed as being limited only to the embodiments set forth herein.

It should be understood that, although the terms "first", "second", and so on may be used herein to describe various units, these units should not be limited by these terms. These terms are used only to distinguish one unit from another. For example, a first unit could be termed a second unit, and similarly a second unit could be termed a first unit, without departing from the scope of the exemplary embodiments. The term "and/or" used herein includes any and all combinations of one or more of the associated listed items.

It should be understood that, when a unit is referred to as being "connected" or "coupled" to another unit, it can be directly connected or coupled to the other unit, or intervening units may be present. In contrast, when a unit is referred to as being "directly connected" or "directly coupled" to another unit, there are no intervening units. Other words used to describe the relationship between units should be interpreted in a similar manner (for example, "between" versus "directly between", and "adjacent to" versus "directly adjacent to").

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the exemplary embodiments. Unless the context clearly indicates otherwise, the singular forms "a" and "an" used herein are also intended to include the plural. It should also be understood that the terms "comprises" and/or "comprising", as used herein, specify the presence of the stated features, integers, steps, operations, units, and/or elements, but do not preclude the presence or addition of one or more other features, integers, steps, operations, units, elements, and/or combinations thereof.

It should also be mentioned that, in some alternative implementations, the functions/acts mentioned may occur in an order different from that indicated in the figures. For example, two figures shown in succession may in fact be executed substantially concurrently, or may sometimes be executed in the reverse order, depending on the functions/acts involved.

The technical terms used in the embodiments of the present invention are first explained as follows.

Memory particle: also called a memory chip or memory die; it usually refers to a memory unit of a memory module.

DRAM particle: a DRAM chip used as a memory particle.

A PCIE (Peripheral Component Interconnect Express, a bus interface standard) device is a device that supports the PCIE protocol.

SRIOV function: allows PCIE devices to be shared efficiently between devices/virtual machines.

VF (Virtual Function) driver: mainly used to discover PCIE devices.

PF (Physical Function) driver: mainly used to manage the correspondence between the address of each user space of the memory and the ID of a VF.

The technical solution of the present invention is further described in detail below with reference to the accompanying drawings.

FIG. 1 is a flowchart of a method for dynamically allocating memory according to an embodiment of the present invention. The method of this embodiment mainly includes the following steps: S110, receiving a memory allocation request from a server; S120, according to the memory allocation request, and on the basis of a memory pool composed of a plurality of memory particles driven by PCIE devices, determining whether the memory pool has one or more free memory particles that satisfy the requested memory space; if the requested memory space is available, performing step S130; if the requested memory space is not available, performing step S140.

S130, allocating the requested memory space to the server; S140, waiting; S150, determining whether memory space is newly released, and determining whether the released memory space satisfies the requested memory requirement; if the released memory space satisfies the requested memory requirement, performing S160; if not, returning to S140 and continuing to wait.

S160, allocating the released memory space to the server.
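As a rough illustration of the S110 to S160 flow above, the following C sketch models the management-side logic: it sums the capacity of the free memory particles in the pool, hands particles to the requesting server when the sum satisfies the request, and otherwise leaves the caller to wait for a release. This is a minimal sketch under assumed data structures; the pool array, the particle and server identifiers, and the function names are hypothetical and not taken from the patent, and a real implementation would hand out PCIE address ranges rather than entries of an in-process array.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define POOL_SIZE 8   /* number of memory particles in the pool (assumed) */

/* One PCIE-driven memory particle: its capacity and the server it is lent to. */
struct particle {
    uint64_t capacity;   /* bytes provided by this particle    */
    int      owner;      /* -1 when free, else the server's id */
};

static struct particle pool[POOL_SIZE];

/* S120: sum the capacity of the free particles for comparison with the request. */
static uint64_t free_capacity(void)
{
    uint64_t sum = 0;
    for (size_t i = 0; i < POOL_SIZE; i++)
        if (pool[i].owner < 0)
            sum += pool[i].capacity;
    return sum;
}

/* S130/S160: hand free particles to the server until the request is met.
 * Returns true on success, false when the pool cannot satisfy the request. */
static bool allocate(int server_id, uint64_t requested)
{
    if (free_capacity() < requested)
        return false;            /* caller falls back to S140: wait */

    uint64_t granted = 0;
    for (size_t i = 0; i < POOL_SIZE && granted < requested; i++) {
        if (pool[i].owner < 0) {
            pool[i].owner = server_id;
            granted += pool[i].capacity;
        }
    }
    return true;
}

/* Release path used by S150: give back every particle held by the server. */
static void release(int server_id)
{
    for (size_t i = 0; i < POOL_SIZE; i++)
        if (pool[i].owner == server_id)
            pool[i].owner = -1;
}

int main(void)
{
    for (size_t i = 0; i < POOL_SIZE; i++)
        pool[i] = (struct particle){ .capacity = 16ULL << 30, .owner = -1 };

    /* S110: a request arrives; S120-S130: check and allocate. */
    if (allocate(/*server_id=*/1, /*requested=*/40ULL << 30))
        printf("40 GiB granted to server 1\n");

    release(1);                  /* server 1 finished; the space is reusable */
    return 0;
}
```

A fuller management unit would also keep a queue of waiting requests and re-run the check whenever release() is called, which corresponds to steps S140 to S160.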

The above memory pool is composed of a plurality of memory particles driven by PCIE devices, and the memory particles may be DRAM particles. This means that interface conversion and protocol conversion need to be performed on the DRAM particles, that the interface-converted DRAM particles need to be driven through PCIE devices, and that the PCIE-interface-based DRAM pool needs to be deployed.

For a further understanding of the present invention, the solution is described in more detail below from the above aspects.

First, the implementation of converting the DRAM interface into a PCIE interface is described.

FIG. 2 is a schematic diagram of converting a group of DRAM interfaces into a PCIE interface in a method for dynamically allocating memory according to an embodiment of the present invention.

In a specific implementation, the interface conversion and protocol conversion functions can be implemented by means of an FPGA.

The process of converting the DRAM interface into a PCIE interface includes:

(Step 1) The capacity of the DRAM interface is expanded through a MEMORY BUFFER.

The memory referred to in the present invention is server memory. Those skilled in the art understand that server memory is also memory; there is no obvious substantive difference in appearance and structure between it and ordinary PC memory, but it mainly introduces some new technologies.

For example, server memory can be divided into buffered memory, which has a cache, and unbuffered memory, which does not. The buffer is a register, which can also be understood as a cache memory; it is widely used in server and graphics workstation memory, with a capacity of typically 64K, and as memory capacity keeps increasing, the buffer capacity also keeps increasing. Memory with a buffer greatly improves the read and write speed of the memory. Memory with a buffer almost always has an ECC (Error Checking and Correcting) function.

The register (or directory register) on a memory module plays a role that can be understood as the table of contents of a book. With the register, when the memory receives a read or write instruction, it first looks up this table of contents and then performs the read or write operation, which greatly improves the working efficiency of server memory. Memory with a register (or directory register) always has a buffer, and the register memory currently available also has the ECC function.

LRDIMMs (Load-Reduced DIMMs) reduce the load and power consumption of the server memory bus by using new technology and a lower operating voltage, and allow the server memory bus to reach a higher operating frequency while greatly increasing the supported memory capacity.

Compared with ordinary unbuffered DIMMs, the registered DIMMs used in servers increase the supported memory capacity by buffering signals on the memory module and re-driving the memory particles, whereas LRDIMM memory replaces the register chip on current RDIMM memory with an iMB (isolation Memory Buffer) chip to reduce the load on the memory bus and correspondingly increase the supported memory capacity further.

The DIMM in the present invention is not limited in type; that is, the DIMM of the present invention covers current and future types of DIMM. The memory capacity increase is achieved through the Memory Buffer function of the DIMM.

After step (1) above, the high-speed I/O pins of the FPGA are connected to the interface of the DIMM and these pins are defined; the internal logic implementation emulates a MEMORY controller internally from the signals of these high-speed pins.

DRAM has the advantages of low power consumption, high integration (large single-chip capacity), and low price, but controlling DRAM is relatively complicated and requires periodic refresh, so a DRAM controller needs to be designed. An FPGA (Field Programmable Gate Array) provides large-capacity programmable logic, and a DRAM controller (MEMORY controller) that meets specific requirements can be designed with it. Because an FPGA uses a CMOS process, its power consumption is very small; at the same time, an FPGA can be reprogrammed, which facilitates performance expansion. If necessary, only the internal logic of the FPGA needs to be changed to suit different design requirements or environments. Besides an FPGA, the memory controller can also be implemented with other programmable logic devices, such as a CPLD (Complex Programmable Logic Device) or a PLD (Programmable Logic Device).

(Step 2) The input of the MEMORY controller is connected to the DRAM particles, so the input of the MEMORY controller is a DDR unit that supports the DDR (double data rate) process.

(Step 3) The output of the MEMORY controller is a high-speed SERDES (SERializer/DESerializer) that supports NVME (Non-Volatile Memory Express) and PCIE (Peripheral Component Interconnect Express, a bus interface standard), so as to match the PCIE interface of a PCIE device.

In this example, DDR memory is taken as an example; it has the advantage of a high transfer rate.

NVMe, like AHCI, is a logical device interface standard; it is a specification for SSDs that use PCI-E lanes. NVMe was designed from the start to take full advantage of the low latency and parallelism of PCI-E SSDs, as well as the parallelism of contemporary processors, platforms, and applications. The parallelism of SSDs can be fully exploited by the host's hardware and software; compared with the current AHCI standard, the NVMe standard can bring performance improvements in many respects.

PCIE is a high-speed serial, point-to-point, dual-channel, high-bandwidth transmission technology; each connected device is allocated dedicated channel bandwidth rather than sharing the bus bandwidth, and PCIE mainly supports functions such as active power management, error reporting, end-to-end reliable transmission, hot plugging, and quality of service (QoS). The main advantage of PCIE is its high data transfer rate; for example, the current highest 16X 2.0 version can reach 10 GB/s, and there is still considerable potential for development.

Here, a whole set of logic needs to be implemented by logically translating the DDR, PCIE, and NVME processes/protocols in a hardware description language; in this example, it is implemented inside the FPGA. To convert the DRAM interface into a PCIE interface, the FPGA needs to recognize DDR, PCIE, and NVME internally and perform the logic conversion. Referring to FIG. 2, the FPGA internally includes a DDR unit, an NVMe unit, and a PCIE unit, where the DDR unit, as the input, supports the DDR process and is connected to the unified interface of a group of DRAM particles; the NVMe unit supports the NVM protocol and connects the DDR unit and the PCIE unit; and the PCIE unit supports the PCIE protocol and, as the FPGA output, provides the PCIE interface for connecting to a PCIE device.

In the above manner, a group of DRAM interfaces can be converted into a PCIE interface, for example a PCIE X8 interface.

FIG. 3 is a schematic diagram of converting a single-particle DRAM interface into a PCIE interface in a method for dynamically allocating memory according to an embodiment of the present invention. Compared with the approach of FIG. 2, the difference in the approach of FIG. 3 is that the interface of a single DRAM particle is changed rather than the interface of a group of DRAM particles; both approaches achieve the same purpose.

The process of converting the DRAM interface into a PCIE interface includes:

(Step 1) The capacity of the DRAM interface is expanded through a MEMORY BUFFER.

(Step 2) The input of the MEMORY controller is connected to the DRAM particle, so the input of the MEMORY controller is a DDR unit that supports the DDR (double data rate) protocol.

(Step 3) The output of the MEMORY controller is a PCIE unit that supports PCIE (Peripheral Component Interconnect Express), so as to match the PCIE interface of a PCIE device.

In a specific implementation, the above interface conversion can be implemented by means of a chip. The conversion from the DRAM interface to the PCIE interface is completed when the DRAM chip is packaged; that is, after secondary packaging, the DRAM becomes an interface chip whose interface is PCIE.

Next, the implementation details of driving the converted DRAM particles with a PCIE device are described.

After the interface conversion shown in FIG. 2 or FIG. 3 above, DRAM particles based on the PCIE interface have been obtained. Next, the PCIE-interface-based DRAM needs to be driven, thereby preparing for multiple servers to share the memory capacity, or for multiple virtual machines within a single server to share the memory capacity.

Those skilled in the art understand that SRIOV technology is a hardware-based virtualization solution that improves performance and scalability. The SRIOV standard allows PCIE devices to be shared efficiently between virtual machines, and because it is implemented in hardware, I/O performance comparable to native performance can be obtained. The SRIOV specification defines a new standard according to which newly created devices allow virtual machines to be connected directly to I/O devices. A single I/O resource can be shared by many virtual machines. Shared devices provide dedicated resources and also use shared common resources. In this way, each virtual machine has access to unique resources. Therefore, a PCIE device that has SRIOV enabled and has appropriate hardware and OS support can appear as multiple separate physical devices, each with its own PCIE configuration space.

The two main functions in SRIOV are: (1) Physical Function (PF): a PCI function used to support the SRIOV capability, as defined in the SRIOV specification. The PF contains the SRIOV capability structure and is used to manage the SRIOV functionality. The PF is a full-featured PCIE function that can be discovered, managed, and processed like any other PCIE device. The PF has fully configured resources and can be used to configure or control the PCIE device. (2) Virtual Function (VF): a function associated with a physical function. A VF is a lightweight PCIE function that can share one or more physical resources with the physical function and with other VFs associated with the same physical function. A VF is only allowed to have the configuration resources for its own behavior.
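For step (1) of the driving flow described next, on Linux hosts that expose SR-IOV through sysfs, enabling a device's virtual functions typically amounts to writing the desired VF count to the device's sriov_numvfs attribute. The following C sketch shows only that step; the PCI address used here is hypothetical, the attribute requires an SR-IOV-capable device and kernel support, and the patent itself does not prescribe this particular mechanism.

```c
#include <stdio.h>

/* Enable `num_vfs` virtual functions on an SR-IOV-capable PCIE device by
 * writing to its sysfs attribute (Linux-specific; device address assumed).
 * Note: the kernel requires the count to be 0 before it can be changed to
 * a different non-zero value. */
static int enable_sriov(const char *pci_addr, int num_vfs)
{
    char path[256];
    snprintf(path, sizeof(path),
             "/sys/bus/pci/devices/%s/sriov_numvfs", pci_addr);

    FILE *f = fopen(path, "w");
    if (!f) {
        perror("open sriov_numvfs");
        return -1;
    }
    fprintf(f, "%d\n", num_vfs);   /* the OS can then discover the VFs */
    fclose(f);
    return 0;
}

int main(void)
{
    return enable_sriov("0000:81:00.0", 8) == 0 ? 0 : 1;  /* address assumed */
}
```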

In the present invention, driving the PCIE-interface-based DRAM mainly includes the following flow:

(1) First, enable the SRIOV function of the PCIE hardware.

Enabling this function indicates that the PCIE device supports the SRIOV capability, so that this functional device can be discovered at the OS level.

(2) PF driver.

A driver installed in the server or in the management machine; this driver is mainly used to manage the correspondence between the address of each user space of the memory and the ID of a VF. That is, the PF can see the addresses corresponding to the IDs of the whole space of the PCIE device and can manage them.

(3) VF driver.

A driver installed in a virtual machine, mainly used to discover PCIE devices.

(4) Address mapping.

This mainly implements the mapping among the PCIE address, the server (virtual machine) address, and the memory address.

(5) The memory address mapping is completed when the driver is loaded.

The above address mappings are completed when the driver is loaded, so that this PCIE space can be seen in memory.
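The address-mapping steps (4) and (5) above can be pictured as a small table, kept conceptually by the PF driver, that records for each VF space ID which server-visible address range, PCIE window, and pool memory address belong together; during deployment the management unit can re-match these pairings online. The following C sketch is only an illustration of that bookkeeping under assumed structure and function names; it does not reproduce actual PF/VF driver interfaces.

```c
#include <stdint.h>
#include <stdio.h>

#define MAX_MAPPINGS 16

/* One mapping entry as maintained conceptually by the PF driver: a VF space
 * ID ties a server-visible address range to a PCIE window and to the
 * backing address inside the memory pool. */
struct addr_map {
    int      vf_id;        /* VF space ID seen by one server/VM     */
    uint64_t server_addr;  /* base address as seen by the server/VM */
    uint64_t pcie_addr;    /* base of the PCIE window for that VF   */
    uint64_t pool_addr;    /* base address inside the DRAM pool     */
    uint64_t length;       /* size of the mapped window in bytes    */
};

static struct addr_map table[MAX_MAPPINGS];
static int entries;

/* Recorded when the driver is loaded, so the PCIE space shows up as memory. */
static int map_window(int vf_id, uint64_t server_addr,
                      uint64_t pcie_addr, uint64_t pool_addr, uint64_t length)
{
    if (entries == MAX_MAPPINGS)
        return -1;
    table[entries++] = (struct addr_map){ vf_id, server_addr,
                                          pcie_addr, pool_addr, length };
    return 0;
}

/* Translate a server-side address into the pool address behind it. */
static int translate(int vf_id, uint64_t server_addr, uint64_t *pool_addr)
{
    for (int i = 0; i < entries; i++) {
        const struct addr_map *m = &table[i];
        if (m->vf_id == vf_id &&
            server_addr >= m->server_addr &&
            server_addr <  m->server_addr + m->length) {
            *pool_addr = m->pool_addr + (server_addr - m->server_addr);
            return 0;
        }
    }
    return -1;  /* not mapped for this VF */
}

int main(void)
{
    /* Example: VF 3 exposes a 4 GiB window starting at 0x1_0000_0000. */
    map_window(3, 0x100000000ULL, 0xd0000000ULL, 0x40000000ULL, 4ULL << 30);

    uint64_t pool;
    if (translate(3, 0x100001000ULL, &pool) == 0)
        printf("pool address: 0x%llx\n", (unsigned long long)pool);
    return 0;
}
```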

Finally, the implementation details of deploying the PCIE-interface-based DRAM pool are described.

After the above interface conversion and the driving of the PCIE-interface-based DRAM are completed, a memory pool composed of a plurality of PCIE-interface-based DRAM particles is obtained. The memory pool then needs to be deployed and managed properly, so that memory is allocated efficiently and multiple servers can share the same memory pool.

Deploying the PCIE-interface-based DRAM pool mainly includes the following flow:

(1) Multiple physical machines share the memory space of the memory pool.

Multiple hosts share the storage space in this memory pool.

(2) The management unit manages the running of the PF driver.

What runs in the management unit is the PF driver, whose role is to associate user spaces with the space IDs of the VFs and to perform flexible online matching.

(3) Run the VF driver on each SERVER:

After the above preparation is completed, the VF runs on each server; each server discovers the address space it can see and can operate on that space.

(4) The memory space allocation on each SERVER is managed uniformly by the management unit and is allocated flexibly online.

FIG. 4 is a schematic diagram of a PCIE-interface-based DRAM pool deployment in a method for dynamically allocating memory according to an embodiment of the present invention.

FIG. 4 shows multiple servers, a memory pool composed of a plurality of DRAM particles driven by PCIE devices, a management unit, and a PCIE switch. Each server includes a PCIE module; as described above, the VF driver runs on the server, so that the server discovers its own address space and operates on that address space.

The management unit is responsible for managing the servers' memory allocation from the memory pool, which covers three aspects: managing the memory already in use; releasing memory whose use is complete; and allocating unused memory. Specifically, according to a server's memory allocation request, the management unit allocates to the server the memory capacity required by the request; after the use is complete, the memory is released and reallocated to a server that has a pending request.

The PCIE switch can provide multiple ports to connect multiple memory particles, so the memory space of multiple memory particles can be allocated to a specific server at one time.

Compared with the prior-art approach in which each server accesses a fixed amount of memory through its slots, not only is the memory capacity expanded through the memory buffer, but memory can also be allocated on demand. For example, a server has 16 slots; if the capacity of each memory module is 16 GB, the server has only 256 GB of capacity even when fully populated. Suppose that 300 GB of memory is now needed: under the existing approach the only option is to add another server, which meets the memory requirement but wastes the other resources of the newly added server. With the approach of the present invention, the memory capacity is expanded through the memory buffer, and a certain number of DRAM particles driven by PCIE devices can be provided as needed; when a server issues a memory allocation request, a number of memory particles satisfying the requested memory size are selected dynamically according to the requested size, and the memory of these particles is allocated to the server. After the server finishes using it, this memory space is dynamically released for use by other servers that need it.

It can be seen that the present invention converts the interface of the DRAM particles into a PCIE interface through a programmable logic chip (FPGA) or a dedicated chip, thereby realizing the conversion from the DRAM interface to the PCIE interface; then, through the driver of the PCIE device, after the allocated PCIE device is assigned to a specific computing node, the PCIE address of that portion is mapped into the memory address, thereby achieving the purpose of driving the memory of the allocated portion. Through the deployment and management of the memory pool, a requested amount of memory space can be allocated to a server, and the allocation and release of memory can be managed dynamically.

Compared with the prior art, the solution of the present invention has at least the following advantages:

(1) Memory is allocated on demand.

Through a memory pool composed of DRAM particles driven by PCIE devices, memory can be allocated to different servers dynamically and on demand through PCIE switching.

(2) The memory is separated from the server.

Through the conversion to the PCIE interface, PCIE expansion can be realized; the PCIE is connected to the PCIE SLOT of a standard server, so that the server and the memory are separated through PCIE.

(3) Large-capacity memory is realized.

Through the expansion provided by the memory BUFFER, the capacity can be increased many times compared with standard memory.

(4) Costs are saved.

Because the memory particles are expanded through the memory buffer, and the memory pool is allocated dynamically and on demand, the cost is lower than that of standard memory.

(5) Maintainability is improved.

Whereas existing standard memory must be taken offline for maintenance, PCIE devices can be hot-plugged, so maintainability is improved.

An embodiment of the present invention provides a device for dynamically allocating memory corresponding to the above method. Referring to FIG. 5, the device includes: a request receiving unit 501, configured to receive a memory allocation request from a server; a judging unit 502, configured to determine, according to the memory allocation request and on the basis of a memory pool composed of a plurality of memory particles driven by PCIE devices, whether the memory pool has one or more free memory particles whose total memory satisfies the requested memory size; and an allocating unit 503, configured to allocate the requested memory to the server.

Preferably, the device further includes an interface conversion unit 504, configured to expand capacity by passing the DRAM interface through a memory buffer, and to connect the input of a memory controller to the DRAM particles and perform, in the memory controller, the conversion logic from the DDR memory process to the PCIE process, so that the output of the memory controller is a PCIE interface.

Preferably, the device further includes a driving unit 505, configured to enable the SRIOV function of the PCIE device, install the PF driver and the VF driver, implement the mapping among the PCIE address, the server address, and the memory address, and write the address mapping into the PF driver and the VF driver.

Preferably, the device further includes a memory pool deployment unit 506, configured to control multiple servers sharing the memory space of the memory pool, to run the PF driver in the management unit so that user spaces are associated and matched with VF space IDs, and to run the VF driver on each server so that each server discovers its corresponding address space and can operate on it.

Preferably, the judging unit 502 is further configured to determine whether a server has finished using the allocated memory space, and the device further includes a releasing unit 507, configured to release memory space whose use is complete.

Preferably, the judging unit 502 is further configured to wait if it is determined that the requested memory space is not available, and to determine whether memory space is newly released; if the released memory space satisfies the requested memory requirement, the judging unit instructs the allocating unit 503 to allocate the released memory space to the server.

In addition, the present invention further provides a system for dynamically allocating memory, the system including: a memory pool composed of a plurality of memory particles driven by PCIE devices; one or more servers; and the device for dynamically allocating memory shown in FIG. 5 as described above.

In addition, the present invention further provides a memory, the memory including a plurality of memory particles, wherein the memory particles are driven by a PCIE device.

It should be noted that the present invention can be implemented in software and/or in a combination of software and hardware; for example, it can be implemented using an application-specific integrated circuit (ASIC), a general-purpose computer, or any other similar hardware device. In one embodiment, the software program of the present invention can be executed by a processor to implement the steps or functions described above. Likewise, the software program of the present invention (including related data structures) can be stored in a computer-readable recording medium, for example, RAM memory, a magnetic or optical drive, a floppy disk, or a similar device. In addition, some steps or functions of the present invention can be implemented in hardware, for example, as a circuit that cooperates with a processor to perform the steps or functions.

In addition, a part of the present invention can be applied as a computer program product, for example computer program instructions that, when executed by a computer, can invoke or provide the method and/or technical solution according to the present invention through the operation of the computer. The program instructions that invoke the method of the present invention may be stored in a fixed or removable recording medium, and/or transmitted via a data stream in a broadcast or other signal-bearing medium, and/or stored in the working memory of a computer device that runs according to the program instructions. Here, an embodiment according to the present invention includes a device that includes a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the device is triggered to run the methods and/or technical solutions based on the foregoing embodiments of the present invention.

It is apparent to those skilled in the art that the present invention is not limited to the details of the above exemplary embodiments, and that the present invention can be implemented in other specific forms without departing from the spirit or essential characteristics of the invention. Therefore, from whatever point of view, the embodiments are to be regarded as exemplary and non-limiting; the scope of the present invention is defined by the appended claims rather than by the above description, and all changes that come within the meaning and range of equivalents of the claims are therefore intended to be embraced in the present invention. Any reference sign in the claims should not be construed as limiting the claim involved. In addition, it is clear that the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or devices recited in the system claims may also be implemented by one unit or device through software or hardware. Words such as first and second are used to denote names and do not denote any particular order.

Claims (15)

一種動態分配記憶體的方法,其包括:接收伺服器的記憶體分配請求;根據該記憶體分配請求,基於由多個經匯流排界面標準設備驅動的儲存顆粒組成的記憶體池,判斷該記憶體池是否具有一個或多個空閒的儲存顆粒的記憶體總和滿足所請求的記憶體大小;若是,將所請求的記憶體分配給該伺服器。 A method for dynamically allocating memory, comprising: receiving a memory allocation request of a server; and determining, according to the memory allocation request, the memory based on a memory pool composed of a plurality of storage particles driven by a standard device of the bus interface interface Whether the body pool has a memory sum of one or more free storage particles satisfies the requested memory size; if so, the requested memory is allocated to the server. 如申請專利範圍第1項所述的方法,其中,該儲存顆粒包括DRAM顆粒,該方法還包括:將DRAM顆粒的DRAM介面轉換成PCIE介面。 The method of claim 1, wherein the storage particles comprise DRAM particles, the method further comprising: converting the DRAM interface of the DRAM particles into a PCIE interface. 如申請專利範圍第2項所述的方法,其中,所述將DRAM顆粒的DRAM介面轉換成PCIE介面,包括:將DRAM介面通過記憶體緩衝進行容量擴展;將記憶體控制器的輸入連接DRAM顆粒,在記憶體控制器進行雙倍速率DDR記憶體進程到PCIE進程的轉換邏輯,使記憶體控制器的輸出為PCIE介面。 The method of claim 2, wherein the converting the DRAM interface of the DRAM particles into the PCIE interface comprises: expanding the capacity of the DRAM interface through the memory buffer; and connecting the input of the memory controller to the DRAM particles. In the memory controller, the conversion logic of the double-rate DDR memory process to the PCIE process is performed, so that the output of the memory controller is the PCIE interface. 如申請專利範圍第1項所述的方法,其中,通過PCIE設備驅動DRAM顆粒包括:使能PCIE設備的SRIOV功能;安裝物理功能PF驅動和虛擬功能VF驅動;實現PCIE位址、伺服器位址與記憶體位址的映射,並將位址映射寫入該PF驅動和VF驅動。 The method of claim 1, wherein the driving the DRAM particles by the PCIE device comprises: enabling the SRIOV function of the PCIE device; installing the physical function PF driver and the virtual function VF driver; implementing the PCIE address and the server address Mapping to the memory address and writing the address map to the PF driver and VF driver. 如申請專利範圍第4項所述的方法,其中,還包 括:對記憶體池進行部署:由一管理單元控制多台伺服器共用記憶體池的記憶體空間;在該管理單元中運行該PF驅動,從而將使用者空間與虛擬功能驅動空間ID對應並進行匹配;在各伺服器上運行該虛擬功能驅動,從而使伺服器發現自身對應的位址空間並進行操作。 The method of claim 4, wherein Include: arranging the memory pool: a management unit controls a plurality of servers to share the memory space of the memory pool; running the PF driver in the management unit to associate the user space with the virtual function driving space ID and Matching is performed; the virtual function driver is run on each server, so that the server finds its corresponding address space and operates. 如申請專利範圍第1至5項任一項所述的方法,其中,還包括:判斷伺服器是否使用完成所分配的記憶體空間,若是,釋放該記憶體空間。 The method of any one of claims 1 to 5, further comprising: determining whether the server uses the allocated memory space, and if so, releasing the memory space. 如申請專利範圍第6項所述的方法,其中,若判斷不具有所請求的記憶體空間,該方法還包括:等待,並判斷是否有新釋放的記憶體空間;若釋放的記憶體空間滿足所請求的記憶體要求,則將釋放的記憶體空間分配給該伺服器。 The method of claim 6, wherein if it is determined that the requested memory space is not available, the method further comprises: waiting, and determining whether there is a newly released memory space; if the released memory space is satisfied The requested memory request allocates the freed memory space to the server. 
8. An apparatus for dynamically allocating memory, comprising: a request receiving unit configured to receive a memory allocation request from a server; a judgment unit configured to determine, according to the memory allocation request and on the basis of a memory pool composed of a plurality of memory chips driven by a bus-interface-standard device, whether the total free memory of one or more idle memory chips in the memory pool satisfies the requested memory size; and an allocation unit configured to allocate the requested memory to the server.

9. The apparatus of claim 8, wherein the memory chips comprise DRAM chips, and the apparatus further comprises an interface conversion unit configured to expand the capacity of a DRAM interface through a memory buffer, connect an input of a memory controller to the DRAM chips, and perform, in the memory controller, conversion logic from the DDR memory protocol to the PCIE protocol, so that an output of the memory controller is a PCIE interface.

10. The apparatus of claim 8, further comprising a driving unit configured to enable an SR-IOV function of a PCIE device, install a PF driver and a VF driver, establish a mapping among PCIE addresses, server addresses, and memory addresses, and write the address mapping into the PF driver and the VF driver.

11. The apparatus of claim 10, further comprising a memory pool deployment unit configured to control a plurality of servers to share the memory space of the memory pool, run the PF driver in the management unit so as to associate and match user space with VF space IDs, and run the VF driver on each server so that each server discovers and operates on its own corresponding address space.

12. The apparatus of any one of claims 8 to 10, wherein the judgment unit is further configured to determine whether the server has finished using the allocated memory space, and the apparatus further comprises a release unit configured to release the memory space whose use has been completed.

13. The apparatus of claim 12, wherein the judgment unit is further configured to, when it is determined that the requested memory space is not available, wait and determine whether any memory space is newly released, and, if the released memory space satisfies the requested memory requirement, instruct the allocation unit to allocate the released memory space to the server.

14. A system for dynamically allocating memory, comprising: a memory pool composed of a plurality of memory chips driven by a bus-interface-standard device; one or more servers; and the apparatus for dynamically allocating memory of any one of claims 8 to 13.

15. A memory, comprising a plurality of memory chips, wherein the memory chips are driven by a bus-interface-standard device.
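To make the allocation flow recited in claims 1, 6, and 7 (and mirrored by the units of claims 8, 12, and 13) easier to follow, the following is a minimal sketch in Python of a pool manager that tracks free capacity per memory chip, grants a request when the combined free space suffices, and otherwise waits for space to be released. It models only the claimed allocate/wait/release behaviour, not the PCIE hardware path; the class name, method names, and chip sizes are assumptions made for illustration and are not part of the patent disclosure.

```python
# Behavioral sketch only: models the claimed allocate/wait/release flow,
# not the PCIE data path. Names and sizes are illustrative assumptions.
import threading


class MemoryPool:
    def __init__(self, chip_sizes):
        # chip_id -> free bytes remaining on that memory chip
        self.free = dict(enumerate(chip_sizes))
        self.grants = {}                      # server_id -> [(chip_id, bytes)]
        self.cond = threading.Condition()

    def allocate(self, server_id, size, timeout=None):
        """Grant `size` bytes spread over one or more chips, or wait for
        newly released space (claims 1 and 7)."""
        with self.cond:
            ok = self.cond.wait_for(lambda: sum(self.free.values()) >= size,
                                    timeout=timeout)
            if not ok:
                return None                   # request cannot be satisfied yet
            remaining, grant = size, []
            for chip, avail in self.free.items():
                if remaining == 0:
                    break
                take = min(avail, remaining)
                if take:
                    self.free[chip] -= take
                    grant.append((chip, take))
                    remaining -= take
            self.grants.setdefault(server_id, []).extend(grant)
            return grant

    def release(self, server_id):
        """Return a server's space to the pool once it is done (claim 6)."""
        with self.cond:
            for chip, size in self.grants.pop(server_id, []):
                self.free[chip] += size
            self.cond.notify_all()            # wake any waiting requests


if __name__ == "__main__":
    pool = MemoryPool([8 << 30, 8 << 30])     # two 8 GiB chips (assumed sizes)
    print(pool.allocate("server-A", 12 << 30))
    pool.release("server-A")
```

A request larger than the current free total simply blocks until a release wakes it, which corresponds to the "wait and check for newly released space" step of claim 7.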
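Claims 4 and 5 (and the mirrored apparatus claims 10 and 11) describe enabling SR-IOV on the PCIE device, installing PF/VF drivers, and recording a mapping among PCIE addresses, server addresses, and memory addresses. The sketch below shows one plausible shape of such a deployment step on Linux: the management unit writes the desired VF count to the standard sriov_numvfs sysfs attribute of a PCIE device and builds a table that associates each server with a VF space ID and an address window. The device path 0000:3b:00.0, the window size, and the table layout are hypothetical; the patent does not specify them.

```python
# Management-unit sketch: enable SR-IOV and build the address-mapping table
# that the PF/VF drivers would consume (claims 4-5). The PCIE device address
# and all sizes are assumptions for illustration.
from dataclasses import dataclass
from pathlib import Path

PF_DEVICE = Path("/sys/bus/pci/devices/0000:3b:00.0")  # hypothetical PF location


@dataclass
class Mapping:
    server_id: str        # which server owns this window
    vf_space_id: int      # VF through which the server's traffic is routed
    pcie_base: int        # offset inside the device's PCIE address space
    memory_base: int      # offset inside the pooled DRAM
    size: int             # window size in bytes


def enable_sriov(num_vfs: int) -> None:
    # Standard Linux interface: writing N creates N virtual functions.
    (PF_DEVICE / "sriov_numvfs").write_text(str(num_vfs))


def build_mappings(servers, window=4 << 30):
    # One fixed-size window per server; a real PF driver would persist this
    # table so each VF driver can discover its own address space (claim 5).
    return [
        Mapping(server_id=s, vf_space_id=i,
                pcie_base=i * window, memory_base=i * window, size=window)
        for i, s in enumerate(servers)
    ]


if __name__ == "__main__":
    enable_sriov(num_vfs=4)                       # requires root; illustrative
    for m in build_mappings(["srv-0", "srv-1", "srv-2"]):
        print(m)
```

In the claimed arrangement the PF driver in the management unit and the VF driver on each server would consume such a table so that a server only sees the window mapped to its own VF; the sketch stops at constructing the mapping.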
TW106108022A 2016-04-20 2017-03-10 Method, device and system for dynamically allocating memory TWI795354B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610249100.7 2016-04-20
CN201610249100.7A CN107305506A (en) 2016-04-20 2016-04-20 The method of dynamic assigning memory, apparatus and system

Publications (2)

Publication Number Publication Date
TW201738754A (en) 2017-11-01
TWI795354B TWI795354B (en) 2023-03-11

Family

ID=60115560

Family Applications (1)

Application Number Title Priority Date Filing Date
TW106108022A TWI795354B (en) 2016-04-20 2017-03-10 Method, device and system for dynamically allocating memory

Country Status (3)

Country Link
CN (1) CN107305506A (en)
TW (1) TWI795354B (en)
WO (1) WO2017181853A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI811269B (en) * 2018-04-18 2023-08-11 韓商愛思開海力士有限公司 Computing system and data processing system including a computing system

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109542346A (en) * 2018-11-19 2019-03-29 深圳忆联信息系统有限公司 Dynamic data cache allocation method, device, computer equipment and storage medium
CN110704084A (en) * 2019-09-27 2020-01-17 深圳忆联信息系统有限公司 Method and device for dynamically allocating memory in firmware upgrade, computer equipment and storage medium
CN111212150A (en) * 2020-04-21 2020-05-29 成都甄识科技有限公司 Optical fiber reflection shared memory device
CN113672376A (en) * 2020-05-15 2021-11-19 浙江宇视科技有限公司 Server memory resource allocation method and device, server and storage medium
CN111858038A (en) * 2020-06-30 2020-10-30 浪潮电子信息产业股份有限公司 Method, device and medium for reading memory data of FPGA (field programmable Gate array) board card
CN112817766B (en) * 2021-02-22 2024-01-30 北京青云科技股份有限公司 Memory management method, electronic equipment and medium
CN113194161B (en) * 2021-04-26 2022-07-08 山东英信计算机技术有限公司 Method and device for setting MMIO base address of server system
CN115480908A (en) * 2021-06-15 2022-12-16 华为技术有限公司 Memory pooling method and related device
CN113868155B (en) * 2021-11-30 2022-03-08 苏州浪潮智能科技有限公司 Memory space expansion method and device, electronic equipment and storage medium
CN117453385A (en) * 2022-07-19 2024-01-26 华为技术有限公司 Memory allocation method, device and computer

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8782371B2 (en) * 2008-03-31 2014-07-15 Konica Minolta Laboratory U.S.A., Inc. Systems and methods for memory management for rasterization
US9311122B2 (en) * 2012-03-26 2016-04-12 Oracle International Corporation System and method for providing a scalable signaling mechanism for virtual machine migration in a middleware machine environment
CN103870333B (en) * 2012-12-17 2017-08-29 华为技术有限公司 A kind of global memory's sharing method, device and a kind of communication system
CN103593243B (en) * 2013-11-01 2017-05-10 浪潮电子信息产业股份有限公司 Dynamic extensible trunked system for increasing virtual machine resources
CN104793999A (en) * 2014-01-21 2015-07-22 航天信息股份有限公司 Servo server framework system
CA2941702A1 (en) * 2014-03-08 2015-09-17 Diamanti, Inc. Methods and systems for converged networking and storage
CN105094985A (en) * 2015-07-15 2015-11-25 上海新储集成电路有限公司 Low-power-consumption data center for sharing memory pool and working method thereof

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI811269B (en) * 2018-04-18 2023-08-11 韓商愛思開海力士有限公司 Computing system and data processing system including a computing system
US11768710B2 (en) 2018-04-18 2023-09-26 SK Hynix Inc. Computing system and data processing system including a computing system
US11829802B2 (en) 2018-04-18 2023-11-28 SK Hynix Inc. Computing system and data processing system including a computing system

Also Published As

Publication number Publication date
TWI795354B (en) 2023-03-11
WO2017181853A1 (en) 2017-10-26
CN107305506A (en) 2017-10-31

Similar Documents

Publication Publication Date Title
TWI795354B (en) Method, device and system for dynamically allocating memory
US9760497B2 (en) Hierarchy memory management
US10339047B2 (en) Allocating and configuring persistent memory
US20180004659A1 (en) Cribbing cache implementing highly compressible data indication
US20220334975A1 (en) Systems and methods for streaming storage device content
US8918568B2 (en) PCI express SR-IOV/MR-IOV virtual function clusters
EP3060993A1 (en) Final level cache system and corresponding method
CN110275840B (en) Distributed process execution and file system on memory interface
US11029847B2 (en) Method and system for shared direct access storage
US11157191B2 (en) Intra-device notational data movement system
US20230144038A1 (en) Memory pooling bandwidth multiplier using final level cache system
WO2018113030A1 (en) Technology to implement bifurcated non-volatile memory express driver
WO2021139733A1 (en) Memory allocation method and device, and computer readable storage medium
US10936219B2 (en) Controller-based inter-device notational data movement system
US11281612B2 (en) Switch-based inter-device notational data movement system
US20220229575A1 (en) Dynamic multilevel memory system
US20230139729A1 (en) Method and apparatus to dynamically share non-volatile cache in tiered storage
US20230359578A1 (en) Computing system including cxl switch, memory device and storage device and operating method thereof
TW202340931A (en) Direct swap caching with noisy neighbor mitigation and dynamic address range assignment
US20190303316A1 (en) Hardware based virtual memory management
KR102353930B1 (en) Disaggregated memory appliance