WO2017181853A1 - Method, device and system for dynamically allocating memory - Google Patents

Method, device and system for dynamically allocating memory

Info

Publication number
WO2017181853A1
WO2017181853A1 PCT/CN2017/079715
Authority
WO
WIPO (PCT)
Prior art keywords
memory
server
pcie
space
dram
Prior art date
Application number
PCT/CN2017/079715
Other languages
English (en)
Chinese (zh)
Inventor
牛功彪
张夏涛
邹巍
张文涛
蔡进
李舒
Original Assignee
阿里巴巴集团控股有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司 filed Critical 阿里巴巴集团控股有限公司
Publication of WO2017181853A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14Handling requests for interconnection or transfer
    • G06F13/16Handling requests for interconnection or transfer for access to memory bus
    • G06F13/1668Details of memory controller

Definitions

  • the present invention relates to the field of computer technologies, and in particular, to a method, an apparatus, and a system for dynamically allocating memory.
  • DIMM: Dual In-line Memory Module.
  • DIP: Dual In-line Package.
  • Early memory granules were soldered directly to the motherboard, so that if one granule failed, the entire motherboard was scrapped. Later, memory granule sockets appeared on the motherboard, so that the granules could be replaced.
  • The more commonly used memory granules are DRAM (Dynamic Random Access Memory) chips.
  • a DIMM includes one or more DRAM chips on a small integrated circuit board that can be directly connected to a computer motherboard using pins on the board.
  • DIMMs are often used in server systems. However, DIMM capacities are limited: from 8G, 16G, 32G and 64G up to 128G, the capacity of a single module is small, and it lacks flexibility in the face of rapid business changes. The current practice is to insert 16 DIMMs into a single server (single machine), or to extend DIMMs through a riser card, in order to expand the memory capacity.
  • Moreover, because DIMMs are constrained by the CPU's DIMM channels, capacity is limited, which forces servers of different specifications to be provisioned for different capacity requirements, bringing management inconvenience and overhead.
  • One of the technical problems solved by the present invention is to provide a method, device and system for dynamically allocating memory.
  • A method for dynamically allocating memory includes: receiving a memory allocation request of at least one server; determining, according to the memory allocation request and based on a memory pool composed of a plurality of memory granules driven by bus interface standard devices, whether the memory pool has one or more free memory granules whose combined memory satisfies the requested memory size; and, if so, allocating the requested memory to the server.
  • The memory granules comprise DRAM granules, and the method further comprises: converting the DRAM interface of the DRAM granules into a PCIE interface.
  • Converting the DRAM interface of the DRAM granules into a PCIE interface comprises: expanding the capacity of the DRAM interface through a memory buffer; connecting the input of a memory controller to the DRAM granules; and implementing, in the memory controller, conversion logic from the double data rate (DDR) memory process to the PCIE process, so that the output of the memory controller is a PCIE interface.
  • Driving the DRAM granules by the PCIE device comprises: enabling the SRIOV function of the PCIE device; installing the physical function (PF) driver and the virtual function (VF) driver; and implementing a mapping among the PCIE address, the server address and the memory address, and writing the address mapping into the PF driver and the VF driver.
  • The method further includes deploying the memory pool: controlling, by a management unit, the memory space of the memory pool shared by the plurality of servers; running the PF driver in the management unit, so that user spaces correspond to and match the IDs of the virtual function driver spaces; and running the virtual function driver on each server, so that each server finds its own corresponding address space and operates on it.
  • The method further comprises: determining whether the server has finished using the allocated memory space, and if so, releasing the memory space.
  • If the requested memory space is not available, the method further includes: waiting and determining whether there is newly released memory space; if the released memory space satisfies the requested memory requirement, allocating the released memory space to the server.
  • An apparatus for dynamically allocating memory includes: a request receiving unit, configured to receive a memory allocation request of a server; a determining unit, configured to determine, according to the memory allocation request and based on a memory pool composed of a plurality of memory granules driven by bus interface standard devices, whether the memory pool has one or more free memory granules whose combined memory satisfies the requested memory size; and an allocation unit, configured to allocate the requested memory to the server.
  • The memory granules comprise DRAM granules, and the device further comprises an interface conversion unit, configured to expand the capacity of the DRAM interface through the memory buffer, connect the input of the memory controller to the DRAM granules, and implement, in the memory controller, the conversion logic from the DDR memory process to the PCIE process, so that the output of the memory controller is a PCIE interface.
  • The apparatus further includes a driving unit, configured to enable the SRIOV function of the PCIE device; install the PF driver and the VF driver; and implement the mapping among the PCIE address, the server address and the memory address, and write the address mapping into the PF driver and the VF driver.
  • The device further includes a memory pool deployment unit, configured to set a management unit to control the memory space of the memory pool shared by the plurality of servers; run the PF driver in the management unit, so that user spaces correspond to and match the VF space IDs; and run the VF driver on each server, so that each server finds its own corresponding address space and operates on it.
  • The determining unit is further configured to determine whether the server has finished using the allocated memory space; and the device further includes a releasing unit, configured to release the memory space after use.
  • The determining unit is further configured to: if it is determined that the requested memory space is not available, wait and determine whether there is newly released memory space; if the released memory space satisfies the requested memory requirement, instruct the allocation unit to allocate the released memory space to the server.
  • A system for dynamically allocating memory comprises: a memory pool composed of a plurality of DRAM granules driven by PCIE devices; one or more servers; and the above apparatus for dynamically allocating memory.
  • A memory includes a plurality of memory granules, wherein the memory granules are driven by a bus interface standard device.
  • The present invention separates the server from the memory through PCIE by means of a memory pool composed of a plurality of DRAM granules driven by PCIE devices, and can realize dynamic, on-demand allocation of memory to different servers through PCIE switching.
  • In addition, capacity expansion is performed by the memory buffer during the process of converting the interface of the DRAM granules into the PCIE interface.
  • Memory granules are expanded through the memory buffer, and with dynamic, on-demand allocation from the memory pool there is no need to add entire memory modules as with standard memory, so the cost is lower.
  • Unlike existing standard memory, PCIE devices can be hot-swapped, so maintainability is enhanced.
  • FIG. 1 is a flow chart of a method of dynamically allocating memory according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of converting a set of DRAM interfaces into PCIE interfaces in a method for dynamically allocating memory according to an embodiment of the invention
  • FIG. 3 is a schematic diagram of a single-particle DRAM interface converted into a PCIE interface in a method for dynamically allocating memory according to an embodiment of the present invention
  • FIG. 4 is a schematic diagram of a DRAM pool deployment based on a PCIE interface in a method for dynamically allocating memory according to an embodiment of the present invention
  • FIG. 5 is a schematic structural diagram of an apparatus for dynamically allocating memory according to an embodiment of the present invention.
  • the computer device includes a user device and a network device.
  • the user equipment includes, but is not limited to, a computer, a smart phone, a PDA, etc.
  • The network device includes, but is not limited to, a single network server, a server group composed of multiple network servers, or a cloud of network servers based on cloud computing, where cloud computing is a type of distributed computing: a super virtual computer composed of a group of loosely coupled computers.
  • the computer device can be operated separately to implement the present invention, and can also access the network and implement the present invention by interacting with other computer devices in the network.
  • the network in which the computer device is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, and the like.
  • The user equipment, network equipment, networks and the like described above are merely examples; other existing or future computer equipment or networks, if applicable to the present invention, are also included in its scope and are incorporated herein by reference.
  • Memory granules: also known as memory chips; often referred to as memory cells.
  • DRAM granules: DRAM chips used as memory granules.
  • A PCIE (Peripheral Component Interconnect Express) device refers to a device that supports the PCIE protocol.
  • The SRIOV (Single Root I/O Virtualization) feature allows efficient sharing of PCIE devices between devices/virtual machines.
  • VF (Virtual Function) driver is mainly used to discover PCIE devices.
  • the PF (Physical Function) driver is mainly used to manage the correspondence between the address of each user space in the memory and the ID of the VF.
  • FIG. 1 is a flow chart of a method of dynamically allocating memory in accordance with an embodiment of the present invention.
  • the method of this embodiment mainly includes the following steps:
  • S110: Receive a memory allocation request from a server.
  • S120: Determine, according to the memory allocation request and based on a memory pool composed of a plurality of memory granules driven by PCIE devices, whether the memory pool has one or more free memory granules that satisfy the requested memory space.
  • If the requested memory space is available, step S130 (allocating the requested memory to the server) is performed; if not, step S140 (waiting) is performed.
  • S150: Determine whether there is newly released memory space, and whether the released memory space satisfies the requested memory requirement.
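The flow S110–S150 above can be sketched as a small behavioral model. This is an illustrative sketch, not code from the patent: the class name, the granule sizes and the server names are all assumptions made for the example.

```python
# Hypothetical behavioral model of the allocation flow S110-S150.
class MemoryPool:
    def __init__(self, granule_sizes):
        self.free = list(granule_sizes)   # free granules, each a size in GB
        self.allocated = {}               # server -> list of granule sizes

    def request(self, server, size):
        """S110/S120: look for free granules whose sizes sum to >= size."""
        chosen, total = [], 0
        for g in sorted(self.free, reverse=True):
            if total >= size:
                break
            chosen.append(g)
            total += g
        if total < size:
            return False                  # S140: not enough free memory, caller waits
        for g in chosen:                  # S130: allocate the chosen granules
            self.free.remove(g)
        self.allocated.setdefault(server, []).extend(chosen)
        return True

    def release(self, server):
        """Return a server's granules to the pool once it is done with them."""
        self.free.extend(self.allocated.pop(server, []))

pool = MemoryPool([16, 16, 32, 64])       # a 128 GB pool of four granules
assert pool.request("server-a", 40)       # satisfied from free granules
assert not pool.request("server-b", 100)  # S140: must wait
pool.release("server-a")                  # newly released space appears (S150)
assert pool.request("server-b", 100)      # retry now succeeds
```

The greedy largest-first selection here is just one plausible policy; the patent only requires that the combined free granules satisfy the requested size.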
  • The above memory pool is composed of a plurality of memory granules driven by PCIE devices, where the memory granules may be DRAM granules. This means that interface conversion and protocol conversion must be performed on the DRAM granules, the converted DRAM granules must be driven by PCIE devices, and a DRAM pool based on the PCIE interface must be deployed.
  • FIG. 2 is a schematic diagram of converting a set of DRAM interfaces into PCIE interfaces in a method for dynamically allocating memory according to an embodiment of the invention.
  • interface conversion and protocol conversion functions can be implemented by means of the FPGA.
  • the process of transferring a DRAM interface to a PCIE interface includes:
  • Step 1: The capacity of the DRAM interface is expanded through a memory buffer (MEMORY BUFFER).
  • the memory mentioned in the present invention refers to server memory.
  • Server memory is still memory; in appearance and structure it has no obvious substantive difference from ordinary PC memory, but it introduces some new technologies.
  • Server memory can be divided into buffered memory (with a buffer) and unbuffered memory (without one). The buffer is a cache, and its capacity is mostly 64K.
  • ECC: Error Checking and Correcting.
  • Register refers to the register, or directory register. Its role in memory can be understood as that of a book's table of contents: with a register, when the memory receives read and write instructions, this directory is consulted first and only then is the read or write operation performed, which greatly improves the efficiency of server memory. Memory with a register must also have a buffer, and the register memory available at present also has ECC functionality.
  • LRDIMM: Load-Reduced DIMM.
  • The registered DIMMs used by servers boost the supported memory capacity by buffering the signals on the memory module and re-driving them to the memory granules.
  • the LRDIMM memory changes the Register chip on the current RDIMM memory to an iMB (isolation Memory Buffer).
  • the memory isolation buffer chip reduces the load on the memory bus and further increases the memory support capacity accordingly.
  • the DIMMs of the present invention are not limited in type, i.e., the DIMMs of the present invention cover current or future types of DIMMs. Also, the memory capacity is increased by the DIMM's Memory Buffer function.
  • In step (1), the high-speed IO pins of the FPGA are connected to the DIMM interface and these pins are then defined, and the internal logic uses the signals on these high-speed pins to emulate a memory controller inside the FPGA.
  • DRAM has the advantages of low power consumption, high integration (large single-chip capacity), and low price, but the control of DRAM is relatively complicated and requires timing refresh, so it is necessary to design a DRAM controller.
  • FPGA: Field Programmable Gate Array.
  • Memory controller: here, a DRAM controller.
  • Because the FPGA is built in a CMOS process, its power consumption is very small; at the same time, the FPGA can be rewritten, which facilitates performance expansion: if necessary, simply change the internal logic of the FPGA to suit different design or environmental requirements.
  • Other programmable logic devices can also be used, such as a CPLD (Complex Programmable Logic Device), a PLD (Programmable Logic Device), and the like.
  • Step 2: The input of the memory controller is connected to the DRAM granules, so the input of the memory controller is a DDR unit supporting the DDR (double data rate) process.
  • Step 3: The output of the memory controller is a high-speed SERDES (SERializer/DESerializer) supporting NVME (Non-Volatile Memory Express) and PCIE (Peripheral Component Interconnect Express).
  • The serializer/deserializer matches the PCIE interface of the PCIE device.
  • DDR memory is taken as an example for description, which has the advantage of a fast transmission rate.
  • NVMe, like AHCI, is a logical device interface standard; it is a specification for SSDs that use the PCI-E channel. NVMe is designed to take full advantage of the low latency and parallelism of PCI-E SSDs, as well as the parallelism of contemporary processors, platforms and applications, so that the parallelism of the SSD can be fully utilized by the host's hardware and software. Compared with the current AHCI standard, the NVMe standard brings various performance improvements.
  • PCIE is a high-speed serial point-to-point transmission with dual channels and high bandwidth; each connected device is allocated exclusive channel bandwidth and does not share bus bandwidth. PCIE mainly supports functions such as active power management, error reporting, end-to-end reliable transmission, hot swap, and quality of service (QoS).
  • The main advantage of PCIE is its high data transfer rate; for example, the current highest 16X 2.0 version can reach 10 GB/s, and there is considerable potential for development.
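As a rough numeric cross-check of the figure quoted above (assuming, as is conventional, that 10 GB/s refers to the raw signaling rate of a PCIe 2.0 x16 link, before 8b/10b encoding overhead):

```python
# Rough check of the PCIe 2.0 x16 figure cited in the text.
lanes = 16
gt_per_s = 5.0                         # PCIe 2.0 signaling rate per lane (GT/s)
raw_gbit = lanes * gt_per_s            # 80 Gbit/s raw, per direction
raw_gbyte = raw_gbit / 8               # 10 GB/s, the figure cited above
effective_gbyte = raw_gbyte * 8 / 10   # 8b/10b encoding leaves 8 GB/s of payload
assert raw_gbyte == 10.0
assert effective_gbyte == 8.0
```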
  • Converting among the DDR, PCIE and NVME processes/protocols requires implementing a complete set of translation logic in a hardware description language; in this example, it is implemented inside the FPGA.
  • To convert the DRAM interface into a PCIE interface, the FPGA needs to recognize DDR, PCIE and NVME and perform the logic conversion.
  • The FPGA internally includes a DDR unit, an NVMe unit and a PCIE unit: the DDR unit serves as the input, supports the DDR process, and is connected to the unified interface of a group of DRAM granules; the NVMe unit supports the NVMe protocol and connects the DDR unit and the PCIE unit; and the PCIE unit supports the PCIE protocol, serves as the FPGA output, and provides a PCIE interface to the PCIE device.
  • In this way, a set of DRAM interfaces can be converted into a PCIE interface, for example a PCIE X8 interface.
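The data path through the three FPGA units described above can be illustrated with a stage-by-stage behavioral model. This is only a conceptual sketch in software: the real conversion logic is register-transfer hardware, and every class and method name here is an illustrative assumption, not part of the patent.

```python
# Hypothetical model of the FPGA conversion path: PCIE unit <-> NVMe unit <-> DDR unit.
class DramArray:
    """Stands in for the group of DRAM granules behind the DDR unit."""
    def __init__(self):
        self.cells = {}

class DdrUnit:
    """Input side: drives the DRAM granules using the DDR process."""
    def __init__(self, dram):
        self.dram = dram
    def write(self, addr, value):
        self.dram.cells[addr] = value
    def read(self, addr):
        return self.dram.cells.get(addr)

class NvmeUnit:
    """Middle stage: translates NVMe-style commands into DDR operations."""
    def __init__(self, ddr):
        self.ddr = ddr
    def submit(self, cmd):
        op, addr, value = cmd
        if op == "write":
            self.ddr.write(addr, value)
            return None
        return self.ddr.read(addr)

class PcieUnit:
    """Output side: exposes a PCIE-facing interface to the PCIE device."""
    def __init__(self, nvme):
        self.nvme = nvme
    def mem_write(self, addr, value):
        self.nvme.submit(("write", addr, value))
    def mem_read(self, addr):
        return self.nvme.submit(("read", addr, None))

# A PCIE-side write lands in the DRAM granules via the NVMe and DDR stages.
pcie = PcieUnit(NvmeUnit(DdrUnit(DramArray())))
pcie.mem_write(0x1000, 0xAB)
assert pcie.mem_read(0x1000) == 0xAB
```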
  • FIG. 3 is a schematic diagram of a single-granule DRAM interface converted into a PCIE interface in a method for dynamically allocating memory according to an embodiment of the present invention.
  • The difference between the mode of FIG. 3 and the mode of FIG. 2 is that the interface of a single DRAM granule is converted instead of the interface of a group of DRAM granules; both achieve the same purpose.
  • the process of transferring a DRAM interface to a PCIE interface includes:
  • Step 1: The capacity of the DRAM interface is expanded through a memory buffer.
  • Step 2: The input of the memory controller is connected to the DRAM granule, so the input of the memory controller is a DDR unit supporting the DDR (double data rate) protocol.
  • Step 3: The output of the memory controller is a PCIE unit supporting PCIE (Peripheral Component Interconnect Express), matching the PCIE interface of the PCIE device.
  • The above interface conversion can also be implemented by means of a chip.
  • In this case, the conversion of the DRAM interface to the PCIE interface is completed within the DRAM chip package; that is, the DRAM is re-packaged as a chip whose external interface is a PCIE interface.
  • At this point, DRAM granules based on the PCIE interface have been obtained; next, the PCIE-interface-based DRAM needs to be driven, in preparation for sharing the memory capacity among multiple servers, or among multiple virtual machines on a single server.
  • SRIOV technology is a hardware-based virtualization solution that improves performance and scalability.
  • the SRIOV standard allows for efficient sharing of PCIE devices between virtual machines, and it is implemented in hardware to achieve I/O performance comparable to native performance.
  • the SRIOV specification defines a new standard by which new devices are created that allow virtual machines to be directly connected to I/O devices.
  • A single I/O resource can be shared by many virtual machines. Shared devices provide dedicated resources and also use shared common resources, so each virtual machine has access to unique resources. Therefore, a PCIE device with SRIOV enabled, given appropriate hardware and OS support, can appear as multiple separate physical devices, each with its own PCIE configuration space.
  • The two main functions in SRIOV are: (1) Physical Function (PF): a PCI function that supports the SRIOV capability, as defined in the SRIOV specification.
  • the PF contains the SRIOV functional structure for managing SRIOV functions.
  • The PF is a full-featured PCIE function that can be discovered, managed and manipulated like any other PCIE device.
  • The PF has full configuration resources that can be used to configure or control the PCIE device.
  • (2) Virtual Function (VF): a function associated with a physical function.
  • A VF is a lightweight PCIE function that shares one or more physical resources with the physical function and with other VFs associated with the same physical function; a VF is only allowed to configure resources for its own behavior.
  • Driving the PCIE-interface-based DRAM mainly includes the following processes:
  • Enabling the SRIOV capability: this indicates that the PCIE device supports the SRIOV function and can be discovered at the OS level.
  • Installing the PF driver: this driver is mainly used to manage the correspondence between the address of each user space in the memory and the ID of each VF; that is, through the PF, the addresses corresponding to the IDs of all spaces of the PCIE device can be seen and managed.
  • Installing the VF driver: a driver installed in the virtual machine, mainly used to discover PCIE devices.
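The PF/VF bookkeeping described in these processes can be sketched as follows. The table layout, the `assign`/`discover` names and the address values are hypothetical; only the idea (the PF owns the VF-ID-to-address-space table, and each VF sees just its own region) comes from the text.

```python
# Minimal sketch of the PF/VF correspondence table described above.
class PfDriver:
    def __init__(self):
        self.table = {}            # vf_id -> (base_address, size)
        self.next_base = 0x0

    def assign(self, vf_id, size):
        """Map a region of the pool's address space to a VF ID."""
        region = (self.next_base, size)
        self.table[vf_id] = region
        self.next_base += size
        return region

    def region_for(self, vf_id):
        return self.table.get(vf_id)

class VfDriver:
    """Runs on a server; can only see the region matching its own VF ID."""
    def __init__(self, vf_id, pf):
        self.vf_id = vf_id
        self.pf = pf

    def discover(self):
        return self.pf.region_for(self.vf_id)

pf = PfDriver()
pf.assign(vf_id=1, size=0x4000)    # server 1 gets 16 KiB of pool space
pf.assign(vf_id=2, size=0x8000)    # server 2 gets 32 KiB
assert VfDriver(1, pf).discover() == (0x0000, 0x4000)
assert VfDriver(2, pf).discover() == (0x4000, 0x8000)
```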
  • At this point, a memory pool composed of a plurality of PCIE-interface-based DRAM granules has been obtained; the memory pool needs to be properly deployed and managed in order to allocate memory efficiently.
  • the deployment of the DRAM pool based on the PCIE interface mainly includes the following processes:
  • The PF driver runs in the management unit; its role is to match user spaces with the space IDs of the VFs, and this matching can be performed flexibly online.
  • The VF driver runs on each server; each server finds the address space visible to it and can operate on that space.
  • FIG. 4 is a schematic diagram of a DRAM pool deployment based on a PCIE interface in a method for dynamically allocating memory according to an embodiment of the present invention.
  • In FIG. 4, a plurality of servers, a memory pool composed of a plurality of DRAM granules driven by PCIE devices, a management unit, and a PCIE switch are shown.
  • the server includes a PCIE module.
  • the VF driver is run on the server, so that the server discovers its own address space and operates on the address space.
  • The management unit is responsible for managing the allocation of memory-pool memory to the servers, covering three aspects: managing memory already in use; releasing memory after use; and allocating memory not yet in use. Specifically, the management unit allocates the requested memory capacity to a server according to the server's memory allocation request and, after use is complete, releases it and re-allocates it to servers with pending requests.
  • The PCIE switch provides multiple ports to connect multiple memory granules, so the memory space of multiple memory granules can be allocated to a specific server at once.
  • Compared with each server accessing a fixed amount of memory through its slots, this not only expands the memory capacity through the memory buffer, but also realizes on-demand memory allocation.
  • For example, suppose a server has 16 slots. If each memory module has a capacity of 16G, the server has only 256G even when fully populated. If 300G of memory is required, the existing approach can only add another server; although this satisfies the memory requirement, the other resources of the newly added server are wasted.
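The capacity arithmetic in this example can be checked directly:

```python
# The slot arithmetic from the example above.
slots, dimm_gb = 16, 16
max_gb = slots * dimm_gb          # a fully populated single server
assert max_gb == 256
required_gb = 300
assert required_gb > max_gb       # a second server would be needed, wasting
                                  # its CPU and other non-memory resources
```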
  • In the present invention, the memory capacity is expanded through the memory buffer, and a certain number of DRAM granules driven by PCIE devices can be provisioned according to requirements.
  • When a server makes a memory allocation request, memory granules amounting to the requested memory size can be dynamically selected according to the request, and that portion of the memory granules is allocated to the server; after the server has finished using it, the memory space is dynamically released for use by other servers in need.
  • In summary, the present invention converts the interface of the DRAM granules into a PCIE interface through a programmable logic chip (FPGA) or a dedicated chip, and then, through the driver of the PCIE device, maps the PCIE addresses of the allocated portion to memory addresses, thereby achieving the purpose of driving the allocated portion of the memory; through the deployment and management of the memory pool, a server can be allocated the requested amount of memory space, with dynamic management of memory allocation and release.
  • the solution of the invention has at least the following advantages:
  • Memory expansion over PCIE can be achieved through conversion to the PCIE interface.
  • The PCIE device is connected to the PCIE slot of a standard server, so that the server and the memory are decoupled through PCIE.
  • the capacity can be expanded many times compared to the standard memory.
  • the cost is lower than that of the standard memory.
  • Unlike existing standard memory, PCIE devices can be hot-swapped, so maintainability is enhanced.
  • An embodiment of the present invention provides an apparatus for dynamically allocating memory corresponding to the foregoing method.
  • the device includes:
  • the request receiving unit 501 is configured to receive a memory allocation request of the server
  • The determining unit 502 is configured to determine, according to the memory allocation request and based on a memory pool composed of a plurality of memory granules driven by PCIE devices, whether the memory pool has one or more free memory granules whose combined memory satisfies the requested memory size.
  • the allocating unit 503 is configured to allocate the requested memory to the server.
  • the device further comprises:
  • The interface conversion unit 504 is configured to expand the capacity of the DRAM interface through the memory buffer, connect the input of the memory controller to the DRAM granules, and implement, in the memory controller, the conversion logic from the DDR memory process to the PCIE process, so that the output of the memory controller is a PCIE interface.
  • the device further comprises:
  • the driving unit 505 is configured to enable the SRIOV function of the PCIE device; install the PF driver and the VF driver; and implement mapping of the PCIE address, the server address and the memory address, and write the address mapping to the PF driver and the VF driver.
  • the device further comprises:
  • The memory pool deployment unit 506 is configured to set a management unit to control the memory space of the memory pool shared by the plurality of servers; run the PF driver in the management unit, so that user spaces correspond to and match the VF space IDs; and run the VF driver on each server, so that each server finds its own address space and operates on it.
  • The determining unit 502 is further configured to determine whether the server has finished using the allocated memory space;
  • the device further includes a release unit 507 for releasing the used memory space.
  • The determining unit 502 is further configured to: if it is determined that the requested memory space is not available, wait and determine whether there is newly released memory space; if the released memory space satisfies the requested memory requirement, instruct the allocation unit 503 to allocate the released memory space to the server.
  • The present invention also provides a system for dynamically allocating memory, the system comprising: a memory pool composed of a plurality of memory granules driven by PCIE devices; one or more servers; and an apparatus for dynamically allocating memory as shown in FIG. 5 above.
  • The present invention also provides a memory including a plurality of memory granules, wherein the memory granules are driven by a PCIE device.
  • the present invention can be implemented in software and/or a combination of software and hardware, for example, using an application specific integrated circuit (ASIC), a general purpose computer, or any other similar hardware device.
  • the software program of the present invention may be executed by a processor to implement the steps or functions described above.
  • the inventive software program (including related data structures) can be stored in a computer readable recording medium such as a RAM memory, a magnetic or optical drive or a floppy disk and the like.
  • some of the steps or functions of the present invention may be implemented in hardware, for example, as a circuit that cooperates with a processor to perform various steps or functions.
  • A portion of the invention can be embodied as a computer program product, such as computer program instructions which, when executed by a computer, can invoke or provide the method and/or technical solution in accordance with the present invention.
  • the program instructions for invoking the method of the present invention may be stored in a fixed or removable recording medium and/or transmitted by a data stream in a broadcast or other signal bearing medium, and/or stored in a The working memory of the computer device in which the program instructions are run.
  • An embodiment in accordance with the present invention includes a device comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the device is triggered to operate based on the aforementioned methods and/or technical solutions in accordance with the various embodiments of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Dram (AREA)
  • Hardware Redundancy (AREA)

Abstract

Disclosed are a method, a device and a system for dynamically allocating memory. The method comprises: receiving a memory allocation request from a server (S110); determining, according to the memory allocation request and on the basis of a memory pool comprising a plurality of memory chips driven by a PCIE device, whether the total available memory size of one or more of the memory chips satisfies the requested memory size (S120); and if so, allocating the requested memory size to the server (S130). The method, device and system enable dynamic memory allocation for a server.
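The allocation flow summarized in the abstract (steps S110 to S130) can be sketched as follows. This is a minimal illustrative sketch: the class names, fields, and first-fit carving strategy are assumptions for clarity, not the patent's actual implementation.

```python
# Hypothetical sketch of the abstract's allocation flow; names such as
# MemoryChip and MemoryPool are illustrative, not from the patent.

class MemoryChip:
    """One PCIE-device-driven memory chip in the pool."""
    def __init__(self, chip_id, capacity):
        self.chip_id = chip_id
        self.capacity = capacity
        self.allocated = 0

    @property
    def available(self):
        return self.capacity - self.allocated


class MemoryPool:
    def __init__(self, chips):
        self.chips = list(chips)

    def allocate(self, server_id, requested):
        # S120: check whether the total available size of one or more
        # chips satisfies the requested size.
        if sum(c.available for c in self.chips) < requested:
            return None  # request cannot be satisfied
        # S130: carve the requested size out of the chips, possibly
        # spanning several of them (first-fit here, as an assumption).
        grants, remaining = [], requested
        for chip in self.chips:
            if remaining == 0:
                break
            take = min(chip.available, remaining)
            if take:
                chip.allocated += take
                grants.append((chip.chip_id, take))
                remaining -= take
        return {"server": server_id, "grants": grants}


# S110: a memory allocation request arrives from a server.
pool = MemoryPool([MemoryChip("c0", 4096), MemoryChip("c1", 2048)])
lease = pool.allocate("server-42", 5000)
```

A request larger than the pool's remaining capacity is simply refused, which matches the abstract's conditional allocation ("if so, allocate the requested size").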
PCT/CN2017/079715 2016-04-20 2017-04-07 Method, device and system for dynamically allocating memory WO2017181853A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610249100.7 2016-04-20
CN201610249100.7A CN107305506A (zh) Method, apparatus and system for dynamically allocating memory

Publications (1)

Publication Number Publication Date
WO2017181853A1 true WO2017181853A1 (fr) 2017-10-26

Family

ID=60115560

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/079715 WO2017181853A1 (fr) Method, device and system for dynamically allocating memory

Country Status (3)

Country Link
CN (1) CN107305506A (fr)
TW (1) TWI795354B (fr)
WO (1) WO2017181853A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111858038A (zh) * 2020-06-30 2020-10-30 Inspur Electronic Information Industry Co., Ltd. Method, apparatus and medium for reading memory data of an FPGA board
CN113051066A (zh) * 2019-12-27 2021-06-29 Alibaba Group Holding Limited Memory management method, apparatus, device and storage medium
CN113194161A (zh) * 2021-04-26 2021-07-30 Shandong Yingxin Computer Technologies Co., Ltd. Method and apparatus for setting the MMIOH base address of a server system
CN113868155A (zh) * 2021-11-30 2021-12-31 Inspur Suzhou Intelligent Technology Co., Ltd. Memory space expansion method and apparatus, electronic device and storage medium

Families Citing this family (8)

Publication number Priority date Publication date Assignee Title
KR102545228B1 (ko) * 2018-04-18 2023-06-20 SK Hynix Inc. Computing system and data processing system including the same
CN109542346A (zh) * 2018-11-19 2019-03-29 Shenzhen Unionmemory Information System Co., Ltd. Dynamic data cache allocation method, apparatus, computer device and storage medium
CN110704084A (zh) * 2019-09-27 2020-01-17 Shenzhen Unionmemory Information System Co., Ltd. Method, apparatus, computer device and storage medium for dynamic memory allocation during firmware upgrade
CN111212150A (zh) * 2020-04-21 2020-05-29 Chengdu Zhenshi Technology Co., Ltd. Optical fiber reflective shared memory apparatus
CN113672376A (zh) * 2020-05-15 2021-11-19 Zhejiang Uniview Technologies Co., Ltd. Server memory resource allocation method and apparatus, server, and storage medium
CN112817766B (zh) * 2021-02-22 2024-01-30 Beijing QingCloud Technology Co., Ltd. Memory management method, electronic device and medium
CN115480908A (zh) * 2021-06-15 2022-12-16 Huawei Technologies Co., Ltd. Memory pooling method and related apparatus
CN117453385A (zh) * 2022-07-19 2024-01-26 Huawei Technologies Co., Ltd. Memory allocation method and apparatus, and computer

Citations (5)

Publication number Priority date Publication date Assignee Title
US20090249017A1 (en) * 2008-03-31 2009-10-01 Tim Prebble Systems and Methods for Memory Management for Rasterization
CN103593243A (zh) * 2013-11-01 2014-02-19 Inspur Electronic Information Industry Co., Ltd. Dynamically scalable method for increasing virtual machine resources
CN103870333A (zh) * 2012-12-17 2014-06-18 Huawei Technologies Co., Ltd. Global memory sharing method and apparatus, and communication system
CN104793999A (zh) * 2014-01-21 2015-07-22 Aisino Corporation Servo server architecture system
CN105094985A (zh) * 2015-07-15 2015-11-25 Shanghai Xinchu Integrated Circuit Co., Ltd. Low-power data center with a shared memory pool and working method thereof

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US9311122B2 (en) * 2012-03-26 2016-04-12 Oracle International Corporation System and method for providing a scalable signaling mechanism for virtual machine migration in a middleware machine environment
EP3117583A4 (fr) * 2014-03-08 2017-11-01 Diamanti, Inc. Procédés et systèmes pour faire converger un stockage et une mise en réseau

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
US20090249017A1 (en) * 2008-03-31 2009-10-01 Tim Prebble Systems and Methods for Memory Management for Rasterization
CN103870333A (zh) * 2012-12-17 2014-06-18 Huawei Technologies Co., Ltd. Global memory sharing method and apparatus, and communication system
CN103593243A (zh) * 2013-11-01 2014-02-19 Inspur Electronic Information Industry Co., Ltd. Dynamically scalable method for increasing virtual machine resources
CN104793999A (zh) * 2014-01-21 2015-07-22 Aisino Corporation Servo server architecture system
CN105094985A (zh) * 2015-07-15 2015-11-25 Shanghai Xinchu Integrated Circuit Co., Ltd. Low-power data center with a shared memory pool and working method thereof

Cited By (7)

Publication number Priority date Publication date Assignee Title
CN113051066A (zh) * 2019-12-27 2021-06-29 Alibaba Group Holding Limited Memory management method, apparatus, device and storage medium
CN111858038A (zh) * 2020-06-30 2020-10-30 Inspur Electronic Information Industry Co., Ltd. Method, apparatus and medium for reading memory data of an FPGA board
US11687242B1 (en) 2020-06-30 2023-06-27 Inspur Electronic Information Industry Co., Ltd. FPGA board memory data reading method and apparatus, and medium
CN113194161A (zh) * 2021-04-26 2021-07-30 Shandong Yingxin Computer Technologies Co., Ltd. Method and apparatus for setting the MMIOH base address of a server system
CN113194161B (zh) * 2021-04-26 2022-07-08 Shandong Yingxin Computer Technologies Co., Ltd. Method and apparatus for setting the MMIOH base address of a server system
US11847086B2 (en) 2021-04-26 2023-12-19 Shandong Yingxin Computer Technologies Co., Ltd. Method and apparatus for configuring MMIOH base address of server system
CN113868155A (zh) * 2021-11-30 2021-12-31 Inspur Suzhou Intelligent Technology Co., Ltd. Memory space expansion method and apparatus, electronic device and storage medium

Also Published As

Publication number Publication date
TWI795354B (zh) 2023-03-11
TW201738754A (zh) 2017-11-01
CN107305506A (zh) 2017-10-31

Similar Documents

Publication Publication Date Title
WO2017181853A1 (fr) Method, device and system for dynamically allocating memory
US9760497B2 (en) Hierarchy memory management
EP3060993B1 (fr) Système de mémoire cache de niveau final et procédé correspondant
US8996781B2 (en) Integrated storage/processing devices, systems and methods for performing big data analytics
US20220334975A1 (en) Systems and methods for streaming storage device content
US8918568B2 (en) PCI express SR-IOV/MR-IOV virtual function clusters
US10346342B1 (en) Uniform memory access architecture
US20190286559A1 (en) Providing Multiple Memory Modes For A Processor Including Internal Memory
CN110275840B (zh) 在存储器接口上的分布式过程执行和文件系统
US9026698B2 (en) Apparatus, system and method for providing access to a device function
US11029847B2 (en) Method and system for shared direct access storage
EP4123649A1 (fr) Module de mémoire, système le comprenant et procédé de fonctionnement du module de mémoire
US11157191B2 (en) Intra-device notational data movement system
WO2024093517A1 (fr) Procédé de gestion de mémoire et dispositif informatique
US20230350795A1 (en) Dual-port memory module design for composable computing
US20230144038A1 (en) Memory pooling bandwidth multiplier using final level cache system
KR20180023543A (ko) 시리얼 통신으로 메모리를 제공하기 위한 장치 및 방법
US10936219B2 (en) Controller-based inter-device notational data movement system
US20160077959A1 (en) System and Method for Sharing a Solid-State Non-Volatile Memory Resource
US20190303316A1 (en) Hardware based virtual memory management
US20200341925A1 (en) Switch-based inter-device notational data movement system
TW202340931A (zh) 具有雜訊鄰居緩解及動態位址範圍分配的直接交換快取
JP2023527770A (ja) メモリにおける推論

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17785338

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17785338

Country of ref document: EP

Kind code of ref document: A1